Step 1. Change hostname
- edit /etc/hostname
- edit /etc/hosts
- run hostname -F /etc/hostname
- check the new hostname by running the hostname command
NOTICE: Renaming a user requires being logged in as root. To do this you can:
- set a root password: sudo passwd root
- log in as root and proceed with step 2
- after step 2, revert the root account:
  - remove the password: passwd -d root
  - lock the account: passwd -l root
Step 2. Rename user
- run usermod -l newuser -d /home/newuser -m olduser
- run groupmod -n newgroup oldgroup
- check and edit /etc/subuid and /etc/subgid
Step 3. Rebuild SSH host keys
- run
rm -v /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
Using vmkfstools to rename a virtual disk:
vmkfstools -E oldname.vmdk newname.vmdk
su (for root access)
wm size 900x1440 (match your screen ratio; mine is 16:10)
wm density 320 (don't jump straight to a high value; increase gradually)
reboot (first check that you are satisfied with your settings, then reboot)
Reset commands:
wm size reset
wm density reset
reboot
adb shell ps
To obtain a PID:
adb shell ps | grep com.package.name | tr -s '[:space:]' ' ' | cut -d' ' -f2
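The pipeline above can be checked without a device, on a canned line shaped like `adb shell ps` output (the values below are made up for the demo):

```shell
# A canned line in the format `adb shell ps` prints (values are made up):
line="u0_a123   4242  1337  123456 65432 SyS_epoll_ 00000000 S com.package.name"
# Squeeze runs of whitespace into single spaces, then take field 2 (the PID)
pid=$(echo "$line" | tr -s '[:space:]' ' ' | cut -d' ' -f2)
echo "$pid"   # 4242
```

Quoting '[:space:]' matters: unquoted, the shell may glob it against files in the current directory.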
adb -d logcat <your package name>:<log level> *:S
Example:
adb -d logcat com.example.example:I *:S
-d denotes device | -e denotes emulator
List packages
pm list packages
Example output:
..
package:com.nextcloud.client
package:cz.hipercalc
package:com.google.android.youtube
Unlike adb shell, the adb exec-out command doesn't use a pty, which would mangle binary output.
To backup and compress a directory:
adb exec-out "tar -zcf - /system 2>/dev/null" > system.tar.gz
Note that if you are using this technique for a command that produces output on STDERR, you should redirect it to /dev/null, otherwise adb will include STDERR in its STDOUT corrupting your output.
adb exec-out screencap -p > test.png
Via adb utility:
# Backup
adb [-d | -s devicename] backup -f /path/backup_file the.package.name
# Full backup
adb [-d] backup -apk -shared -all -f /path/backup_file
# Restore
adb [-d | -s devicename] restore -f /tmp/backup_file
To find package names, you can use pm list packages.
Extract Android backup files using Android Backup Extractor:
# Archlinux package is available via AUR: pacaur -S android-backup-extractor-git
abe unpack /tmp/backup_file /tmp/backup_file.tar
Env vars:
export http_proxy=http://address:port
export https_proxy=https://address:port
# for socks proxy
export all_proxy=socks://address:port
E.g. with a SOCKS proxy:
ssh -N -D 9999 user@host
export all_proxy=socks://localhost:9999
Http proxy through socks one
# install [polipo](https://www.irif.fr/~jch/software/polipo/)
polipo socksParentProxy=localhost:9999
export http_proxy=http://localhost:8123
Using youtube-dl utility (more flags at: https://github.com/rg3/youtube-dl/blob/master/README.md)
youtube-dl -f bestaudio --extract-audio --audio-format mp3 --audio-quality 0 {URL_HERE}
Download an entire playlist with file names like "NN - title.ext":
youtube-dl -f bestaudio --extract-audio --audio-format mp3 --audio-quality 0 -o "%(playlist_index)s - %(artist)s - %(title)s.%(ext)s" {URL_HERE}
Get stream info
youtube-dl -F {URL_HERE}
Using imagemagick utilities:
$ identify output.png
output.png PNG 1272x740 1280x849+4+106 8-bit sRGB 52564B 0.000u 0:00.000
Using imagemagick utilities:
convert srcfile.png -crop WIDTHxHEIGHT+LEFT+TOP destfile.png
WIDTH, HEIGHT are the size of the crop area
LEFT, TOP are the offsets of the crop area
Mastro way:
W=1024 H=768 L=4 R=4 T=100 B=100
convert srcfile.png -crop $((W-L-R))x$((H-T-B))+$L+$T destfile.png
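The arithmetic can be sanity-checked without ImageMagick; with the sample size and margins above, the crop geometry works out as:

```shell
# Hypothetical source size and margins to trim (left/right/top/bottom)
W=1024 H=768 L=4 R=4 T=100 B=100
geometry="$((W-L-R))x$((H-T-B))+$L+$T"
echo "$geometry"   # 1016x568+4+100
```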
To make a simple offline copy of a site:
wget --mirror --page-requisites --adjust-extension --no-parent --convert-links {url}
Useful script pdfextractor.sh:
#!/bin/bash
# this script takes 3 arguments:
# $1 is the first page of the range to extract
# $2 is the last page of the range to extract
# $3 is the input file
# output file will be named "inputfile_pXX-pYY.pdf"
gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dSAFER \
-dFirstPage="${1}" \
-dLastPage="${2}" \
-sOutputFile="${3%.pdf}_p${1}-p${2}.pdf" \
"${3}"
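The output-file name in the script comes from a parameter expansion: "${3%.pdf}" strips a trailing ".pdf" from the input name before the page range is appended. A quick check with sample values (report.pdf, pages 3 to 7):

```shell
# How the output name is derived: strip a trailing ".pdf" from the
# input name ($3), then append the page range ($1-$2)
input="report.pdf" first=3 last=7   # sample values for $3, $1, $2
out="${input%.pdf}_p${first}-p${last}.pdf"
echo "$out"   # report_p3-p7.pdf
```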
Alternative using qpdf utility:
qpdf --pages input.pdf 1-10 -- input.pdf output.pdf
To crop a pdf with left, top, right and bottom margins of 5, 10, 20, and 30 pt (points):
pdfcrop --margins '5 10 20 30' input.pdf output.pdf
To crop something away, use negative values:
pdfcrop --margins '-5 -10 -20 -30' input.pdf output.pdf
NOTE: If you run just pdfcrop input.pdf, it will output a file named input-crop.pdf with zero margins
----------------- -----------------
| | | | | |
| | | | | |
| 1 | 2 | | 3 | 4 | . . .
| | | | | |
|_______|_______| |_______|_______|
pdfnup utility from pdfjam package. pdfjam is a shell-script front end to the LaTeX 'pdfpages' package.
pdfnup --nup 2x1 --suffix test file.pdf
will create file-test.pdf with 2 pages per sheet.
---------------
| |
| 1 2 |
| |
| 3 4 |
| |
| 5 6 |
| |
|_____________|
pdfnup utility from pdfjam package. pdfjam is a shell-script front end to the LaTeX 'pdfpages' package.
pdfnup --no-landscape --nup 2x3 --suffix test file.pdf
will create file-test.pdf with 6 pages per sheet.
To make single pdf by merging multiple pdf as pages:
# pdfunite is contained in poppler package
pdfunite in-1.pdf in-2.pdf in-n.pdf out.pdf
Alternative using pdfjoin (from texlive package)
pdfjoin --landscape --paper a4paper --rotateoversize false -o merged.pdf a.pdf b.pdf
Using pdfseparate utility from poppler:
pdfseparate infile.pdf /tmp/dest/page-%d.pdf
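The %d in the destination pattern is a printf-style placeholder that pdfseparate replaces with each page number; the expansion for page 3 looks like:

```shell
# How the destination pattern expands: %d is a printf-style
# placeholder replaced with the page number (3 here, as a demo)
pattern="/tmp/dest/page-%d.pdf"
name=$(printf "$pattern" 3)
echo "$name"   # /tmp/dest/page-3.pdf
```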
More at: https://www.cups.org/doc/options.html
Ensure you select the right printer with lpoptions -d Samsung_M2070_Series.
To enumerate available printers: lpstat -p -d
Print a pdf:
lpr filename.pdf
# alternative
lp filename.pdf
Canceling a Print Job
lprm job-id
# alternative
cancel job-id
Print landscape:
lpr -o landscape filename.pdf
Display print job queue:
lpq
Print odd pages in reverse:
lpr -o fit-to-page -o media=A4 -o Collate=True -o page-set=odd -o outputorder=reverse filename.pdf
Place the printed pages back into the feed and print the even pages:
lpr -o fit-to-page -o media=A4 -o Collate=True -o page-set=even filename.pdf
mastro's way:
alias print_odd='lpr -o fit-to-page -o media=A4 -o Collate=True -o page-set=odd -o outputorder=reverse'
alias print_even='lpr -o fit-to-page -o media=A4 -o Collate=True -o page-set=even'
print_odd file1.pdf
print_even file1.pdf
print_odd file2.pdf
print_even file2.pdf
...
Syntax is:
git push <remote> <commit hash>:<branch>
e.g.:
git push origin C:master
ls segment_???.ts | sort -n | awk '{print "file " $1}' > files.txt
ls segment_????.ts | sort -n | awk '{print "file " $1}' >> files.txt
ffmpeg -f concat -i files.txt -threads 0 -c copy out.mp4
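The concat demuxer expects one `file <name>` line per segment; the list-building step can be checked on sample names alone (no real .ts files needed; /tmp/files_demo.txt is just a scratch path for this demo):

```shell
# Build a concat list from sample segment names
printf '%s\n' segment_001.ts segment_002.ts segment_010.ts \
  | sort -n | awk '{print "file " $1}' > /tmp/files_demo.txt
cat /tmp/files_demo.txt
```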
Alternative method:
grep '\.ts' index.m3u8 | xargs cat | ffmpeg -i pipe: -c:a copy -c:v copy output.mp4
Using webm as target codec, initial capture (approx 3GB) became < 2MB
ffmpeg -i Screencast\ 2.avi -codec:v libvpx-vp9 -crf 50 -b:v 4000k screencast2.webm
Taken from: https://trac.ffmpeg.org/wiki/AudioVolume
To normalize the volume to a given peak or RMS level, the file first has to be analyzed using the volumedetect filter:
ffmpeg -i input.wav -filter:a volumedetect -f null /dev/null
Read the output values from the command line log:
[Parsed_volumedetect_0 @ 0x7f8ba1c121a0] mean_volume: -16.0 dB
[Parsed_volumedetect_0 @ 0x7f8ba1c121a0] max_volume: -5.0 dB
... then calculate the required offset, and adjust gain accordingly:
ffmpeg -i input.wav -filter:a "volume=5dB" output.wav
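The offset is just target peak minus measured peak: with max_volume at -5.0 dB and a target of 0 dB, you need +5 dB of gain. A small awk sketch of that calculation (the variable names are mine, not ffmpeg's):

```shell
# Gain needed to raise the measured peak (max_volume, from the
# volumedetect log) to a chosen target peak; both values in dB
max_volume=-5.0
target_db=0
gain=$(awk -v m="$max_volume" -v t="$target_db" 'BEGIN { print t - m }')
echo "volume=${gain}dB"   # volume=5dB
```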
If you want to normalize the (perceived) loudness of the file, use the loudnorm filter, which implements the EBU R128 algorithm:
ffmpeg -i input.wav -filter:a loudnorm output.wav
This is recommended for most applications, as it will lead to a more uniform loudness level compared to simple peak-based normalization. However, it is recommended to run the normalization with two passes, extracting the measured values from the first run, then using the values in a second run with linear normalization enabled. See the loudnorm filter documentation for more.
Create an HLS playlist from a video:
ffmpeg -i input.mp4 -profile:v baseline -level 3.0 -s 640x360 -start_number 0 -hls_time 10 -hls_list_size 0 -f hls index.m3u8
Extract one frame every 4 seconds from a video:
ffmpeg -i input.mov -r 0.25 output_%04d.png
NOTE: the -r 0.25 option goes after the -i input.mov part because it controls the frame rate of the output (0.25 fps, i.e. one frame every 4 seconds).
Extract audio from a video:
ffmpeg -i video.mp4 audio.mp3
To normalize audio in the meantime:
ffmpeg -i video.mp4 -filter:a loudnorm audio.mp3
taken from: https://www.davd.io/download-encrypted-hls-content-with-ffmpeg/
ffmpeg -i "$1" -c copy -bsf:a aac_adtstoasc "$2"
ffmpeg -i input.mp4 -vf "transpose=1" output.mp4
Here, the transpose=1 parameter instructs FFmpeg to transpose the given video by 90 degrees clockwise. The available values for the transpose filter are:
0 – Rotate by 90 degrees counter-clockwise and flip vertically (the default).
1 – Rotate by 90 degrees clockwise.
2 – Rotate by 90 degrees counter-clockwise.
3 – Rotate by 90 degrees clockwise and flip vertically.
To rotate a video by 180 degrees, you need to specify the transpose parameter twice, like below.
ffmpeg -i input.mp4 -vf "transpose=2,transpose=2" output.mp4
The above commands re-encode the audio/video. To rotate the video by only changing its metadata:
ffmpeg -i input.mp4 -map_metadata 0 -metadata:s:v rotate="90" -codec copy output.mp4
Rotate video and fade in/out
ffmpeg -i media1.mp4 -vf 'fade=in:0:30,fade=out:6087:60,transpose=2' -af 'afade=out:st=202:d=1' -threads 0 media1-ok.mp4
Copied from: https://stackoverflow.com/questions/4807474/copying-the-gnu-screen-scrollback-buffer-to-a-file-extended-hardcopy
# Method 1
1. Ctrl + A : (get command mode)
2. hardcopy -h
# Method 2 - Save only selected buffer
1. make selection with ^A[
2. Ctrl + A + :bufferfile /tmp/somefile.txt
curl "https://docs.google.com/spreadsheets/d/{key}/gviz/tq?tqx=out:csv&sheet={sheet_name}"
Response format: options include tqx=out:csv (CSV format), tqx=out:html (HTML table), and tqx=out:json (JSON data).
Export part of a sheet: supply the range={range} option, where the range can be any valid range specifier, e.g. A1:C99 or B2:F.
Execute a SQL query: supply the tq={query} option, such as tq=SELECT a, b, (d+e)*2 WHERE c < 100 AND x = 'yes'.
Export textual data: supply the headers=0 option in case your fields contain textual data, otherwise they might be cut out during export.
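The options above are plain query-string parameters, so a URL can be assembled from variables; a sketch with placeholder key, sheet name, and range:

```shell
# Assemble an export URL; key, sheet and range are placeholder values
key="SPREADSHEET_KEY"
sheet="Sheet1"
range="A1:C99"
url="https://docs.google.com/spreadsheets/d/${key}/gviz/tq?tqx=out:csv&sheet=${sheet}&range=${range}&headers=0"
echo "$url"
```

Pass the result to curl in double quotes, since & would otherwise background the command.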