mostly unorganized collection of bash oneliners

TOTOC (TOC categories)
bash tricks
utilities
rice
system maintenance
general unix
unix configuration
workstation
window management
software analysis / manipulation
debugging
communication monitoring
network
performance measuring
performance throttling
text transformation
file discovery
file analysis
file statistics
file management
data visualization
data validation
data conversion
data manipulation
data unborking
data migration
data hoarding
data destruction
data undestruction
photo/video management (TODO: cleanup)
unorganized
ffmpeg garbo
udp multicast
prefix stdout lines with timestamps (different approaches sorted by performance)
software specific
git
virtualization: virtualbox
virtualization: qemu
apkbuild stuff
alpine general
rescuing a mounted disk after you just now accidentally overwrote the partition header
muxing subs/fonts from tv fansub with bluray video + vorbis audio (superceded by all-ffmpeg (TODO: find) or sushi.py (TODO: find))
ripping audio CDs with embedded tags in CP-932 (shift-jis)
rescuing deleted files from a unix filesystem by grepping the entire physical disk for parts of the file contents (bgrep is https://github.com/tmbinc/bgrep or one of its many forks)
ssh from NAT box (ed) to NAT box (exci) through 3rd server (tower)
memleak related
deprecated stuff (slow / unsafe / just bad)
todo


# todo:

# s9 kvit
# link in dependencies (cdiff, dircmp, )


bash tricks

# variable chomping

echo "${FILE%%.*}"  example
echo "${FILE%.*}"   example.tar
echo "${FILE#*.}"   tar.gz
echo "${FILE##*.}"  gz


# tee stdout and stderr to log files

{ { echo "this is stdout"; echo "this is stderr" >&2; } > >( tee -a tmp2.sout ) 2> >( tee -a tmp2.serr >&2 ); } >tmp.sout 2>tmp.serr
cat tmp.sout | sed 's/^/STDOUT 1 /'; cat tmp.serr | sed 's/^/STDERR 1 /'; cat tmp2.sout | sed 's/^/STDOUT 2 /'; cat tmp2.serr | sed 's/^/STDERR 2 /';


# get absolute path to script directory

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
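# variant that also resolves symlinks to the script (assumes GNU readlink -f, so not ancient macs):
SCRIPT_DIR="$(cd "$(dirname "$(readlink -f "$0")")" && pwd)"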


# wildcard-glob all hidden files in current directory (excludes the . and .. files)

ls {.[!.]*,..?*}


# pipe to multiple programs (multi-algorithm hashing/checksums)

tee < your.tar >(sha1sum >/dev/shm/f2) >(sha256sum >/dev/shm/f3) | md5sum >/dev/shm/f1; cat /dev/shm/f{1,2,3}


# persistent banner at top of terminal

# usage: banner 34 your message here
# breaks scrolling
banner() { local p=$1; shift; printf '\033[%s' s 2r H "0;1;44m> $*..." K "1;999H" 3D$p% u >&2; printf '\n\033[7m[%s%%] %s \033[K\n\033[27m' $p "$*" >&2; }; unbanner() { printf '\033[%s' s r u; }


# persistent banner at bottom of terminal

# usage: banner 34 your message here
# scrolling works, text selection may be wonky, needs tput (ncurses)
banner() { local p=$1 h=$(tput lines); shift; printf '\033[%s' s "1;$((h-1))r" ${h}H "0;1;44m> $*..." K "$h;999H" 3D$p% u B A >&2; printf '\n\033[7m[%s%%] %s \033[K\n\033[27m' $p "$*" >&2; }; unbanner() { printf '\033[%s' s r u; }


utilities

# stopwatch

echo; for n in 5 4 3 2 1 ; do printf '%s...\n' $n; sleep 0.7; done; printf '\033[4A\033[J'; t0=$(date +%s%N); while ! read -t0.1; do t=$(date +%s%N); printf '\033[A'; echo $((t-t0)) | sed -r 's/(.{3}).{6}$/.\1 /'; done


# stopwatch, start new lap with any key

echo; for n in 4 3 2 1; do printf '%s... ' $n; sleep 0.7; done; printf '\r%10s%10s%15s%15s\n\n' total lap unix_now lap_start; t0=$(date +%s%N); t1=$t0; while true; do while ! read -t0.025; do t=$(date +%s%N); printf '\033[A'; for v in $((t-t0)) $((t-t1)) $t $t1; do printf '  %5d.%2s' "${v::${#v}-9}" "${v:${#v}-9:2}"; done; printf '\033[J\n'; done; t1=$t; done


# stopwatch, start/stop with any key, reset with enter

ts=0; on=; echo press a key; while true; do read -rn1 -t0.05 && k=1 || k=; t=$(date +%s%N); ((k)) && [ -z "$REPLY" ] && { on=; ts=0; }; td=$((ts+t-t0)); ((on)) && printf '\033[A\r%d.%2s\033[J\n' "${td::${#td}-9}" "${td:${#td}-9:2}"; ((k)) && { on=$((1-on)); (($on)) && t0=$t || ts=$((ts+t-t0)); }; done
# bonus perl port for machines with ancient bash versions such as macos
stty -icanon || stty -f /dev/stdin -icanon && perl -e 'use strict; use Time::HiRes qw(usleep gettimeofday); use IO::Select; my $sel=IO::Select->new(); $sel->add(*STDIN); my ($ts,$on,$t0,$td)=(0)x4; print "\n"; while (1){ my $k=0; if($sel->can_read(.02)) {sysread(STDIN,$k,1)} my $t=int(gettimeofday*100)/100; if($k eq "\n") {$on=0;$ts=0} $td=($ts+$t-$t0); if($on){printf "\033[A\r%.2f\033[J\n",$td} if($k){$on=1-$on; if($on){$t0=$t}else{$ts=($ts+$t-$t0)}}}'


# whois ip

function ipwhois_ripe() { { printf '%s\n' $1; sleep 1; } | ncat whois.ripe.net 43; }; function ipwhois_apnic() { { printf '%s\n' $1; sleep 1; } | ncat whois.apnic.net 43; }


# flash screen according to system time (jerry-rigged time sync)

# if your bash is dangerously old (mac osx), 74% cpu load
while true; do read -t0.0167; v=$(date +%s); [ $v = "$ov" ] && continue; ov=$v; c=$((v%7)); printf "\033[H\033[1;37;4${c}m\033[J\n$v\n"; done
# ...or if you have bash >= 4.2  (1.4% load)
while true; do read -t0.0167; printf -v v '%(%s)T' -1; [ $v = "$ov" ] && continue; ov=$v; c=$((v%7)); printf "\033[H\033[1;37;4${c}m\033[J\n$v\n"; done
# ...or even better, perl  (0.2% load)
perl -e 'use Time::HiRes qw(usleep); my $ov=-1; while (1) {usleep(16700); my $v=time(); if ($v==$ov) {next} $ov=$v; printf "\033[H\033[1;37;4%dm\033[J\n%d\n",$v%7,$v}'
# ...or if you want iso8601 and msec and can afford the bandwidth
perl -e 'use POSIX qw(strftime); use Time::HiRes qw(usleep gettimeofday); while (1) {usleep(16700); my ($v,$uv)=gettimeofday; printf "\033[H\033[1;37;4%dm\033[J%s.%03d\n",$v%7,strftime("%F\n%T", localtime($v)),$uv/1000}'


# compare time drift between servers

# the client, timecmp1.sh
while true; do awk '{printf $1" "}' /proc/uptime; date +%s.%N; sleep 1; done
# the server, timecmp2.sh
while read x; do awk -vw=$(date +%s.%N) -vr="$x" '{print r " " $1 " " w}' /proc/uptime; done | awk 'NF!=4{next} !lm0{rm0=$1;rw0=$2;lm0=$3;lw0=$4;dm0=lm0-rm0;dw0=lw0-rw0} {dm=$3-$1;dw=$4-$2;printf "drift: %5.2fu %6.3fw   Tu: %.2f %.2f   Tw: %.3f %.3f   u: %.2f %.2f   w: %.3f %.3f   diff: %.2f %.3f\n",dm-dm0,dw-dw0,$1-rm0,$3-lm0,$2-rw0,$4-lw0,$1,$3,$2,$4,dm,dw}'
# usage: cat timecmp1.sh | ssh 10.1.2.9 'cat >timecmp1.sh; bash timecmp1.sh' | ./timecmp2.sh
# also see https://github.com/9001/usr-local-bin/blob/master/timecmp


# show a block of random letters on each keypress

clear; [ -e /dev/shm ] && a=w || a=b; while true; do printf '\033[10H'; head -c 1000 /dev/urandom | base64 -$a 72 | sed "$(printf 's/^/\033[40G/g')"; read -u1 -n1 -r; done


# sum a list of timestamps

# expects [[h:]m:]s,  example:  tsum  04:20  39:42  1:23:45
tsum() { local p='s/0?(.+):0?([0-9]+)/(\1*60+\2)/;' ts=0 h=0 m=0 s=0; for orig in $@; do s=$(echo $orig | sed -r "$p$p"); ts=$((ts+s)); h=$((ts/(60*60))); s=$((ts-h*60*60)); m=$((s/60)); s=$((s-m*60)); printf '%7s  %d:%02d:%02d  %s\n' $orig $h $m $s $ts; done; }


# ms paint

# change color with digit keys
r() { read -n1 -r c; [ "$c" = "$1" ]; }; b=2; stty -echo -icanon; printf '\033[H\033[0m\033[J\033[?1003h'; while true; do r $'\033' || { printf '%s\n' "$c" | grep -qE '[0-7]' && b=$c; continue; }; r \[ || continue; r "M" || continue; r "@" && d=y || d=; read z x y z < <(head -c 2 | hexdump -C); [ $d ] || continue; printf '\033[%d;%dH\033[1;4%dm \033[0m\033[D' $((0x${y}-32)) $((0x${x}-32)) $b; done
# BONUS CONTENT: the mouse tracking visualizer
printf '\033[?1003h'; h='3/1 "%02x" " " '; t='" " 6/1 "%_p"'; stty -echo -icanon; hexdump -e '"%06.6_ao  "'" $h$h " -e "$t"' "\n"'
# bonus content #2, balou23 on HN optimized it a bit:
p(){ for i;do printf "\e[$i";done;};r(){ read -rn1 c;[[ $c = $1 ]];};stty -echo cbreak;p H m J ?1003h;trap 'p ?1003l' 0;while d=;do r $'\e'||{ [[ $c =~ [0-7] ]]&&b=$c;continue;};r \[||continue;r M||continue;r @&&d=y;read _ x y _< <(head -c2|od -tu1);[ $d ]&&p "$((y-32));$((x-32))H" "4${b-2}m " m D;done


rice

# ansi 16-color test

echo; for l in {0..1}; do for f in {0..7}; do for b in {0..7}; do echo -ne "\033[$l;4$b;3${f}m SINEP \033[0m  "; done; echo; done; echo; done


# deadbeef progress bar

w=$(tput cols); echo; while true; do n=$(deadbeef --nowplaying '%e %l' 2>/dev/null | sed -r "s/^0*(..*):0*(..*) 0*(..*):0*(..*)$/($w*(60*\1+\2))\/(60*\3+\4)\n/" | bc); [[ $n != $on ]] && { printf '\033[A'; on=$n; head -c $n /dev/zero | tr '\0' '>'; echo; }; sleep 0.5; done


# hexdump color (TODO: broken on alpine??)

hexdump -C dump.bin | head | sed -r 's/^([^ ]*)  (.{48})  /\1\r\2\r/' | while IFS=$'\r\n' read -r ofs hex asc; do echo -ne "$ofs "; echo -ne "$hex" | sed -r 's/  / FF /;s/(..) ?/0x\1 \1 /g;s/.*/printf "SC%dm %s " &/e' | sed -r "s/ ?SC/$(echo -ne "\033[38;5;")/g"; echo -ne "\033[0m"; echo " $asc"; done


system maintenance

# get installed packages on debian

root@nitor:~# dpkg --get-selections > packages-dpkg-get-selections-new
root@nitor:~# dpkg-query -l > packages-dpkg-query-l-new
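# to restore that selection on another debian box (sketch; assumes the file above was copied over, may also need `apt-cache dumpavail | dpkg --merge-avail` first)
dpkg --set-selections < packages-dpkg-get-selections-new
apt-get dselect-upgrade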


# unban ip from fail2ban

fail2ban-client set nginx-botsearch unbanip $IP
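# to list the currently banned IPs in that jail first:
fail2ban-client status nginx-botsearch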


# debug fail2ban / monitor file for changes and run command

f="/etc/fail2ban/filter.d/nginx-botsearch.conf"; while true; do t=$(stat -c%Y "$f"); sleep 0.5; [[ $t -eq $ot ]] && continue; ot=$t; fail2ban-regex /var/log/nginx/error.log "$f" "$f"; done


# unlock tower when betterinitramfs hits that race condition where it doesn't bother trying

cryptsetup --tries 1 luksOpen --allow-discards /dev/md0 enc_root


# kill orphaned processes (binary was deleted)

find /proc -mindepth 2 -maxdepth 2 -name exe -printf '%p %l\n' 2>/dev/null | grep -E 'exe .* \(deleted\)' | tee /dev/stderr | sed -r 's@^/proc/@@;s@/.*@@' | while read pid; do kill -9 $pid; done 


# check if user/group IDs are in sync

cat /etc/group /etc/passwd | sed -r 's/:.*//' | sort | uniq | while IFS= read n; do u=$(grep -E "^$n:" /etc/passwd | sed -r 's/[^:]*:[^:]*://;s/:.*//'); g=$(grep -E "^$n:" /etc/group | sed -r 's/[^:]*:[^:]*://;s/:.*//'); printf '%s / %s / %s\n' "$u" "$g" "$n"; done | sort -n | awk '{c=1} $1==$3 {c=2} {printf "\033[3%sm%s\033[0m\n", c, $0}'


general unix

# fix ssh as root to centos

chmod 700 /root/.ssh/
chmod 600 /root/.ssh/*
restorecon -R -v /root/.ssh/


# copy user permissions to group

for n in {1..7}; do find -perm -${n}00 -and -not -perm -0${n}0 -exec chmod +0${n}0 '{}' \+; done


# set default file permissions for new files in a folder

setfacl -d -m u::rw somefolder
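# verify the default ACL was set:
getfacl somefolder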


# clear cache (unlike some claims, this cannot cause corruption)

sync; echo 3 > /proc/sys/vm/drop_caches
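# (1 = pagecache only, 2 = dentries and inodes, 3 = both)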


# determine which process is blocking a volume from unmounting

find /proc/ -maxdepth 3 -type l -printf '%p %l\n' 2>/dev/null | grep /mnt/nas


# kill all processes which block /mnt/nas from unmounting

lsof /mnt/nas | tee /dev/stderr | awk 'NR==1&&$2!="PID" {print "unexpected-lsof-output";exit} NR>1{print $2, $1}' | while read -r pid cmd; do bin=/proc/$pid/exe && ls -al /proc/$pid/fd && printf '\nCMD: ' && tr '\0' ' ' </proc/$pid/cmdline && readlink /proc/$pid/exe | grep -qE "(^|/)$cmd$" && printf '\n\nkill it? ' && read -u1 -r && kill $pid; done


unix configuration

# monitor /etc/nginx for changes, autorestart and test configuration

while true; do sum="$(find /etc/nginx -iname \*.conf -printf '%T@\n' | md5sum)"; sleep 0.2; [[ "x$sum" == "x$osum" ]] && continue; osum="$sum"; systemctl restart nginx; systemctl status nginx.service | cat; wget http://127.0.0.1/ -SO /dev/null; sleep 0.2; done


workstation

# turn off display through ssh

export DISPLAY=localhost:0.0; xset s activate; xset s off


# maximize active window

wmctrl -r :ACTIVE: -b add,maximized_vert,maximized_horz
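# and to undo it:
wmctrl -r :ACTIVE: -b remove,maximized_vert,maximized_horz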


# clipboard monitor

c=$(which xsel) || c=$(which xclip); while true; do sleep 0.1; h="$($c -o | tee /dev/shm/clc | md5sum)"; [ "$h" = "$oh" ] && continue; oh="$h"; cat /dev/shm/clc; echo; done


# from local machine "cli", connect to a remote server "srv" over vnc

srv> x11vnc -findauth
   # XAUTHORITY=/var/run/lightdm/root/:0
cli> ssh -t -L 5900:localhost:5900 srv 'x11vnc -nopw -noxrecord -snapfb -safer -nocmds -localhost -display :0 -auth /var/run/lightdm/root/:0'
cli> gvncviewer 127.0.0.1:0
# -noxrecord is a workaround for x11vnc "stack smashing detected" on popup windows
# -snapfb is a massive speedboost on this particular laptop (usually it's worse)


# turn off wifi powersaving for less latency

iwconfig $(iwconfig 2>&1 | awk '/ESSID:/ {print $1}') power off
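# verify (should say Power Management:off):
iwconfig 2>&1 | grep -i 'power management'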


window management

# move window to screen corner

title='^https://twitch.tv/[^ ]+$'
# get screen dimensions
read sw wh <<<$(xrandr | grep -E '^Screen 0: ' | head -n 1 | sed -r 's/.* current //;s/, .*//;s/ x / /')
# get window dimensions
eval $(xdotool search --name "$title" getwindowgeometry --shell)
# move window to top-right corner (x=screen-window, y=-13)
xdotool search --name "$title" windowmove -- $((sw-WIDTH)) -13


software analysis / manipulation

# remove rpath and set only runpath

ls -1 *.so* | while IFS= read -r x; do readelf -d "$x" | grep -E 'Library rpath: ' > /dev/shm/rpathtmp ; [ -s /dev/shm/rpathtmp ] && { echo "$x"; chrpath -d "$x"; patchelf --set-rpath "$(cat /dev/shm/rpathtmp | sed 's/.*Library rpath: .//;s/.$//')" "$x" ; }; done


# list rpath and runpath of all libraries

ls -1 | grep -E '\.so' | while IFS= read -r x; do echo; echo "$x"; readelf -d "$x" | grep -E '\(R.?.?PATH\)'; done 


# set local rpath for all libraries

ls -1 | grep -E '\.so$' | while IFS= read -r x; do patchelf --force-rpath --set-rpath ../lib "$x"; done


# list all exported std:: functions

{ find -iname \*.a; find -iname \*.so; } | while IFS= read x; do arg=''; [ ${x:(-2)} == so ] && arg=-D; nm -f posix $arg "$x" | c++filt | grep std:: | while IFS= read -r y; do printf '%s:  \033[1;33m%s\033[0m\n' "$x" "$y"; done; done | LANG=C sort -u > all_symbols


debugging

# unique thread states from gdb

gdb -ex "set pagination 0" -ex "thread apply all backtrace" -batch -p $pid | grep -vE '^Thr' | tr '\n' '\r' | sed 's/\r\r/\n/g' | while IFS= read -r x; do printf '%s %s\n' "$(printf '%s' "$x" | md5sum)" "$(printf '%s' "$x" | sed 's/\r/ --- /g')"; done | sort | uniq -cw32 | sort -n | sed 's/ --- /\n/g'


communication monitoring

# dump http traffic with send/recv color coding

function mon() { port=$1; stdbuf -oL tcpdump -i lo -Alnq 'tcp port '$port' and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' | sed -r "s/^([0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{6} IP [0-9\.]+\.$port > [0-9\.]+: tcp [0-9]+)$/$(printf '\033[1;31m\\1')/;s/^([0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{6} IP [0-9\.]+ > [0-9\.]+\.$port: tcp [0-9]+)$/$(printf '\033[1;33m\\1')/;"; }; mon 8080


# sniffing unix sockets

mv mysql.sock mysql.sockorig
socat -t100 -x -v UNIX-LISTEN:/var/lib/mysql/mysql.sock,mode=777,reuseaddr,fork UNIX-CONNECT:/var/lib/mysql/mysql.sockorig
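# when done sniffing, stop socat and put the original socket back:
mv mysql.sockorig mysql.sock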


# pcap-filter traffic estimate

# "randomly" sample a really busy tcp connection by picking packets based on seqno
/Applications/Wireshark.app/Contents/MacOS/tshark -i lo0 -f 'tcp port 3923 and tcp[5]%8==0'


network

# ping subnet

rm /dev/shm/*.pingr; for ip in {0..255}; do ping -w 1 -c 1 192.168.0.$ip 2>&1 | grep -E 'packets transmitted' | sed -r "s/^/$ip  /" > /dev/shm/$ip.pingr & done ; sleep 0.5; for x in {1..2}; do printf 'Up:   '; grep -h '1 received' /dev/shm/*.pingr | cut -d' ' -f1 | sort -n | tr '\n' ' '; printf '\nDown: '; grep -h '0 received' /dev/shm/*.pingr | cut -d' ' -f1 | sort -n | tr '\n' ' '; echo; sleep 1; done


# rate-limiting network device enp3s0 to 2MB/s

tc qdisc add dev enp3s0 root tbf rate 16mbit latency 50ms burst 1540
# reset/disable rate-limiting
tc qdisc del dev enp3s0 root
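# inspect what's currently attached to the device:
tc qdisc show dev enp3s0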


# look for latency spikes over tcp/udp

# first start some servers; you'll want https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py
python3 copyparty-sfx.py --tftp 3969 -q  # tcp3923/udp3969 server
socat tcp4-l:3921,fork,reuseaddr exec:/bin/cat  # tcp3921 echo-server
# then connect to the servers using any of the below (replace 127.0.0.1 with server ip) which prints a newline if latency d>0.04 sec:
# udp-client (curl-to-copyparty; hold ctrl-c to exit)
printf '\n\n';while true;do read -t0.006;timeout 0.1 curl -s tftp://127.0.0.1:3969/;done|awk '/^# per/{f="/proc/uptime";getline n<f;close(f);sub(/ .*/,"",n);d=n-p;p=n;printf"\033[A%.2f %.2f\n",n,d;if(d>0.04)print""}'
# tcp-client (curl-to-copyparty; new connection for each packet)
printf '\n\n';while true;do read -t0.006;curl -s 127.0.0.1:3923/?reset;done|awk '/^<head>$/{f="/proc/uptime";getline n<f;close(f);sub(/ .*/,"",n);d=n-p;p=n;printf"\033[A%.2f %.2f\n",n,d;if(d>0.04)print""}'
# tcp-client (socat-to-socat; new connection for each packet)
printf '\n\n';while true;do read -t0.006;printf 'a\n'|socat - tcp4:127.0.0.1:3921;done|awk '/^a$/{f="/proc/uptime";getline n<f;close(f);sub(/ .*/,"",n);d=n-p;p=n;printf"\033[A%.2f %.2f\n",n,d;if(d>0.04)print""}'
# tcp-client (socat-to-socat; one persistent connection)
printf '\n\n';while true;do read -t0.006;printf 'a\n';done|socat - tcp4:127.0.0.1:3921|awk '/^a$/{f="/proc/uptime";getline n<f;close(f);sub(/ .*/,"",n);d=n-p;p=n;printf"\033[A%.2f %.2f\n",n,d;if(d>0.04)print""}'


performance measuring

# ovh rescue system speedometer

while true; do n=$(ifconfig | grep 'RX bytes' | head -n 1 | sed 's/[^:]*://;s/ .*//'); d=$(( n - o )); d=$(( d / 1024 )); o=$n; echo "$n $d"; sleep 1; done | sed -r 's/(.*)(...)(...)(...) /\1.\2.\3.\4 /'


# monitor udp packets per second plus derivative, show column-wise RX/TX: num-packets, num-since-last, derived, derived%

while true; do cat /proc/net/snmp ; sleep 0.5; done | awk '$1 == "Udp:" && $2 != "InDatagrams" {id=$2-il; od=$5-ol; il=$2; ol=$5; idd=id-idl; odd=od-odl; idl=id; odl=od; iddp=int((idd*100)/(id+1)); oddp=int((odd*100)/(od+1)); printf "%s  %s  %6s %6s %6s %6s %5s%% %4s%%\n", $2, $5, id, od, idd, odd, iddp, oddp}'


# monitor network interfaces, show total bytes and packets Received/Transferred, and current transfer rate every 0.5 seconds

# on certain kernels where the 4 last columns drop to 0 frequently, increase sd to 1
clear; sd=0.5; while true; do sed 's/:/: /' </proc/net/dev; sleep $sd; done | awk '$1~/[^:]$/{if(!blank){blank=1;printf "\033[K\n\033[K\033[H\033[1;31m%12s \033[33m%12s %12s \033[31m%12s %12s \033[33m%8s %6s \033[31m%8s %6s\033[K\n", "device", "Rx_kbyte", "Rx_pkts", "Tx_kbyte", "Tx_pkts", "Rx[kb/s", "pk/s]", "Tx[kb/s", "pk/s]"}next} {blank=0; dev=$1; sub(/:$/,"",dev); pdev=substr(dev,length(dev)-11,length(dev)); cib=$2; cip=$3; cob=$10; cop=$11; dib=cib-ib[dev]; dip=cip-ip[dev]; dob=cob-ob[dev]; dop=cop-op[dev]; ib[dev]=cib; ip[dev]=cip; ob[dev]=cob; op[dev]=cop; printf "%12s \033[33m%12d %12d \033[31m%12d %12d \033[33m%8d %6d \033[31m%8d %6d\033[K\n", pdev, cib/1024, cip, cob/1024, cop, dib/(1024*'$sd'), dip/'$sd', dob/(1024*'$sd'), dop/'$sd'}'


# monitor cpu use of a process: utcNow totalUser totalKernel deltaUser deltaKernel deltaBoth (deltas are percent where 100% means one core maxed)

cd /proc/$(pidof deadbeef) && while true; do IFS=' ' read -r usr krn <<< $(cat stat | cut -d' ' -f14,15); dusr=$((usr-ousr)); dkrn=$((krn-okrn)); ousr=$usr; okrn=$krn; dtot=$((dusr+dkrn)); printf '%s  %6d  %5d  %3d  %3d  %3d\n' "$(date +%s.%N)" "$usr" "$krn" "$dusr" "$dkrn" "$dtot"; sleep 1; done


# monitor interrupts

watch -tn 0.2 "cat /proc/interrupts | cut -c-$(tput cols)"


# show memory usage per program (group multiple instances)

ps aux | awk '{p[$11]+=$4} END {for (i in p) printf("%5.1f %s\n",p[i],i)}' | sort | tail


# get resident memory for each process

find /proc -maxdepth 2 -name status -exec tail -n 1000 '{}' + | awk 'function p() {if (rss>0) printf "%9s %5s %s\n", rss,pid,name; sum+=rss;name="?";pid=0;rss=0} $1=="==>" {p()} END {p();printf "%9s KiB == %.2f MiB == %.2f GiB\n",sum,sum/1024,sum/(1024*1024)} $1=="Name:" {sub(/^[^\s]+\s+/,"");name=$0;next} $1=="Pid:" {pid=$2;next} $1=="VmRSS:" {rss+=$2;next} $1=="HugetlbPages:"{rss+=$2;next} $1=="RssFile:"{rss-=$2;next}' | sort -n | while IFS=' ' read -r sz pid name; do cmd="$(tr '\0' ' ' < /proc/$pid/cmdline)"; printf '\033[0m%9s %5s \033[35m%s \033[36m%s\033[0m\n' "$sz" "$pid" "$name" "$cmd"; done | cut -c-$(($(tput cols)+14))
# also see `cat /proc/slabinfo`  and  `slabtop -o | tac`  and  `lsmod | sort -nk 2,2`


# monitor free memory (prints % of free ram)

while true; do awk '$1=="MemTotal:" {total=$2} $1=="MemFree:" {unused=$2} $1=="Buffers:" {buffered=$2} $1=="Cached:" {cached=$2} END {free=unused+cached; pfree=(free*100/total); printf "%s ", int(pfree)}' < /proc/meminfo; sleep 0.5; done


# show notification on low memory

while true; do f=$(awk '$1=="MemTotal:"{total=$2} $1=="MemFree:"{free=$2} $1=="Cached:"{free+=$2} END{print int((total-free)/1024)}' < /proc/meminfo); [[ $f -lt 768 ]] && notify-send "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA $f"; sleep 1; done


# figure out what process is accessing a file

find /proc/ -maxdepth 3 -type l -printf '%p -> %l\n' 2>/dev/null | grep your.filename


# measure file transfer speed of single file

while true; do fs=$(stat -c%s -- "$fn"); echo $((fs-fso)) | rev | sed -r 's/(...)/\1 /g' | rev; fso=$fs; sleep 1; done
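# fn must already point at the growing destination file, e.g. fn=/mnt/nas/big.iso (any path; just an example)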


# measure file transfer speed of single file, assuming unknown size, assumes you are in /proc/$(pidof whatever)/

fd=1; awk 'BEGIN {while(1==1) {getline v<"fdinfo/'$fd'";close("fdinfo/'$fd'");sub(/[^0-9]+/,"",v);d=v-o;o=v;printf "%.3f GB transferred, %.3f MB/s\n",v/(1024*1024*1024),d/(1024*1024);c="sleep 1;echo x";c|getline x;if(x!=x)break;close(c);}}'


# measure file transfer speed of single file with %progress and ETA (total overkill and also in perl), still assumes you are in /proc/$(pidof whatever)/

fd=3; nhist=15; sz=$(stat -c%s -- "$(readlink fd/$fd)"); perl -e 'use strict; use Time::HiRes qw(time usleep); my @hist; while (1==1) { open my $fh, "<", "fdinfo/'$fd'"; my $t = time; my $p = <$fh> =~ s/[^0-9]+//r; close $fh; push @hist, [$t,$p]; if (scalar @hist > '$nhist') { shift @hist; } my $t1 = $hist[0][0]; my $p1 = $hist[0][1]; my $td = $t-$t1; my $pd = $p-$p1; my $rem = '$sz'-$p; my $bps = 0; if ($td > 0) { $bps = $pd/$td; }; my $prc = 8.888; my $es = 0; my $em = 0; if ($rem > 0) { $prc = (100*$p/'$sz'); if ($bps > 0) { $es = $rem/$bps; } } $em = int($es/60); $es -= $em*60; printf("%.3f GB total, %.3f GB done, %.3f%%, %.3f MB/s, eta %dmin %2dsec\n", '$sz'/(1024*1024*1024), $p/(1024*1024*1024), $prc, $bps/(1024*1024), $em, $es); usleep 500*1000; }'


# compression efficiency test

# compress
for op in "cat" "pigz -9ck" "lbzip2 -9ckz" "pixz -9tk" "gzip -9ck" "bzip2 -9ckz" "xz -cze9T4"; do fn=${op%% *}; echo "[$fn]"; t0=$(date +%s%N); pigz -d < gz | pv -i 0.2 | $op > t.$fn || break; t=$(date +%s%N); td=$((t-t0)); printf '%-6s %5d.%2s %d\n' $fn "${td::${#td}-9}" "${td:${#td}-9:2}" $(stat -c%s t.$fn); done
# decompress
for op in "pigz -dck" "lbzip2 -dck" "pixz -dtk" "gzip -dck" "bzip2 -dck" "xz -dck"; do fn=${op%% *}; echo "[$fn]"; t0=$(date +%s%N); $op < t.$fn | pv -i 0.2 | md5sum || break; t=$(date +%s%N); td=$((t-t0)); printf '%-6s %5d.%2s\n' $fn "${td::${#td}-9}" "${td:${#td}-9:2}"; done
# i386 debian rootfs with ableton 9 installed in wine gives
command  comp(s) dcmp(s)     size(B)    ratio
cat        7.93         2069391360  100.00%
pigz      34.19  11.59  1112818130   53.78%
lbzip2    30.53  17.67  1030905447   49.82%
pixz     175.46  14.59   829139668   40.07%
gzip     182.43  17.05  1114689080   53.87%
bzip2    199.18  75.02  1028314467   49.69%
xz       307.78  56.86   828040336   40.01%


performance throttling

# simulate slow storage media through nbd (hdd-over-tcp) and network throttling

apk add nbd nbd-client
modprobe nbd
head -c $((64*1024*1024)) /dev/zero > /dev/shm/vdisk
mkfs.ext2 /dev/shm/vdisk
nbd-server 127.0.0.1:10809 /dev/shm/vdisk -C ''
nbd-client 127.0.0.1 10809 /dev/nbd0
mount -o sync /dev/nbd0 /media/slow/
# limit speed of port 10809 on device lo to 1mbit, see https://wiki.archlinux.org/index.php/Advanced_traffic_control
tc qdisc del dev lo root
tc qdisc add dev lo root handle 1: htb default 10
tc class add dev lo parent 1: classid 1:1 htb rate 1mbit burst 15k
tc filter add dev lo parent 1: protocol ip prio 1 u32 match ip sport 10809 0xffff flowid 1:1
tc filter add dev lo parent 1: protocol ip prio 1 u32 match ip dport 10809 0xffff flowid 1:1
# unmount and delete when done
umount /media/slow
killall nbd-client
qemu-nbd -d /dev/nbd0
rm /dev/shm/vdisk


text transformation

# escape variables for safe use in sed

# kw = original search term,  kwe = sed-safe
# rp = original replace term, rpe = sed-safe
kw="a[s]d\f\\\\g/h$j*j.l^m&n"; rp="$kw"; kwe="$(printf '%s\n' "$kw" | sed -e 's/[]\/$*.^|?+(){}[]/\\&/g')"; rpe="$(printf '%s\n' "$rp" | sed -e 's/[\/&]/\\&/g')";


# scan for list of URLs ($kw) in files and download them locally to ./mir, replacing references with a half-sha1 hash pointing to local file

baseurl=/mc/map; mkdir -p mir; baseurle="$(printf '%s\n' "$baseurl" | sed -e 's/[\/&]/\\&/g')"; for kw in cdnjs.cloudflare.com/ajax/libs ajax.googleapis.com/ajax/libs ; do kwe="$(printf '%s\n' "$kw" | sed -e 's/[]\/$*.^[]/\\&/g')"; grep -iRF "$kw" . | sed -r 's/.*"([^"]*'"$kwe"'[^"]*)".*/\1/'; done | sort | uniq | while IFS= read url; do hash="$(printf '0: %s\n' "$(printf '%s' "$url" | sha1sum | cut -c-20)" | xxd -r -p | base64 -w0 | tr -d = | tr '+/' '-_')"; printf '%s %s\n' "$hash" "$url"; done | while IFS=' ' read -r hash url; do loc="$hash.$(printf '%s\n' "$url" | sed -r 's/[#?].*//;s/.*\.//')"; [ -e "mir/$loc" ] || wget "$url" -O "mir/$loc"; urle="$(printf '%s\n' "$url" | sed -e 's/[]\/$*.^[]/\\&/g')"; sedcmd="s/\"$urle\"/\"$baseurle\/mir\/$loc\"/"; printf '\033[35m%s\033[0m\n' "$sedcmd"; grep -lRF "\"$url\"" . | while IFS= read -r fn; do sed -ri "$sedcmd" "$fn"; done; done


# scan for references to google fonts and redirect those too

grep -lRF 'https://fonts.googleapis.com/css?family=' | while IFS= read -r x; do sed -ri 's@(href=")https://fonts.googleapis.com/css\?family=[^"]*(")@\1/mc/map/mir/fonts.css\2@' "$x"; done 


file discovery

# find all digicam pics on an hdd

find /mnt/ef/ -type f -iname '*.jpg' | sort | awk '{fn=$0; sub(/.*\//,"",fn); printf "\033[33m%-60s \033[36m%s\n", fn, $0}' | less -R


# list all flacs in the format: bitrate size duration filename

printf '\n%8s %9s %8s\n\n' bitrate size seconds; while IFS= read -r fn; do printf '\033[A%27s %s\033[J\n' "" "$fn"; len="$(ffprobe -show_streams -select_streams a:0 "$fn" 2>/dev/null | grep -E '^duration=' | awk -F= '{print $2}')"; sz="$(stat -c%s "$fn")"; printf '%s\n' "$len" | grep -qE '^N|^[0-9]{0,1}($|\.)' && continue;  rate="$(awk 'BEGIN {sz='$sz';len='$len';printf "%.3f", (sz/len)/128;}')"; printf '\033[A%8.3f %9d %8.3f %s\n\n' "$rate" "$sz" "$len" "$fn"; done < <( find -iname \*.flac 2>/dev/null | sort) | tee t.flacs


file analysis

# kks line count

cat kks-v106-script.txt  | grep '\.JP> ' | sed 's/   */\n/g' | grep -E '\.JP> [^a-zA-Z0-9_-% ]{2}' | wc -l


# diff images

composite webm1.png webm2.png -compose difference zcomp.png; mogrify -level 0%,4% zcomp.png; eog zcomp.png


# count unique visits in one or more nginx access logs

echo; x=1; for year in 2016 2017 2018; do for month in Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec; do printf '\033[%sG%s %s %s ' "$x" "$url" "$year" "$month"; { pigz -d < /backup/ene/logs/nginx/best.website.access.log; cat /backup/kou/logs/nginx/best.website.access.log; cat /var/log/nginx/best.website.access.log; } | grep -E 'GET /subfolder[/ ]' | grep -vE '^(list-of-bot-ips)\.[0-9]+ ' | grep -E '^[^[]+\[../'"$month/$year" | sed -r 's/ /            /' | sort | uniq -cw15 | wc -l; done; done; printf '\033[24A'; printf '\033[25B'


# iterate through JDownloader2 log files, list each new link as it was seen for the first time chronologically

for logfile in ~/jd/logs/*/"1fichier.com_jd.plugins.decrypter.OneFichierComFolder.log."*; do printf '\n\033[36mbegin %s\033[0m\n' "$logfile"; grep -E '^https?://.*;.*;[0-9]+$' "$logfile"; printf '\033[35m  end %s\033[0m\n' "$logfile"; done | awk '$0==""; !s[$0]++'


# v2; prefix lines with scrape start timestamp

for logfile in ~/jd/logs/*/"1fichier.com_jd.plugins.decrypter.OneFichierComFolder.log."*; do printf '\n\033[36mbegin %s\033[0m\n' "$logfile"; grep -E '^https?://.*;.*;[0-9]+$| -> Crawling: https?://' "$logfile"; printf '\033[35m  end %s\033[0m\n' "$logfile"; done | awk '/ -> Crawling: http/{t=$1"\033[34m"$3"\033[0m"$2;sub(/.*-/,"",t);sub(/.* Crawling: /,"\033[1;33m")} /_jd\.plugins\.decrypter\./{t=""} $0==""; !s[$0]++{print t, $0}'


# sort list of keywords by occurrence using awk

# example of:  sorting awk array by value
# example input:  L 12/07/2015 - 02:39:13: Loading map "vsh_streets_v2"
awk '/^L [0-9]{2}\/[0-9]{2}\/[0-9]{4} - [0-9]{2}:[0-9]{2}:[0-9]{2}: Loading map "[^"]*".$/ {sub(/".$/,"");sub(/.*Loading map "/,"");v[$0]++} END {for (i in v) tv[sprintf("%12s", v[i]),i]=i; n=asorti(tv); j=0; for (i=1;i<=n;i++) {split(tv[i],t,SUBSEP);ind[++j]=t[2]} for (i=1;i<=n;i++) {print v[ind[i]], ind[i]}}' atmux.log


# sort list of keywords by occurrence using grep/sort/uniq

# example input:  L 12/07/2015 - 02:39:13: Loading map "vsh_streets_v2"
grep -E '^L [0-9]{2}\/[0-9]{2}\/[0-9]{4} - [0-9]{2}:[0-9]{2}:[0-9]{2}: Loading map "[^"]*".$' atmux.log | sed -r 's/".$//;s/.*Loading map "//' | sort | uniq -c | sort -n


# show occurrences of each value using awk

# example of:  sorting awk array by key, showing partial results while processing
# example input:  L 06/16/2018 - 23:27:11: "ayase<177><[U:1:33010969]><Blue>" say "pls respond"
grep -aE '^L [0-9]{2}/[0-9]{2}/[0-9]{4} - [0-9]{2}:[0-9]{2}:[0-9]{2}: "[^"]*<[0-9]+><[^>]*><(Blue|Red)>" ' /v/srcds/atmux.log | cut -c-26 | sed -r 's/[^:]* - (..):.*/\1/' | awk 'function show() {print "";n=asorti(v,iv); for (i=1;i<=n;i++) {print iv[i], v[iv[i]]}}  v[$0]++ {} NR%10000==1 {show()} END {show()}'


# show occurrences of timestamps grouped by day-of-week and hour

# example of:  date conversion from american to iso8601, getting day-of-week
# example input:  L 06/16/2018 - 23:27:11: "ayase<177><[U:1:33010969]><Blue>" say "pls respond"
perl -e 'use strict; use Time::Piece; my %hist; my $dow = "x"; my $ldate = "x"; while (<>) { next unless /^L ([0-9]{2}\/[0-9]{2}\/[0-9]{4}) - ([0-9]{2}):[0-9]{2}:[0-9]{2}: "[^"]*<[0-9]+><[^>]*><(Blue|Red)>/; if ($1 ne $ldate) {my $tp = Time::Piece->strptime("$1", "%m/%d/%Y"); $dow = $tp->strftime("%u%a"); $ldate = $1; print "$ldate $dow\n"} $hist{"$dow $2"}+=1 } foreach my $k (sort keys %hist) {printf "%s %d\n", substr($k, 1), $hist{$k}}' < /v/srcds/atmux.log


# hexdump 32 bytes wide

hexdump -e '"%06.6_ao  " 8/4 "%08x "' -e '"  " 32/1 "%_p" "\n"' some.bin


# hexdump 32 bytes wide, not affected by system endian

fa='4/1 "%02x" " " '; fb='4/1 "%02x" "  " '; hexdump -e '"%06.6_ao  "'" $fa$fa$fa$fb$fa$fa$fa$fb" -e '" " 16/1 "%_p" "  " 16/1 "%_p" "\n"' some.bin  | less
# or in 6x 3byte groups
h='3/1 "%02x" " " ';  t='" " 9/1 "%_p"';  hexdump -e '"%06.6_ao  "'" $h$h$h $h$h$h" -e "$t$t"' "\n"'


# diff two binary files, show warning (hex=0) or hexdiff (hex=1)

hex=1; f1=setup1.msi; f2=setup2.msi; fs=$({ stat -c%s "$f1"; stat -c%s "$f2"; } | sort -n | head -n 1); bs=$((1024*1024)); tch=$((fs/bs)); nch=0; while [ $nch -le $tch ]; do nch=$((nch+1)); dd if="$f1" bs=$bs skip=$nch count=1 2>/dev/null > /dev/shm/b1; dd if="$f2" bs=$bs skip=$nch count=1 2>/dev/null > /dev/shm/b2; cmp /dev/shm/b{1,2} >/dev/null && echo $nch ok && continue; [ $hex -ne 0 ] && { { hexdump -C </dev/shm/b1 >/dev/shm/h1 & } 2>/dev/null; { hexdump -C </dev/shm/b2 >/dev/shm/h2 & } 2>/dev/null; wait; cdiff /dev/shm/h{1,2} || break; }; ofs=$((bs*nch)); printf '\033[1;35m%d / %d  --  %d to %d\033[0m\n' $nch $tch $ofs $((ofs+bs)); done


# side-by-side machine code disassembly from clipboard

# assuming you have another terminal showing a hexdump diff between two binary files (exe/dll/elf/...),
# run this, select chunk 1, hit enter, select chunk 2, hit enter
while true; do disas() { ofs=$1; printf '\033[H'; xclip -o | awk '{printf "%x ",r; r+=16; sub(/./,""); for(n=2;n<=17;n++) printf "%s",$n; printf "\n"}' | xxd -r > /dev/shm/binclip; radare2 -qc 'pD '$(stat -c%s /dev/shm/binclip) /dev/shm/binclip 2>&1 | sed -r 's/^/'$'\033''['${ofs}'G/;s/$/'$'\033''[K/'; }; read -u1 -r; clear; disas 1; read -u1 -r; disas 91; done


# find all images with a particular RGB value at a given coordinate

# 1) find images in folder, get color at (w-16), (h/2)
find -maxdepth 1 -type f -exec file -i '{}' \+ | grep -E '^[^:]+: +image/(jpeg|gif|png)(;|$)' | sed -r 's/:.*//' | while IFS= read -r fn; do printf '%s\n' "$fn" > /dev/shm/x; read w h <<<$(identify -format "%[fx:w] %[fx:h]\n" "$fn" | head -n 1); x=$((w-16)); y=$((h/2)); printf '\n%s ' "$fn"; convert "$fn[0]" -crop 1x1+$x+$y -format "%[fx:floor(255*u.r)] %[fx:floor(255*u.g)] %[fx:floor(255*u.b)]" info: || { echo ERROR; break; }; done | tee colors.txt
#
# 2) filter to given color with tolerance=8
r=222; g=225; b=244; t=8; mkdir hits; cat colors.txt | awk '$NF<4 {next} $(NF-2)>'$r-$t' && $(NF-2)<'$r+$t' && $(NF-1)>'$g-$t' && $(NF-1)<'$g+$t' && $NF>'$b-$t'  && $NF<'$b+$t | tee /dev/stderr | sed -r 's/( [0-9]+){3}//' | while IFS= read -r x; do ln -s -- "../$x" hits/; done


# check whether all hashes, listed in multiple files, occur in a master hash table

# master file: ~ed/doc/mdw.zq1.2019-1103
# other hashfiles: mdw.*
cd /dev/shm/x/ && rm -f * && pv ~ed/doc/mdw.zq1.2019-1103 | gzip -d | awk '{printf "%s %s\n", substr($5, 1, 4), $5}' | while read fn v; do echo "$v" >> $fn; done
mdws="mdw.1t mdw.bismuth mdw.inb4 mdw.iv mdw.jackhammer.real.mk2 mdw.javs mdw.porta3t mdw.shm mdw.t5 mdw.tome mdw.va mdw.va2 mdw.wark"
# for mdw in $mdws; do pv $mdw | awk '{printf "%s %s\n", substr($5, 1, 4), $5}' | while read fn v; do grep -qF -- "$v" /dev/shm/x/$fn || echo "$v $mdw"; done; done | tee /dev/shm/missing
for mdw in $mdws; do export mdw; pv $mdw | awk '{printf "%s %s\n", substr($5, 1, 4), $5}' | perl -e 'use strict; use warnings; my $mdw = $ENV{mdw}; while (<>) { next unless m/^([0-9a-f]{4}) ([0-9a-f]+)$/; my $fn = $1; my $hash = $2; my $hit = 0; open(my $fh, "<", "/dev/shm/x/$fn"); while (my $ln = <$fh>) {chomp $ln; if ($ln eq $hash) {$hit=1; last}} close $fh; if ($hit == 0) {print $hash, " ", $mdw, "\n"}}'; done | tee /dev/shm/missing
cat /dev/shm/missing | while IFS=' ' read -r hash mdw; do grep -F "$hash" -- "$mdw"; done | tee /dev/shm/missing2



file statistics

# list all folders with ftm files, show count(ftm) count(non-ftm) filepath

find -iname \*.ftm | sort | sed -r 's@/[^/]*$@@' | uniq | while IFS= read -r x; do find "$x" -mindepth 1 -maxdepth 1 > /dev/shm/fds; nf=$(cat /dev/shm/fds | grep -iE '\.ftm$' | wc -l); nn=$(cat /dev/shm/fds | grep -viE '\.ftm$' | wc -l); printf '%4d %4d %s\n' "$nf" "$nn" "$x"; done 


file management

# copy all new ftm files to checksum-origname.ftm, skip newer remote, touch newer local

find -type f -iname \*.ftm | while IFS= read -r x; do sum=$(md5sum "$x" | cut -c-32); fn="$sum-${x##*/}"; loc="$HOME/ftm/$fn"; printf '        %s\n\033[A' "$fn"; [[ -e "$loc" ]] && { [[ "$loc" -nt "$x" ]] || { printf '\033[1;34mskip\033[0m\n'; continue; }; printf '\033[1;33mtouch\033[0m\n'; touch -r "$x" -- "$loc"; continue; }; printf '\033[1;32mnew\033[0m\n'; cp -p -- "$x" "$loc"; done 


# for each file in current directory, copy to d2 if not exist, warn if exist and size mismatch

d2=/mnt/zq1/hd/ef/n5x/vids/; find -type f -printf '%s %p\n' | while IFS=' ' read -r s1 fp; do s2=$(stat -c%s -- "$d2/$fp" 2>/dev/null || echo 0); [ $s1 -eq $s2 ] && continue; [ $s2 -gt 0 ] && { printf '\033[1;31msize warn: %s / %s @ %s\033[0m\n' $s1 $s2 "$fp"; continue; }; dir="${fp%/*}"; mkdir -p -- "$d2/$dir"; cp -pv -- "$fp" "$d2/$dir/"; done


# remove all symlinks - delete files smaller than 5MB - find and sort by time

find -type l -exec rm '{}' \+;  find -type f -size -5M -exec rm '{}' \+;  find -type f -printf '%T@ %s   %p\n' | sort -n


# manual file merge: monitor two files for differences

osum=x; height=$(tput lines); while true; do sleep 0.5; sum=$(cdiff file1 file2 | tail -n $height | tee /dev/shm/tmp.diffmon | md5sum | cut -c-32); [ "x$sum" == "x$osum" ] && continue; osum="$sum"; cat /dev/shm/tmp.diffmon | sed -r "s/\t/  /g;s/$/$(echo -ne "\033[K")/"; echo -e '\033[H'; done


# compare keepass csv

dbs="ip710.csv phone.csv thicc.csv"; cat $dbs | sort | uniq | grep -E '^".....' | while IFS= read -r ln; do grep -F "$ln" $dbs; echo; done


# compare keepass csv v2: highlight entries not identical in all files

cd /dev/shm; ncsv=$(printf "%s\n" *.csv | wc -l); cat -- *.csv | sort | uniq | grep -E '^".....' | while IFS= read -r ln; do grep -lF "$ln" /dev/shm/*.csv > /dev/shm/khits; [ $(wc -l < /dev/shm/khits) -eq $ncsv ] && c=0 || c=3; printf '\033[1;37;4%sm..\033[0;36m %s\033[33m ' $c "$ln"; cat /dev/shm/khits | sed -r 's@.*/@@;s@\.csv$@@' | tr '\n' ' '; echo; done


# working with corrupted filenames

ls -ali
# 1481785 -rw-rw-r-- 1 ed ed 0 Jun 30 23:32 ??[]???v???
find -maxdepth 1 -inum 1481785
find -maxdepth 1 -inum 1481785 -print0 | xargs -0 stat --
find -maxdepth 1 -inum 1481785 -print0 | xargs -0I '{}' mv -- '{}' something.mp3
find -maxdepth 1 -inum 1481785 -delete


# recursive md5sum

find -xdev -type f -exec md5sum '{}' \+
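# optionally save the list and re-verify later (filename is arbitrary):
find -xdev -type f -exec md5sum '{}' \+ > /dev/shm/sums.md5
md5sum -c /dev/shm/sums.md5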


# extract iso without root

iso='../your.iso'; mkdir asdf && cd asdf && isoinfo -R -l -i "$iso" | grep -E '^Directory listing of ' | sed -r 's_^Directory listing of /?__' | grep -vE '^$' | while IFS= read x; do mkdir -p "$x"; done; isoinfo -R -f -i "$iso" | sed -r 's/.//' | while IFS= read x; do [ -d "$x" ] || isoinfo -R -x "/$x" -i "$iso" > "$x" || IFS= echo "$x"; echo -n '.'; done


# list iso contents, sorted by iso-offset

# replace -R with -J to use joliet(windows) index instead of rockridge(linux)
# iso_ord generates a -sort file for mkisofs/cdrkit or xorriso to rebuild the iso in an xdelta3-friendly way
iso_ls() { LC_ALL=en_US.UTF-8 isoinfo -Rl -i "$1" | awk '{sub(/\[/,"[ ");sub(/ $/,"")} /^Directory list/{d=substr($0,23);s=2;next} s-->0&&/\] +\.{1,2}$/{next} /^[drwx-]{10} /{a=$0;b=$0;sub(/\] .*/,"",a);sub(/[^]]+\] +/,"",b);printf "%s] %s%s\n",a,d,b}' | sort -k10,10; }
iso_ord() { iso_ls "$1" | awk '!/^d/{sub(/[^]]+\] +/,"");printf "-%s %s\n",NR,$0}'; }


# turn back last-modified timestamp of files more recently modified than 2 hours ago

newest="$(find -mmin -121 -printf '%T@ %P\n' | sort -r | tail -n 1 | sed 's/\..*//')"; [ "x$newest" == "x" ] || { now=$(date +%s); now=$(( now - 7200 )); diff=$((  ( $now - $newest ) * -1  )); echo "Shifting back by $diff seconds ..."; find -mmin -3 | while IFS= read -r x; do fage=$( date +%s -r "$x"; ); nfage=$(( $fage - $diff ));  echo "$fage - $diff = $nfage   $x"; touch --date=@$nfage "$x"; done ; }


# check file fragmentation

find -type f -exec filefrag '{}' \+  | grep -vE ': [01] extents? found$' | sed -r 's/(.*): ([0-9]*) extents? found$/\2 \1/' | sort -n | while IFS=' ' read -r frags filename; do printf '%4d  %12d  %s\n' "$frags" "$(stat -c%s -- "$filename")" "$filename"; done


# scan for dupes of file

find / -iname ultra.sh -printf '%TY-%Tm-%Td %TH:%TM:%TS %9s\r%p\n' 2>/dev/null | while IFS=$'\r' read -r a b ; do printf '%s %s %s\n' "$(md5sum < "$b")" "$a" "$b"; done | sort 


# repack all tgz to txz (TODO: paralleli.sh)

cores=4; busy=0; while IFS= read -r x; do busy=$((busy+1)); [[ $busy -gt $cores ]] && { wait; busy=1; }; txz="$(printf '%s\n' "$x" | sed -r 's/gz$/xz/')"; printf '[%s]\n[%s]\n\n' "$x" "$txz"; { gzip -d < "$x" | xz -c -e > "$txz"; touch -r "$x" "$txz"; } & done < <( find -iname \*gz -printf '%s %p\n' | sort -n | tail -n 10 | sed -r 's/^[^ ]* //' | tac ); wait


# compare results: size(gz) size(xz) diff% diffBytes filename

find -iname \*gz -printf '%s %p\n' | sort -n | sed -r 's/^[^ ]* //' | while IFS= read -r x; do txz="$(printf '%s\n' "$x" | sed -r 's/gz$/xz/')"; [[ -e "$txz" ]] || continue; sz1=$(stat -c%s "$x"); sz2=$(stat -c%s "$txz"); perc=$(( (sz2*100)/sz1 )); printf '%s %s %s %12s %s\n' $sz1 $sz2 $perc $((sz1-sz2)) "$x"; done 


# compare a folder with a tarchive of said folder

# assumes `tar -tvf` output of the form:
# -rw-r--r-- ed/ed        507456 2016-12-24 15:34 home/ed/mouse.mp3
tar --utc --full-time -tvf dootnode-2018-0930-02-ed.txz | sed -r '/\/$/d; s/([^ ]+ +){2}//; s/(([^ ]+ +){3})(.*)/\3 \1/' | sort | sed -r 's/(.*[^ ])(( +[^ ]+){3}) */            \2 \1/; s/ *([^-]{14})( [0-9]+-)/\1\2/' | sed -r 's/^ +[0-9]+ (.* -> .)/\1/' > list.tar
TZ=UTC find /home/ed '(' -not -type d ')' -printf '%p -> %l %s %TY-%Tm-%Td %TH:%TM:%TS\n' | sed -r 's/.//;s/\.[0-9]+$//' | sort | sed -r 's/(.*)(( +[^ ]+){3})$/            \2 \1/; s/ *([^-]{14})( [0-9]+-)/\1\2/; s/ -> $//' | sed -r 's/^ +[0-9]+ (.* -> .)/\1/' > list.dir
cdiff list.tar list.dir


# compress everything in a folder to separate .txz files

for x in *; do printf '\033[33m%s\033[0m\n' "$x"; tar -c -- "$x" | xz -cze8T4 > "$x.txz"; done


# compare size before/after

find -mindepth 1 -maxdepth 1 | grep -vE '\.txz$' | while IFS= read -r x; do sz1=$(du -sBK -- "$x" | sed 's/K.*//'); sz2=$(du -sBK -- "$x.txz" | sed 's/K.*//'); printf '%10s %10s %3s%% %s\n' $sz1 $sz2 $((sz2*100/sz1)) "$x"; done


# remove everything that is not a .txz file

find -mindepth 1 -maxdepth 1 | grep -vE '\.txz$' | while IFS= read -r x; do printf '\033[33m%s\033[0m\n' "$x"; read -u1 -rp 'k?'; rm -rf "./$x"; done


# for the brave: automate the previous three

# prepare /dev/shm/compression-items (list of directories or files; each item in list gets its own .txz)
# prepare /dev/shm/compression-dirs  (list of directories; each element inside each directory gets its own .txz)
#
# step 1) generate /dev/shm/compression-queue from the other /dev/shm files,
# should not produce any output -- STOP and reconsider if it does
cat /dev/shm/compression-items /dev/shm/compression-dirs | sed -r 's@/*$@@' | while IFS= read -r x; do [ -e "$x" ] || printf '\033[31m404: %s\033[0m\n' "$x"; done; sed -r 's@/*$@@' /dev/shm/compression-items > /dev/shm/compression-queue || echo ERROR; sed -r 's@/*$@@' /dev/shm/compression-dirs | grep -E ... | while IFS= read -r x; do find "$x" -mindepth 1 -maxdepth 1 | grep -vE '\.txz$'; done | sed -r 's@/*$@@' >> /dev/shm/compression-queue; [ $(sort /dev/shm/compression-queue | uniq | wc -l) -eq $(wc -l < /dev/shm/compression-queue) ] || echo DUPE_ITEMS; cat /dev/shm/compression-queue | while IFS= read -r x; do sed -r 's@$@/@' /dev/shm/compression-queue | grep -F -- "$x/" | wc -l | grep -qE '^1$' || printf 'child recompress: %s\n' "$x"; done
#
# step 2) compress and delete each entry in /dev/shm/compression-queue
# use du -sBK instead of du -sb if your du can't -b (at the cost of less precision)
grep -E ... /dev/shm/compression-queue | while IFS= read -r x; do dir="${x%/*}"; fn="${x##*/}"; printf '[\033[34m%s\033[0m] [\033[36m%s\033[0m] \033[31m' "$dir" "$fn"; XZ_OPT=-e8T4 ionice -c 3 nice tar -C"$dir" -cJf "$x.txz" "$fn" || { echo COMP_ERR; break; }; sz1=$(du -sb -- "$x" | sed 's/[^0-9].*//') || { echo sz_err; break; }; sz2=$(du -sb -- "$x.txz" | sed 's/[^0-9].*//') || { echo sz_err; break; }; [ $sz2 -lt 1024 ] && { echo bad_output; rm "$x.txz"; continue; }; perc=$((sz2*100/sz1)); printf '\033[32m%s%%\033[0m ' $perc; [ $perc -ge 94 ] && { echo insufficient_ratio; rm "$x.txz"; continue; }; echo; ionice -c 3 nice rm -rf "$x"; done


# find all folders named "_", ensure they have no siblings, then move all their contents up one level and delete the _ directory

find -name _ | sed -r 's@/[^/]*$@@' | while IFS= read -r x; do find "$x" -mindepth 1 -maxdepth 1 | wc -l | grep -qE '^1$' || continue; printf '\033[36m%s\033[0m\n' "$x"; mv -- "$x/_/"{.[!.]*,..?*,*} "$x"; rmdir "$x/_"; done 


# create an m3u8 playlist of all files within a directory

{ printf '%s\n' "#EXTM3U"; find -type f | sort | while IFS= read -r x; do ffprobe -hide_banner -- "$x" </dev/null 2>&1 | awk '/^  Duration: / {sub(/,$/,"",$2);split($2,a,":");dur=60*(60*a[1]+a[2])+a[3]} /^      [a-zA-Z]+ *: / {k=tolower($1);v=$0;sub(/[^:]*: /,"",v);t[k]=v} END {if (!t["artist"]) {t["artist"]=t["album_artist"]} if (!t["artist"]) {t["artist"]=t["albumartist"]} if (!t["artist"]) {t["artist"]="x"} if (!t["title"]) {t["title"]="x"} if (!dur) {exit} printf("#EXTINF:%d,%s - %s\n",dur,t["artist"],t["title"])}'; printf '%s\n' "$x"; done; } | tee the.m3u8


# monitor folder for new png files and convert to jpg

while true; do png="$(find -type f -iname \*.png -printf '%T@ %p\n' | sort -n | tail -n 1 | sed -r 's/[^ ]* //')"; sleep 0.5;  [ -e "$png.jpg" ] && continue; printf '%s\n' "$png"; convert -quality 94 "$png" "$png.jpg"; done 


# extract tar file without clobbering/overwriting/replacing existing files, logging stdout and stderr to separate files

cat /mnt/sda1_ov/some.tar | tar -zxvk > >( tee exlog.out ) 2> >( tee exlog.err >&2 )


# extract the colliding files into a subfolder

rm -rf exerr; mkdir exerr; cd exerr; cat ../exlog.err | grep -E ': Cannot open: File exists$' | sed -r 's/^tar: //;s/: Cannot open: File exists$//' > .list
cat /mnt/sda1_ov/some.tar | tar -zxvkT.list > >( tee exlog.out ) 2> >( tee exlog.err >&2 )


# normalize md5sum files for diffing

for k in sums1 sums2; do cat "$k" | sed -r 's/(.{32}) .(\.\/)?(.*)/\3\1/' | sort | sed -r 's/(.*)(.{32})/\2  \1/' | grep -vE '\.pyc$' > "${k}s"; done 
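#
# then diff the normalized copies (cdiff as used elsewhere in this doc, or plain diff)
cdiff sums1s sums2s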


# adjust last-modified of all files in a folder (dsc*) against a known-good timestamp in another (img*)

# get the bad timestamp (output = 1410636434)
stat -c%Y 2018-12-dcim/DSC02735.JPG 
#
# get the good timestamp from IMG_20181228_104008.jpg (output = 1545961208)
date +%s --date='Fri Dec 28 10:40:08 JST 2018'
#
# update the last-modified of all DSC* files
find -maxdepth 1 -name DSC\*.JPG | sort | while IFS= read -r x; do t=$(stat -c%Y -- "$x"); t=$((t+(1545961208-1410636434))); d="$(date -u --date=@$t)"; touch -d "$d" -- "$x"; done 


# append last-modified to filename; YYYY-MMDD-HHMMSS in JST (Tokyo)

find -maxdepth 1 -name DSC\*.JPG | sort | grep DSC02517 -A100000 | grep DSC03218 -B100000 | sed -r 's/(.*)\.(.*)/\1 \2/' | while IFS=' ' read -r fn ext; do t=$(stat -c%Y -- "$fn.$ext"); d=$(TZ=:Asia/Tokyo date --date=@$t +%Y-%m%d-%H%M%S); printf '[%s] [%s] [%s]' "$fn" "$d" "$ext"; mv -v "$fn.$ext" "$fn-$d.$ext"; done


# rewrite symlink destinations from sdb_ov to sda_ov

inval() { find -maxdepth 1 -xtype l -printf '%p // %l\n'; }; inval | grep -F ' // /mnt/sdb_ov/' | sort | sed -r 's#/sdb_ov/#/sda_ov/#' | while IFS= read -r x; do fn="$(printf '%s\n' "$x" | sed -r 's# // .*##')"; fp="$(printf '%s\n' "$x" | sed -r 's#.* // ##')"; [ -e "$fp" ] || continue; printf '[%s] [%s]\n' "$fn" "$fp"; ln -sfn -- "$fp" "$fn"; done; echo; inval | grep -E . && echo error


# find duplicate files and replace with hardlinks

# assumes no sha512 collisions, hmu if you got one
oh=; op=; find -xdev -type f -size +255c -exec sha512sum '{}' \+ | LC_ALL=en_US.UTF-8 sort | while read -r h p; do [ "$h" = "$oh" ] || { oh="$h"; op="$p"; continue; }; ln -vf -- "$op" "$p"; done
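#
# optional sanity check afterwards: list files that now share their inode with at least one other file (same leading number = same data on disk)
find -xdev -type f -links +1 -printf '%i %p\n' | sort -n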


# kill apple dotfiles

# currently identified and purged: DS_Store, AppleSingle, AppleDouble
# -- disable DS_Store on OSX: defaults write com.apple.desktopservices DSDontWriteNetworkStores true
# -- disable AppleDouble on OSX: not possible
# (why aren't these xattrs, what the fuck apple)
find -type f \( -name .DS_Store -or -name ._.DS_Store \) -delete
find -type f -name ._\* | while IFS= read -r f; do cmp <(printf '\x00\x05\x16') <(head -c 3 -- "$f") && rm -f -- "$f"; done
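#
# dry-run variant of the first line (prints instead of deleting); same idea for the AppleDouble loop by swapping the rm for an echo
find -type f \( -name .DS_Store -or -name ._.DS_Store \) -print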


data visualization

# graph dependency tree of package

apk dot protobuf | dot -Tps /dev/stdin -o tmp.ps && convert -density 96 tmp.ps  tmp.png && feh -B white tmp.png


# retrieve unix epoch timestamps from a logfile and print them human-readable, sub(.{6}) drops the last 6 digits (microseconds -> seconds)

cat some.log | awk '/utctime: / {t=$0; sub(/.* /, "", t); sub(/.{6}$/, "", t); printf "%s  %s\n", $0, strftime("%Y-%m-%d, %H:%M:%S", t); next}'


# print unix epoch timestamps as human-readable, 83% faster than the awk version

perl -lne 'print scalar localtime $_'
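#
# usage sketch (some.log is a placeholder): grab 10-digit epochs from a log and print them readable; note this also matches the first 10 digits of longer numbers
grep -oE '[0-9]{10}' some.log | perl -lne 'print scalar localtime $_'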


data validation

# find fucked conversions

find -type f | sed -r 's/.mp3$//' | while IFS= read -r x; do t1=0; t2=0; t1=$(stat -c%Y ~/rt-red/"$x"); t2=$(stat -c%Y "$x.mp3"); printf '%5s  %s\n' $((t2-t1)) "$x"; done | less


# verify a column in a file is strictly increasing by one

cat $(ls -1tra | grep tsv | tail -n 1) | awk '$1 ~ /^[0-9]{5}/ {v=$1; vd=v-ov; ov=v; if(vd != 1) {print vd}}'


data conversion

# find last occurrence of "L MM/DD/YYYY - HH:MM:SS:", convert to unix epoch, print age in seconds

last_iso() { tac /v/srcds/atmux.log | head -n 10000 | grep -E '^L [0-9]{2}\/[0-9]{2}\/[0-9]{4} - [0-9]{2}:[0-9]{2}:[0-9]{2}:' | head -n 1 | sed -r 's/^L (..)\/(..)\/(....) - (..:..:..).*/\3\/\1\/\2 \4/'; };
last_epoch() { ts="$(last_iso)"; [ -z "$ts" ] && return 1; date -u +%s --date="$ts"; };
idle_sec() { last=$(last_epoch); [ -z "$last" ] && return 1; now=$(date -u +%s); echo $((now-last)); }; idle_sec
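#
# example use (600 is an arbitrary threshold): complain if the log has been quiet for over 10 minutes
s=$(idle_sec) && [ "$s" -gt 600 ] && echo "no new loglines for $s sec"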


# base-10 to base-36, csv file with 3 columns (numeric, numeric, text)

perl -e 'use strict; use warnings; my @sym = split(//, join("", "0".."9", "a".."z")); sub b36 { my ($v) = @_; my $rv = ""; while ($v) { $rv = $sym[$v % 36] . $rv; $v = int $v / 36 } return $rv || "0" }   while (<>) { my @v = split ",", $_; print(join(",", b36($v[0]), b36($v[1]), $v[2]), "\n") }' < base10.csv > base36.csv


# jq: convert twitch chat from chat_downloader into plaintext

# example input: [{"timestamp": 1616912818525000, "message": "DANZAI", "author": { "display_name": "XShn" }}]
# example output: 2021-03-28 06:26:58 <XShn> DANZAI
jq -r '.[]|select(.timestamp)|(.timestamp|./1000000|strftime("%Y-%m-%d %H:%M:%S")) + " <" + .author.display_name + "> " + .message'


# jq: print selection of keys (size,artist,bpm) if the keys are non-null

# example input: {"files": [{"href": "0001.mp3", "sz": 1075520, "tags": {"artist": "nervous_testpilot"}}]}
# example output: {"sz": 1075520, "ta": "nervous_testpilot"}
jq -C '.files[]|{sz:.sz,ta:.tags.artist,tb:.tags.".bpm"}|del(.[]|select(.==null))'


# jq/awk: dump youtube chatlog (from chat_downloader) with time-difference tracking

# example output: 3698.186 1611943297.375   1.013    1.06 l UQKztzkAZgA King                ZDhzd01MdzA 7 pm gang
# format: video-time, unix-time, delta-video, delta-unix, author-id, author-name, msg-id, msg-body
# intermediate format between jq and awk, "\r" replaced with " | ":
#   3698.186 | 1611943297375348 | UC9VM3-sfX... | King | CjkKGkNMQz... | 7 pm gang
fun() { jq -r '.[]|[.time_in_seconds, .timestamp, .author.id, .author.name, .message_id, .message]|join("\r")' < "$1" | awk -F'\r' '{gsub(/%3D/,"",$5);printf "%9.3f %10.3f %7.3f %7.2f l \033[36m%s %s \033[33m\033[76G%s %s\033[0m\n",$1,$2/1000000,$1-o1,($2-o2)/1000000,substr($3,length($3)-10),$4,substr($5,length($5)-10),$6; o1=$1;o2=$2}'; }
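#
# usage sketch (chat.json is a placeholder for the chat_downloader output); less -R keeps the colors
fun chat.json | less -R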


data manipulation

# urldecode (wow please don't actually use this)

v='%22%37...'; v="${v//+/ }"; printf '%b' "${v//%/\x}"


# sort md5sums by file, excluding first folder

cat md5-bin | sed -r 's/(.{32})[^/]*(.*)/\2\1/' | sort | sed -r 's/(.*)(.{32})/\2 \1/'


# convert chrome evaluated css clipboard layout to regular css (join every 3 lines into one line)

awk '{ printf "%s ", $0 } $0 == ";" { print "" }' < discord-text-display > discord-text-display-2


# dos2unix alternatives

sed -r 's/\r$//'
sed -r "s/$(printf "\r")$//"
tr -d '\r'
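#
# in-place variant with GNU sed (thefile.txt is a placeholder)
sed -r -i 's/\r$//' thefile.txt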


# unix2dos alternatives

cat COPYING | sed 's/$/\r/' > COPYING.txt


data unborking

# search for correct character encoding (TODO: figure out which of these three to keep)

./iconv -l | sed 's/ .*//' | while read enc1; do echo "$enc1"; ./iconv -l | sed 's/ .*//' | while read enc2; do cat ~/Music/retrosound-ska-kyoto/flac/cdinfo | grep SKAllhead | head -n 1 | ./iconv -f "$enc1" -t "$enc2" > /dev/shm/iconv.tmp || continue; ./iconv -l | sed 's/ .*//' | while read enc3; do cat /dev/shm/iconv.tmp | ./iconv -f "$enc3" | grep -qF 'のSKA' && echo ". . . . . . . . . . . . $enc1 -- $enc2 -- $enc3" ; done ; done ; done 2>/dev/null


# alternatively

alias liconv=~/pe/iconv/bin/iconv; liconv -l | sed 's/ .*//' | while read enc1; do echo "$enc1"; liconv -l | sed 's/ .*//' | while read enc2; do cat cdinfo | grep SKAllhead | head -n 1 | liconv -f "$enc1" -t "$enc2" > /dev/shm/iconv.tmp || continue; liconv -l | sed 's/ .*//' | while read enc3; do cat /dev/shm/iconv.tmp | liconv -f "$enc3" | grep -qF 'のSKA' && echo ". . . . . . . . . . . . $enc1 -- $enc2 -- $enc3" ; done ; done ; done 2>/dev/null


# or even

iconv -l | sed 's/ .*//' | while read enc1; do echo "$enc1"; iconv -l | sed 's/ .*//' | while read enc2; do cat cdinfo | grep SKAllhead | head -n 1 | iconv -f "$enc1" -t "$enc2" > /dev/shm/iconv.tmp || continue; iconv -l | sed 's/ .*//' | while read enc3; do cat /dev/shm/iconv.tmp | iconv -f "$enc3" | grep -qF 'のSKA' && echo ". . . . . . . . . . . . $enc1 -- $enc2 -- $enc3" ; done ; done ; done 2>/dev/null


data migration

# migrate server

rsync -axHAX -e 'ssh -p 752' /mnt/r/ 198.27.67.33:/mnt/r/
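#
# optionally dry-run it first; -n only lists what would be transferred
rsync -axHAXn -e 'ssh -p 752' /mnt/r/ 198.27.67.33:/mnt/r/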


# monitor process with strace and print timestamps also when idle

echo yes > run ; while true; do [ -f run ] || break; printf '\033[A'; date +%H:%M:%S.%N; sleep 0.25; done & strace -ttp 15883 -e getdents64


# move processed mkv files to other server once next file is created, delete file once size has been verified

tf=$(mktemp); while true; do find -maxdepth 1 -type f -iname crf\*.mkv -printf '%T@ %p\n' | grep -E ... | sort -n | head -n -1 | sed -r 's/[^ ]* //' | while IFS= read -r fn; do scp "$fn" otherbox:~; f1=$(stat -c%s "$fn"); f2=$(ssh otherbox "stat -c%s \"$fn\""); [[ "x$f1" == "x$f2" ]] && { echo "remove $fn"; rm "$fn"; continue; }; printf "ERROR: %s = %s / %s\n" "$fn" "$f1" "$f2"; done; sleep 10; done


# transfer rar files from other server, unpack and delete each archive as it finishes downloading

# note: all extracted files will be moved directly to $destdir, no per-archive subfolders
# note: for files with duplicate names, only the first will be kept
# note: tempfiles in $workdir can be up to 4x the largest archive's size
# note: getpid() returns the pid of the process writing files in terminal 2, so it can be suspended while terminal 1 is busy processing
#
# terminal 1: start monitoring for incoming archives, unpack and move files to external drive, then delete archives 
destdir="/media/extdrive/unpacked"; workdir="$HOME/inc/archives"; getpid() { ps aux | grep -E 'tar[ ]-xvC' | awk '{print $2}'; }; unpack() { arc="$1"; kill -STOP $(getpid); [ -z "$workdir" ] && return 200; [ -z "$destdir" ] && return 201; rm -rf "$workdir/t"; mkdir -p "$workdir/t"; cd "$workdir/t" || return 202; ionice -c 3 nice -20 7z x ../"$arc" || return $?; echo md5sum; ionice -c 3 nice -20 find -type f -exec md5sum '{}' \+ >> "$destdir/sums.md5" || return $?; echo mv; ionice -c 3 nice -20 mv -- * "$destdir" || return $?; printf '\033[31m'; find -type f | sed -r 's/^/SKIPPING /'; printf '\033[0m'; rm -f ../"$arc"; kill -CONT $(getpid); }; last=''; mkdir -p "$workdir"; cd "$workdir"; while true; do find -maxdepth 1 -type f -iname \*.rar -printf '%T@ %p\n' | sort -n | sed -r 's/[^ ]* //'; sleep 1; done | awk '!s[$0]++{print;system("")}' | while IFS= read -r next; do printf '\n\033[36m  curr: %s\n  next: %s\n\033[0m' "$last" "$next"; [ -z "$last" ] && { last="$next"; continue; }; unpack "$last" || { echo error $?; break; }; last="$next"; printf '\n\033[36mif that was the last archive, hit ^C and manually run the following once the transfer has finished:\n  unpack "%s"\n  rm -rf "%s"\n\n\033[0m' "$next" "$workdir"; done
#
# terminal 2: copy archives from "~/Downloads/archives" on server "vaio" to "~/inc/archives"
ssh vaio 'tar -cC ~/Downloads archives' | ionice -c 3 nice -20 tar -xvC ~/inc


# tar files into split archives (for burning to CD/DVD/Blurei/DNA)

tar -c folder1/ folder2/ | pigz -c | split -b 690M - archive.tgz.
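#
# the restore below expects a checksum.md5 alongside the parts; something like this, generated in the same folder before burning
md5sum archive.tgz.* > checksum.md5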


# extract split archive which spans multiple CDs/DVDs (see above)

# note: expects you to mount/unmount the media and check that things look OK before resuming
# note: on any read error, this falls apart
while true; do (cd /media/cdrom/; for x in *.tgz.a*; do grep -F -- "  $x" checksum.md5 | md5sum -c >&2; cat -- "$x"; done); y=; n=; while true; do printf '\n\n\n*** insert next disk and press Y, or abort with N ***\n' >&2; read -n1 -r r; echo; [[ $r =~ ^[yY]$ ]] && y=1; [[ $r =~ ^[nN]$ ]] && n=1; [ $y ] || [ $n ] && break; done; [ $n ] && break; done | tar -zxv >/dev/shm/exlog


# share a blockdevice with another machine over network/tcp

# for example if an fde server is getting decommissioned, boot it from a rescue env and share the encrypted disk directly with the new server over the net and luks-open it over there
#
# 192.168.123.7 is the whitelisted client, 192.168.123.1 is the server (this machine with the disk to share)
echo 192.168.123.7/32 > /dev/shm/nbd-nodes
chown nbd. /dev/sdb; while true; do nbd-server -M 1 -l /dev/shm/nbd-nodes -d 192.168.123.1:5952 /dev/sdb; sleep 0.2; done
ss -lt
#
# connect to the server from the client
sudo rmmod nbd
sudo modprobe nbd max_part=8
sudo nbd-client 192.168.123.1 5952 /dev/nbd0 -b 4096
mount /dev/nbd0p1 /media/remotedisk
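#
# cleanup on the client when done (assuming nothing else is using nbd0)
umount /media/remotedisk
sudo nbd-client -d /dev/nbd0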


data hoarding

# record an icecast stream with inband metadata, will need a program to strip that before the file can be played

while true; do wget -U 'MPlayer' --header "Icy-MetaData: 1" -St1 http://stream.r-a-d.io/main.mp3; sleep 5; done


data destruction

# fill a hdd with random data (much faster than /dev/urandom) and show a progressbar in another terminal

apk add bash libressl util-linux tmux
openssl enc -aes-256-ctr -iter 1 -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | pv -i 0.2 > garbo
while true; do cat /proc/$(ps aux | grep -E 'cat[ ]/dev/zero|openssl[ ]enc' | awk '{print $2}' | head -n 1)/fdinfo/1 | awk '/^pos:/ {print $2}'; sleep 60; done
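#
# same thing pointed at an actual disk instead of a file (sdX is a placeholder; triple-check the device name first)
openssl enc -aes-256-ctr -iter 1 -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | pv -i 0.2 > /dev/sdX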


# provoke SMART errors by overwriting a dying drive repeatedly so the server provider actually swaps it out like they should

openssl enc -aes-256-ctr -iter 1 -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | while true; do date -u; dd iflag=fullblock of=/dev/sda bs=8k conv=sync,noerror; sleep 10; done


data undestruction

# undelete a folder from a btrfs disk

# step 0: immediately stop using the disk! maybe emergency remount read-only with sysrq+s sysrq+u
# step 1: find correct root;  256 = volume ID from this command:  btrfs subvol list /
btrfs-find-root -a /dev/nvme0n1p2 | sort -nrk4,4 | sed -r 's/\(.*//;s/.* //' | while read rb; do echo; echo $rb...; btrfs restore -r 256 -t $rb -Divv /dev/nvme0n1p2 . 2>&1 | grep the/folder/you/deleted && echo $rb && break; done
# step 2: rescue the folder;  v = the root number
v=5318008; d=/mnt/otherdisk/$v; mkdir $d; cd $d; btrfs restore -r 256 -t $v -iv /dev/nvme0n1p2


# also see "rescuing a mounted disk after you just now accidentally overwrote the partition header"



photo/video management (TODO: cleanup)

# compare file sizes in one directory to another, figuring out which files to keep based on size

printf 'mp4 vids\njpg pics\n' | while read ext dir; do find -maxdepth 1 -iname \*.$ext | while IFS= read -r fn; do this=$(stat -c%s "$fn" || echo 1); other=$(stat -c%s "/mnt/n5x/$dir/$fn" || echo 1); printf '%8s vs %8s ? ' "$this" "$other"; [[ $this -eq $other ]] && { echo "identical"; continue; }; [[ $this -gt $other ]] && echo "keep me" || echo "keep other"; done; done


# compress phone videos (retarded useless output filtering, kept for reference)

mkdir comp; find -maxdepth 1 -iname vid_\*.mp4 | sed -r 's@^\./@@' | sort -r | grep -vE comp | while IFS= read -r fn; do [[ -e comp/"$fn.mkv" ]] && continue; printf '%s\n' "$fn"; done > /dev/shm/comp.vid.list ; n=0; nfiles=$(cat /dev/shm/comp.vid.list | wc -l); while IFS= read -r fn; do n=$((n+1)); printf '\n\n\n%s / %s\n\n' "$n" "$nfiles"; AV_LOG_FORCE_COLOR=1 stdbuf -o0 ffmpeg -i "./$fn" -map 0 -vcodec libx264 -acodec copy -vf "scale=iw*.5:-8" -crf 26 "comp/$fn.mkv" || { rm "comp/$fn.mkv"; break; }; done < <( cat /dev/shm/comp.vid.list ) 2>&1 | sed -r "$(printf 's/\\r/\\r\\n\\033[A/g')" | grep -vE '\[libx264 @ [^]]*\][^a-ln-zA-Z]*frame=|Clipping frame in rate conversion|\[h264 @ [^]]*[^a-ln-zA-Z]*nal_unit_type: |cur_dts is invalid \(this is harmless'


# compress phone videos (the real deal)

mkdir comp; find -maxdepth 1 -iname vid_\*.mp4 | cut -c3- | sort -r | grep -v comp | while IFS= read -r fn; do mkv="${fn%.*}.mkv"; [[ -e comp/"$mkv" ]] && continue; printf '%s\n' "$fn"; done > /dev/shm/comp.vid.list ; n=0; nfiles=$(cat /dev/shm/comp.vid.list | wc -l); while IFS= read -r fn; do n=$((n+1)); printf '\n\n\n%s / %s\n\n' "$n" "$nfiles"; mkv="${fn%.*}.mkv"; csp="-color_primaries bt709 -color_trc bt709 -colorspace bt709"; AV_LOG_FORCE_COLOR=1 ffmpeg -nostdin $csp -async 0 -vsync 0 -i "./$fn" -map 0 -vcodec libx264 -acodec copy -vf "scale=iw*.5:-2" -crf 32 -preset slower $csp "comp/$mkv" </dev/null || { rm "comp/$mkv"; break; }; done < <( cat /dev/shm/comp.vid.list )


# move images out of burst galleries into parent folder, renamed/renumbered to avoid collisions

# ensure you are in the folder which contains burst-folders, and each burst-folder has 1 or more images
find -name med-res-frame-\* -delete
find -name .nomedia -delete
find -name .medresframes -delete
find -name feature_table.bin -delete
find -name metadatastore.bin -delete
find -mindepth 2 -maxdepth 2 | grep _BURST | sort | sed -r 's@..@@;s@/@ @' | while IFS=' ' read -r dir fn; do trimdir="${dir:0:19}"; n=0; while true; do n=$((n+1)); nn=$(printf '%02d\n' $n); ofn="${trimdir}_$nn.jpg"; [[ -e "$ofn" ]] || break; done; printf '%s << %s/%s\n' "$ofn" "$dir" "$fn"; mv -n "$dir/$fn" "$ofn"; done; find -mindepth 2 -maxdepth 2; find -maxdepth 1 -type d -iname img_\* -exec rmdir '{}' \+;


# check for collisions before moving out of temp dir

find -iname img_\* | cut -c3- | sort | while IFS= read -r fn; do [[ -e ../pics/"$fn" ]] && printf 'collision: %s\n' "$fn"; done 
find -iname pano_\* | cut -c3- | sort | while IFS= read -r fn; do [[ -e ../pics/"$fn" ]] && printf 'collision: %s\n' "$fn"; done 
find -iname vid_\* | cut -c3- | sort | while IFS= read -r fn; do [[ -e ../vids/"$fn" ]] && printf 'collision: %s\n' "$fn"; done
# after making sure:
mv -- IMG_* ../pics/
mv -- PANO_* ../pics/
mv -- VID_* ../vids/
# the temp dir should now be empty


# compressing images before passing them back to the phone

mkdir ~/comp; cores=4; used=0; while IFS= read -r path; do identify -format '%w\n%h' "$path" | grep -qE '^[4-9]...$' || continue; [[ $used -gt $cores ]] && { wait; used=1; }; fn="${path%.*}"; ext="${path##*.}"; fno="$fn.comp.jpg"; tpath="$HOME/comp/$fno"; epath="comp/$fno"; [[ -s "$tpath" ]] && continue; [[ -s "$epath" ]] && continue; used=$((used+1)); printf '[%s]  [%s]  [%s]\n' "$fn" "$ext" "$fno"; convert -scale 50% -quality 50 "$fn.$ext" "$tpath" & done < <( find -maxdepth 1 -type f | sort | cut -c3- )


# transfer phone images to external disk: skip existing at target, delete the transferred ones

# (add argument -n for a dry-run)
function s2ext() {
oth=/mnt/sda_ov/n5x/pics; rsync -mW --ignore-existing --remove-source-files -av --progress --include='*/' --include='*.[jJ][pP][gG]' --include='*.[jJ][pP][eE][gG]' --exclude='*' . "$oth"/
oth=/mnt/sda_ov/n5x/vids; rsync -W --ignore-existing --remove-source-files -av --progress --include='*.[mM][pP]4' --exclude='*' . "$oth"/
}


# rsync from the phone over ssh too

ip=192.168.1.140; rsync -mW --ignore-existing --remove-source-files -av --progress --include='*/' --include='*.[jJ][pP][gG]' --include='*.[jJ][pP][eE][gG]' --exclude='*' -e 'ssh -p8022' ed@$ip:/data/data/com.termux/files/home/sd/DCIM/Camera/ pics/
ip=192.168.1.140; rsync -W --ignore-existing --remove-source-files -av --progress --include='*.[mM][pP]4' --exclude='*' -e 'ssh -p8022' ed@$ip:/data/data/com.termux/files/home/sd/DCIM/Camera/ vids/


# sync folders: MOVE missing files from current to oth, leave dupes alone

oth=/home/ed/n5x-2018-0221/Download; rsync -W --ignore-existing --remove-source-files -av --progress . "$oth"/


# sync folders: copy missing files from current to oth, leave dupes alone

oth=/mnt/sda_ov/n5x/pics; rsync -mW --ignore-existing -av --progress --include '*/' --include='*.[jJ][pP][gG]' --include='*.[jJ][pP][eE][gG]' --exclude='*' . "$oth"/


# move all files named IMG_YYYYMMDD_* or PANO_YYYYMMDD_* into subfolders of YYYYMM

find -maxdepth 1 -type f | sed -r 's/[^_-]*[_-]//;s/..[_-].*//' | uniq | sort | uniq | grep -E '^[0-9]{6}$' | while IFS= read x; do mkdir -p -- "$x"; mv -n -- {IMG,PANO}_$x* "$x"; done


# compare md5sums made on phone with md5sums at destination after the above two

find -maxdepth 1 -type f -exec md5sum '{}' \+ | tee sums-phone
cat ~/Desktop/sums-phone | sed -r 's/  ../  /' | while IFS=' ' read -r rsum fn; do find -name "$fn" -exec md5sum '{}' \+ ; printf '%s  %s\n' "$rsum" "./$fn"; done | tee ~/Desktop/sums-comp2
cat ~/Desktop/sums-comp2 | uniq -uw32


# compare files that exist in both locations, print warning if file in current dir is twice as large as oth

# add any argument for quick compare (skip data comparison on same size)
function c2ext() {
oth=/mnt/sda_ov/n5x/pics; printf '\033[1;30m'; find -type f \( -iname \*.jpg -or -iname \*.jpeg \) | sort | while IFS= read -r x; do l=$(stat -c%s "$x"); r=$(stat -c%s "$oth/$x"); (( l * 2 < r )) && { echo -n /; continue; }; (( l > r * 2 )) && { printf '\n\033[1;31munthumb: \033[37m%s\033[1;30m\n' "$x"; continue; };  ok=''; [ -z "$1" ] && [ $l -eq $r ] && ok=1 || ok='';  [ -z "$1" ] || { cmp "$x" "$oth/$x" >/dev/null 2>/dev/null && ok=1 || ok=''; };  [ $ok ] && echo -n . || printf '\n\033[0;33m%s (%d / %d )\033[0m\n' "$x" "$l" "$r"; done; printf '\033[0m'
}


# for each file $x in $dir, either symlink file from $oth/pics/$x to $oth/gallery/$dir, or copy file to $oth/pics/$x if 404

# gray = link exists,  green = link made,  yellow = file copied,  red = fuck fuck fuck
mkgallery() { oth=/mnt/sda1_ov/n5x/; gal=$oth/gallery/"$1"; mkdir -p "$gal"; kind=pics; filter=.jp; for n in 1 2; do find "$1" -maxdepth 1 -type f -iname \*$filter\* | sed -r 's@.*/@@' | while IFS= read -r x; do [ -e "$gal/$x" ] && { printf '\033[1;30m%s\033[0m\n' "$x"; continue; }; [ -e "$oth/$kind/$x" ] || { printf '\033[33m%s\033[0m\n' "$x"; cp -pv "$1/$x" "$oth/$kind/"; }; [ -e "$oth/$kind/$x" ] && { printf '\033[32m%s\033[0m\n' "$x"; (cd "$gal"; ln -s ../../$kind/"$x" .); continue; }; printf '\033[1;31m%s\033[0m\n' "$x"; done; kind=vids; filter=.mp; done; }


# compress phone pics (TODO: paralleli.sh)

find -maxdepth 1 -iname img_\*.jpg | sort | grep -E 'IMG_20170816_191828.jpg' -B100000 | while IFS= read -r fn; do mv -- "$fn" comp2/ ; done
cores=4; used=0; mkdir comp; while IFS= read -r x; do echo "$cores/$used  $x"; convert "$x" -scale 25% -quality 90 comp/"$x" & used=$((used+1)); [[ $used -ge $cores ]] && { wait; used=0; }; done < <( find -maxdepth 1 -type f -iname \*.jpg | sed -r 's@^\./@@' | grep -E '^IMG_|^PANO_' | sort )


# compress screenshots (TODO: paralleli.sh)

mkdir comp; find -mindepth 1 -maxdepth 1 -iname \*.png | sort | while IFS= read -r x; do jpg="comp/${x%.*}.jpg"; pn2="comp/$x"; [ -e "$jpg" ] || [ -e "$pn2" ] && continue; printf '%s -> %s\n' "$x" "$jpg"; dim=$(identify -format "%w\n%h\n" "$x" | sort -n | tail -n 1); [ $dim -lt 1024 ] && xc="$x" || { xc=".50p.png"; convert -scale 50%x50% "$x" "$xc"; }; convert -quality 80 "$xc" "$jpg" & pngquant --output "$pn2" --skip-if-larger "$xc"; wait; [ -e "$pn2" ] || cp -pv "$x" "$pn2"; touch "$jpg" -r "$x"; touch "$pn2" -r "$x"; s1=$(stat -c%s "$pn2"); s2=$(stat -c%s "$jpg"); s1=$((s1*2)); s2=$((s2*3)); [[ $s2 -gt $s1 ]] && rm "$jpg" || rm "$pn2"; done


unorganized

# monitor nginx log for unique visitors

tail -Fn 1000 /var/log/nginx/best.website.access.log | awk '{ a=$1; b=$4; c=$0; sub(/^[^\]]*\] "/, "", c); sub(/".*/, "", c); printf "%17s %s %s\n", a, b, c }' | uniq -w 17


# generate jpegs until base64 sha1sum matches (really don't recall the purpose of this one)

touch run; for y in {20..200}; do for x in {32..320}; do [[ -e run ]] || break; convert tmp.png -scale ${x}x${y}\! -quality 90 /dev/shm/file.jpg; sha1sum < /dev/shm/file.jpg | cut -c-40 | xxd -ps -r | base64 | cut -c-4 | tee /dev/stderr | grep -qE '/|\+' && cp /dev/shm/file.jpg b64-${x}x${y}.jpg; done; done
for fn in b64-*.jpg; do echo -ne "$fn  "; sha1sum < "$fn" | cut -c-40 | xxd -ps -r | base64 | cut -c-4; done | grep -F + | grep -F /


# store all of latin1 and cp437 (msdos codepage) to a text file which windows 7 notepad can read

{ printf '\xef\xbb\xbf'; for cp in latin1 cp437; do printf "%02x" {32..255} | xxd -r -p | iconv -f $cp; echo; done; } | tee latin1-and-cp437.txt


# read file from random point

tail -c +$(perl -e 'print int(rand('$(stat -c%s doot.opus)'))') doot.opus | ffplay -f ogg -


# split opus file by OggS magic

ov=0; while IFS= read -r v; do echo "$v"; cat doot.opus | tail -c +$((ov+1)) | head -c $((v-ov)) > opus.part/$ov; ov=$v; done < <( bgrep 4f6767530000 doot.opus | sed 's/.* //' | while IFS= read -r x; do printf '%d\n' 0x$x; done )


# play opus file from random segment, needs to be OggS aligned, prefixed with the first two OggS atoms from start of file (47+701 byte)

{ cat 0 47 ; ls -1 | sort -n | tail -n +$(perl -e 'print int(rand('$(ls -1 | wc -l)'))') | while IFS= read -r fn; do cat "$fn"; done; } | opusdec - - | ffplay -f s32le -ar 48k -


# dump local ldap in columns

# handles base64-encoded dn, but only the dn; other attribute values are printed as-is
slapcat -n1 -oldif-wrap=no | perl -e 'use strict; use warnings; use MIME::Base64; my $dn=""; my $c=""; while(<>) {chomp; if(/^dn:/) {$c="1;44";$dn=$_;$dn=~s/^dn:+ //;if(/^dn::/) {$dn=decode_base64($dn)}next} if(!/([^:]+):+ (.*)/){next} printf("\033[%sm%10s %20s %30s \033[0m\n",$c,substr($1,0,10),substr($2,0,20),substr($dn,0,30));$c="0"}'


# serialize specific values from local ldap

# for each entry below ou=users, print "uid cn homedir" if at least two of them are present in that entry
slapcat -n1 -oldif-wrap=no | perl -e 'use strict; use warnings; use MIME::Base64; my $dn=""; my @keys=qw/uid cn homedir/; my %lkey = map{$_=>1} @keys; my %vals=(); sub p {if (scalar keys %vals < 2) {return} my @ln=(); foreach(@keys) {if(exists $vals{$_}) {push @ln,"$_:$vals{$_}"}} print((join " ",@ln)."\n")} while(<>) {chomp; if(/^dn:/) {p;$dn=$_;$dn=~s/^dn:+ //;if(/^dn::/) {$dn=decode_base64($dn)} %vals=();next} if($dn!~/,ou=users,/){next} if(!/([^:]+):+ (.*)/){next} if(!exists $lkey{$1}){next} $vals{$1}=$2} END {p}'


# display pictures in sixel terminals

# adjust rh to your row height
f=74754459_p0.png; max=720x720; rh=13; rm -f sixel-*.ppm; convert -resize $max\> -crop x$rh "$f" sixel-%d.ppm; n=0; while true; do f=sixel-$n.ppm; [ -e $f ] || break; (ppmquant 256 <$f | ppmtosixel) 2>/dev/null | grep -E '^[#-]' | tr '\n' '\r'; echo; n=$((n+1)); done | while IFS= read -r ln; do printf '\033Pq%s\033\\' "$(printf '%s\n' "$ln" | tr '\r' '\n')"; done


ffmpeg garbo

# add tracknumber suffix to all files with a tracknumber tag

for f in *; do tn="$(ffprobe -hide_banner -show_format -- "$f" 2>&1 | awk '/^TAG:(tracknumber|trck|trkn|track)=/{sub(/.*=[^0-9]*/,"");sub(/[^0-9]+$/,"");printf "%02d", $1;exit}')"; [ -z "$tn" ] || mv -- "$f" "$tn. $f"; done 


# rename all files from bandcamp to sane format

# from: "artist - album - trackno title"
# to: "trackno. artist - title"
~/bin/rename -v 's/(.*) - (.*) - ([0-9]{2}) (.*)\./$3. $1 - $4./' *
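#
# assuming ~/bin/rename is the perl File::Rename script, -n previews the renames without doing them
~/bin/rename -v -n 's/(.*) - (.*) - ([0-9]{2}) (.*)\./$3. $1 - $4./' *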


# convert all wav to flac with conditional downsampling

# max sample depth 16 bit, max sample rate 48000 hz
find -iname '*.wav' | while IFS= read -r x; do flac="${x%.*}.flac"; [ -e "$flac" ] && continue; printf '\033[36m[ %s ]\033[0m\n' "$x"; cargs="$(ffprobe -hide_banner -show_streams -select_streams a:0 -- "$x" 2>/dev/null | awk '{sub(/=flt/,"=f32")} !/^sample_(fmt|rate)=[suf]?[0-9]+$/{next} {sub(/sample_/,"");sub(/=[suf]?/," ");v[$1]=$2} END {if (v["rate"]>48000) printf "%s ", "-af aresample=48000:resampler=soxr"; if (v["fmt"]>16) printf "%s", "-sample_fmt s16"}')"; ffmpeg -nostdin -v warning -i "$x" -map 0 $cargs -c copy -c:a flac "$flac" && echo ok || echo ERR; done 2>&1 | tee conv.log; echo -n "errors: "; grep -E ^ERR conv.log | wc -l 
#
# and optionally copy over the lastmod timestamp afterwards
find -iname '*.flac' | while IFS= read -r x; do wav="${x%.*}.wav"; [ -e "$wav" ] && touch -r "$wav" -- "$x"; done 


# grab metadata for all wav files

find -iname '*.wav' | while IFS= read -r x; do printf '// %s\n' "$x"; ffprobe -hide_banner -show_format -show_streams -- "$x" 2>&1 && echo ok || echo ERROR $?; echo; done | tee original-wav-metadata.txt 


# convert all flac/wav/mp3 files below $PWD into $base/relpath/file.ogg

# filter out /ed/*.mp3 and replace all occurrences of . in relpath with _
rm -rf /web/www/ocv.me/musicpreview; base=/web/www/ocv.me/musicpreview; find -type f | grep -aiE '\.(flac|wav|mp3)$' | grep -aviE '/ed/.*\.mp3$' | while IFS= read -r flac; do dir="${flac%/*}"; fn="${flac##*/}"; fn="${fn%.*}"; pdir="${dir//\./_}"; printf '[%s] [%s]\n' "$dir" "$fn"; ogg="$base/$pdir/$fn.ogg"; [ -e "$ogg" ] && continue; mkdir -p "$base/$pdir"; LD_LIBRARY_PATH=/home/ed/pe/aotuv/lib/ nice -n 20 ionice -c 3 ffmpeg -i "$flac" -map 0:a -acodec libvorbis -q:a 2 "$ogg" < /dev/null; done


# find all zip files with audio files in them, unpack to /dev/shm/zt and convert to $base/zip-path/relpath/file.ogg

# check if you have enough space in /dev/shm:  find -iname \*.zip -printf '%s %p\n' | sort -n
base=/web/www/ocv.me/musicpreview; exts='m4a|mp3|m2a|ogg|oga|opus|flac|alac|ac3|ape|mka|mpc|tta|tak|ape|wv|aiff?|amr|au|wav|gsm'; find -type f -iname \*.zip | while IFS= read -r zip; do printf '[%s]\n' "$zip"; unzip -l "$zip" | grep -aqiE "\.($exts)$" || continue; rm -rf /dev/shm/zt; mkdir -p /dev/shm/zt; azip="$(realpath "$zip")"; (cd /dev/shm/zt; unzip "$azip"; find -type f | grep -aiE "\.($exts)$" | while IFS= read -r flac; do dir="$zip/${flac%/*}"; fn="${flac##*/}"; fn="${fn%.*}"; pdir="${dir//\./_}"; printf '[%s] [%s]\n' "$dir" "$fn"; ogg="$base/$pdir/$fn.ogg"; [ -e "$ogg" ] && continue; mkdir -p "$base/$pdir"; LD_LIBRARY_PATH=/home/ed/pe/aotuv/lib/ nice -n 20 ionice -c 3 ffmpeg -i "$flac" -map 0:a -acodec libvorbis -q:a 2 "$ogg" < /dev/null; done); rm -rf /dev/shm/zt; done


# ffprobe all audio files to a log file

find -xdev | grep -E '\.(m4a|mp3|m2a|ogg|oga|opus|flac|alac|ac3|ape|mka|mpc|tta|tak|ape|wv|aiff?|amr|au|wav|gsm)$' | while IFS= read -r x; do printf '\n%s\n\n' $(head -c 80 /dev/zero | tr '\0' '-'); ffprobe -hide_banner -- "$x" 2>&1 >/dev/null || sleep 0.05; done | tee /dev/stderr | pigz -c > /dev/shm/ffprobes


# collect all unique metadata keys from the ffprobe log file and show each with an example value

pigz -d < ffprobes | awk 'BEGIN {mind=-1} {ind=$0;sub(/[^ ].*/,"",ind);ind=length(ind)} ind<mind {mind=-1} $1=="Metadata:" {mind=ind+2} ind==mind {v=$0;sub(/[^:]*:/,"",v);printf "%32s %s\n", $1, v}' | sort -r | uniq -cw33 | sort -n


# for all m4a files: show size, length, kbps, codec config, filename, and finally sort the list by kbps

find -iname \*.m4a | while IFS= read -r x; do ffprobe -hide_banner "$x" 2> /dev/shm/ffinfo < /dev/null || break; sz=$(stat -c%s -- "$x"); awk '/Duration: [0-9:\.]+, start: / {t=$0;sub(/.*Duration: /,"",t);sub(/, .*/,"",t);h=t;m=t;s=t;sub(/:.*/,"",h);sub(/[^:]+:/,"",m);sub(/:.*/,"",m);sub(/.*:/,"",s);t=(h*60+m)*60+s;kbps=('$sz'/t)/128;printf "%7.3f kbps, %8d byte, %7.2f sec, ",kbps,'$sz',t}' < /dev/shm/ffinfo; grep -E '^[^a-zA-Z0-9]+encoder[^a-zA-Z0-9]*: ' < /dev/shm/ffinfo | sed -r 's/[^:]+: //' | tr '\n' ', '; printf ' %s\n' "$x"; done | tee /dev/shm/ffsum; sort -n < /dev/shm/ffsum


# fmd5 - grab frame checksums from a set of media files for comparison with another copy (i think?? todo)

nfiles=$(find -maxdepth 1 -type f | wc -l); nleft=$nfiles; printf '\n\n'; while IFS= read -r fn; do printf '\033[A%s files total, %s files left\n' $nfiles $nleft; nleft=$((nleft-1)); nice -n 19 ionice -c 3 ffmpeg -y -i "$fn" -f framemd5 - </dev/null 2>/dev/null >/dev/shm/fmd5 && ext=fmd5.ok || ext=fmd5.fuck; cat /dev/shm/fmd5 | awk '{print $6}' | cut -c-16 | xxd -r -p > ~/"fmd5/$fn.$ext"; done < <( find -maxdepth 1 -type f )


# compare fmd5

numfiles=$(find -maxdepth 1 -type f | wc -l); while IFS= read -r x; do printf '\033[A%s \n' $numfiles; numfiles=$((numfiles-1)); [[ -e ../fmd5.tower/"$x" ]] && cmp ../fmd5.tower/"$x" "$x" ; done < <( find -maxdepth 1 -type f )


# hardsub blureis with embedded pgs image subs

ffmpeg -ss 600 -i jall.mkv -filter_complex "[0:v][0:s]overlay[v1]; [v1]scale=512:trunc(ow/a/2)*2 [v]" -map "[v]" -vcodec libx264 -crf 32 -t 30 /web/www/ocv.me/hirune.mkv


# discard all subtitles in mkv file then extract the first subtitle track as a PGS (blurei imagesub)

ffmpeg -analyzeduration 100M -probesize 100M -i some.mkv -codec copy -map 0 -map -0:s -f matroska some-nosub.mkv
ffmpeg -analyzeduration 100M -probesize 100M -i some.mkv -codec copy -map 0:s:0 some.en.sup


# list supported demuxers and their common file extensions

ffmpeg -demuxers | awk '{print $2}' | while IFS= read -r x; do printf '\033[1;40m%24s \033[0m ' "$x"; ffmpeg -h demuxer="$x" </dev/null 2>&1 | grep -E 'Common extensions:' | sed -r 's/.*Common extensions: //;s/\.$//' | grep -E .. --color=never || echo; done


# draw frame-type, number and timestamp on top of video for slicing purposes

ttf="$(fc-match monospace -f '%{file}\n')"; ffmpeg -hide_banner -nostdin -y -i MDG_1373_MP4 -vf "drawtext=fontfile=$ttf: text='%{pict_type} %{n} %{pts}': x=(w-text_w)/2: y=8: fontcolor=white: fontsize=20: box=1: boxcolor=black@0.3: boxborderw=5" -vcodec libx264 -crf 24 -preset veryfast tmp.mkv


# locate and extract video segments with motion

ffmpeg -hide_banner -i "2020-08-04 08-49-58.mkv" -map 0:v -c:v copy -f nut - | ffmpeg -y -hide_banner -v warning -i - -vf "mpdecimate, drawtext=fontfile=the.ttf: text='%{pict_type} %{n} %{pts\:hms}': x=(w-text_w)/2: y=8: fontcolor=white: fontsize=20: box=1: boxcolor=black@0.3: boxborderw=5, setpts=N/(15*TB)" -vcodec libx264 -crf 20 dedup.mkv


# get average fps

for f in *.mp4; do printf '%s ' "$f"; nice ffprobe -threads 0 -hide_banner -v fatal -select_streams v:0 -show_frames -show_entries frame=pkt_dts_time -of compact=p=0:nk=1 "$f" | awk '$1>ts{ts=$1} {n+=1} END {printf "%d frames, %.2f seconds, %.3f fps\n",n,ts,n/ts}'; done


# find out-of-order frames

# awk expects "0.125000|0.125000|__" (pts|dts|flags)
# awk output  "dts_now  pts_now  offset  delta_dts  delta_pts  flags"
# 1st/2nd column yellow = keyframe
#     3rd column green  = pts/dts offset zero
#     3rd column yellow = present in the future
#     3rd column purple = decode in the future (whoa)
#     4th column green  = dts increased
#     4th column purple = dts decreased or stuck
#     5th column ****** = same except pts
#     6th column "K"    = keyframe
ptschk() { nice ffprobe -threads 0 -hide_banner -v fatal -select_streams v:0 -show_entries packet=pts_time,dts_time,flags -of compact=p=0:nk=1 "$1" | awk -F\| 'BEGIN {printf "\033[36m\n";p2=0;d2=0} {p=$1+0.0;d=$2+0.0;f=$3;gsub(/_/," ",f)} p==d && d>d2 && p>p2 {printf "\033[A%s %s\n",f,p; d2=d;p2=p;next}  {cf=(f~/K/)?33:0; cr=(d==p)?2:(d<p)?3:5; cd=(d>d2)?2:5; cp=(p>p2)?2:5; printf "\033[A\033[%dm%12.6f dts %12.6f pts \033[3%dm%8.3f diff \033[3%dm%8.3f \033[3%dm%8.3f\033[0m%4s\033[36m\n\n", cf,d,p,cr,p-d,cd,d-d2,cp,p-p2,f; d2=d;p2=p}'; }


# scan through media files for errors

# replaced by https://github.com/9001/usr-local-bin/blob/master/ffchk
ffchk_core() { local fn="$1"; pv -- "$fn" | { nice ffmpeg -y -hide_banner -nostdin -loglevel error -err_detect explode -xerror -i - -vcodec rawvideo -acodec pcm_s16le -pix_fmt yuv420p -f matroska /dev/null 2>&1 || echo err; } | sed '/./q' | grep -E . && return 1 || return 0; }
ffchk() { fn="$1"; printf '\n\033[36m/// %s\033[J\n\033[2A\033[33m' "$fn"; ffchk_core "$fn" && { printf '\033[0;30;42m o \033[0m \033[36m%s\033[0m\n' "$fn"; of=ok; } || { printf '\033[1;37;41m x \033[0m \033[36m%s\033[0m\n' "$fn"; of=err; }; printf '%s\n' "$fn" >> /dev/shm/ffchk.$of; }
# testcase: intentionally corrupt an mkv file
{ head -c 8000000 60sec.mkv; printf A; tail -c +8000002 60sec.mkv; } > 60sec-bad.mkv
rm /dev/shm/ffchk.*; for x in 60sec.mkv 60sec-bad.mkv; do ffchk "$x"; done | tee /dev/shm/ffchk.log
# full scan and verification of all media files inside and below current directory
rm /dev/shm/ffchk.*; find -type f | grep -iE '\.(mkv|webm|mp4|m4v|mov|h?264|avc|hevc|h?265|avi|ts|mpe?g2?|mpegts|mts2?|wmv|asf|rm|3gp|flv|hlv|vc1|m4a|mp3|m2a|ogg|oga|opus|flac|alac|ac3|ape|mka|mpc|tta|tak|ape|wv|aiff?|amr|au|wav|gsm|apng|webp|gifv)$' | sort | while IFS= read -r x; do ffchk "$x"; done | tee /dev/shm/ffchk.log
# NOTE: certain errors MAY be harmless such as  "Truncating packet of size %d to %d"  or even  "[h264] no frame!"


# scan through media files for errors (turbo edition)

# replaced by https://github.com/9001/usr-local-bin/blob/master/ffchk
# will only detect severe corruption -- replaces first line in the above
ffchk_core() { printf '\033[K'; { nice ffmpeg -y -hide_banner -nostdin -loglevel error -err_detect explode -xerror -i "$fn" -codec copy -f matroska /dev/null 2>&1 || echo err; } | sed '/./q' | grep -E . && return 1 || return 0; }
# create a testcase which this core should successfully detect
{ head -c 8000000 60sec.mkv; head -c 1024 /dev/zero; tail -c +8001026 60sec.mkv; } > 60sec-bad.mkv


# get max and mean volume

levels() { ffmpeg -hide_banner -nostdin -i "$1" -af volumedetect -c:a pcm_s16le -f null - 2>&1 | awk 'BEGIN {vmax=0;vmean=0} !/^\[Parsed_volumedetect_0 @ [0-9a-fx]+\] / {next} $4=="max_volume:" {vmax=$(NF-1)} $4=="mean_volume:" {vmean=$(NF-1)} END {print vmax, vmean}'; }
# do it for all files below current folder
find -type f | while IFS= read -r x; do printf '%5s %5s %s\n' $(levels "$x") "$x"; done


# normalize audio with rms

# fast, tries to hit a perceived loudness of tgt=-14 LUFS, guarantees no clipping
# does not affect dynamics so a single loud sample will make it miss
# uses the levels() function above
norm() { tgt=-14; read vmax vmean <<<$(levels "$1"); read emax emean gain <<<$(awk -v tgt=$tgt -v vmax=$vmax -v vmean=$vmean 'END {d=tgt-vmean; if (vmax+d>=0) {d=(vmax>=-0.1)?0:(-1*vmax)-0.1} printf "%s %s %s\n", vmax+d, vmean+d, d}' </dev/null); printf '\n%5s %5s %s\n%5s %5s %5s\n' $vmax $vmean "$1" $emax $emean $gain; ffmpeg -hide_banner -nostdin -v warning -i "$1" -map 0:a:0 -af volume=${gain}dB -c:a libopus -b:a 128k "$1.opus"; }
# normalize all files in current folder
find -maxdepth 1 -type f | while IFS= read -r x; do norm "$x"; done


# normalize audio with ebur128

# slow, broadcast standard, MAY COMPRESS DYNAMICS
# some tracks are destroyed: https://ocv.me/doc/unix/oneliners/normalize-ebur128.png
# target perceived volume I=-14 LUFS, compress dynamic range to LRA=11 LU, permit peaks up to TP=0 dBTP
# some (most?) broadcast studios require: I=-22, TP=-3 (-10 if brickwalled), LRA=18 or lower
# ebu_levels():
#   mandatory arg1: input filename, outputs FFmpeg filter arguments
#   ffmpeg prints json ("input_i" : "-13.53",), we need filter-args (measured_i=-13.53)
#   need: input_i, input_tp, input_lra, input_thresh, target_offset
# ebu_norm():
#   mandatory arg1: input filename,  optional arg2: output from ebu_levels()
#   note: if output is any other codec than opus, add -ar 44100 -sample_fmt s16
ebu_args="I=-14:TP=0:LRA=11"
ebu_levels() { f="$1"; ffmpeg -hide_banner -nostdin -i "$f" -map 0:a:0 -af loudnorm=print_format=json:$ebu_args -c:a pcm_s16le -f null - 2>&1 | awk -F\" 'BEGIN {printf "loudnorm=linear=true"} !/^[ \t]*"(input|target)_/ {next} {sub(/input/,"measured");sub(/target_/,"");printf ":%s=%s",$2,$4} END {print ""}'; }
ebu_norm() { f="$1"; filt="$2"; [ -z "$filt" ] && filt="$(ebu_levels "$f" | tee /dev/stderr)"; ffmpeg -v warning -hide_banner -nostdin -i "$f" -map 0:a:0 -af $filt:print_format=summary:$ebu_args -c:a libopus -b:a 128k "$f.opus"; }
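#
# example: normalize all files in the current folder (output lands next to each file as file.ext.opus)
find -maxdepth 1 -type f | while IFS= read -r x; do ebu_norm "$x"; done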



# visualize audio with mpv

mpv --force-window --demuxer=lavf --lavfi-complex='[aid1] asplit [ao], afifo, dynaudnorm=25, showcqt=fps=60:size=1080x1920:bar_h=128:axis_h=0:basefreq=40:endfreq=10000, transpose=1, vflip, format=yuv420p [vo]' https://ocv.me/stuff/railgun.mkv


# create spectrum image using ffmpeg

# outputs a 1920x1080 png (trim 70x50 topleft, 48x49 bottomright)
spek() { ffmpeg -hide_banner -y -v warning -i "$1" -filter_complex '[0:a:0]showspectrumpic=s=1756x1024,crop=1920:1080:70:50[o]' -map '[o]' -c:v png "${1%.*}.spec.png"; };


# realtime spectrogram

# make sure the showspectrum dimensions are powers of two otherwise performance takes a nosedive
# choose your OS below and use this to generate some noise:
ffplay -f lavfi sine=1000:2 -af volume=8,volumedetect,volume=0.5


# realtime spectrogram on linux

# optionally replace 'default' with a device from here: pactl list short sources
# use ffmpeg:
ffmpeg -f pulse -i default -filter_complex "[a:0]showspectrum=s=1024x576:fps=30:legend=1:slide=scroll:color=intensity:fscale=lin:orientation=horizontal:stop=3200,crop=1280:640,format=yuv420p[vo]" -map "[vo]" -f sdl -
# or use mpv:
mpv --force-window --lavfi-complex="[aid1]asetpts=PTS-STARTPTS,showspectrum=s=1024x576:fps=30:legend=1:slide=scroll:color=intensity:fscale=lin:orientation=horizontal:stop=3200,crop=1280:640,format=yuv420p[vo]" "av://pulse:default" 


# realtime spectrogram on windows

# note: more laggy and has higher latency than the linux ver -- compensate by reducing audio_buffer_size until it breaks
# first decide on a device to listen on:
ffmpeg -f dshow -list_devices 1 -i dummy
# then either use ffmpeg, recording from "Microphone (8- RODE NT-USB)"
ffmpeg -flush_packets 1 -flags low_delay -fflags nobuffer+fastseek+flush_packets -probesize 32 -analyzeduration 0 -f dshow -audio_buffer_size 50 -i "audio=Microphone (8- RODE NT-USB)" -filter_complex "[a:0]showspectrum=s=1024x192:fps=60:legend=1:slide=scroll:color=intensity:fscale=lin:orientation=horizontal:stop=6400,crop=1280:224,scale=h=ih*3,setsar=1/1,format=yuv420p[vo]" -map "[vo]" -f sdl -
# or use mpv, recording from "virtual-audio-capturer"
mpv --force-window "--lavfi-complex=[aid1]asetpts=PTS-STARTPTS,showspectrum=s=1024x192:fps=60:legend=1:slide=scroll:color=intensity:fscale=lin:orientation=horizontal:stop=6400,crop=1280:224,scale=h=ih*3,setsar=1/1,format=yuv420p[vo]" --demuxer-lavf-o=audio_buffer_size=50 "av://dshow:audio=virtual-audio-capturer" --framedrop=no --speed=1.1 --demuxer-thread=no --demuxer-cache-wait=no --cache-pause=no


# plot audio level

# (good for detecting muted regions in VODs)
# `aplot foo.mkv` shows the highest audio level within each d=10 seconds
# writes the resulting plot to foo.mkv.audioplot
# -90 (big bar) = audio fully muted
#  -0 (tiny bar) = audio full blast
aplot() { d=10; f="$1"; t=0; while true; do ffmpeg -hide_banner -v fatal -ss $t -i "$f" -map 0:a -c:a flac -t $d -f nut - | ffmpeg -hide_banner -i - -filter:a volumedetect -f null - 2>&1 | awk -v s=$t 'BEGIN {h=int(s/(60*60)); s-=h*60*60; m=int(s/60); s-=m*60; printf "%02d:%02d:%02d ",h,m,s; e1=1} !/^\[Parsed_volumedetect_[0-9] @ / {next} / n_samples: 0$/ {e2=1} / max_volume: / {e1=0; db=$(NF-1); n=int(-1*db); printf "%5s \033[1;46m%" n "s\033[0m\n", $(NF-1), ""} END {if (e1||e2) {print "eof";exit 1}}' || break; t=$((t+d)); done | tee "$f.audioplot"; }


# plot audio level v2

# ~2x faster, works on windows (msys2), uses ~7 MiB of /dev/shm/ (or $TMPDIR if not linux (or ./ if shell broken))
# https://ocv.me/doc/unix/oneliners/ffmpeg-audio-graph.png
# -90 (big bar) = audio fully muted,
#  -0 (tiny bar) = audio full blast
aplot() { d=10; bd=8; f="$1"; td=/dev/shm; touch $td/aplot.nut || td="${TMPDIR:-.}"; f1="$td/aplot.nut"; f2="$td/aplot2.nut"; rm -f "$f1" "$f2"; t=$((-1*d*bd)); printf 'reading %s\n' "$f"; while true; do wait; mv -f "$f2" "$f1" 2>/dev/null || true; t2=$((t+d*bd)); nice ffmpeg -y -hide_banner -v fatal -ss $t2 -i "$f" -map 0:a -c:a pcm_s8 -sample_rate 22050 -t $((d*bd)) "$f2" & [ $t -lt 0 ] && { t=$t2; continue; }; for ((t2=0; t2<$bd; t2++)); do ffmpeg -hide_banner -v fatal -ss $((t2*d)) -i "$f1" -c:a pcm_s8 -t $d -f nut - | ffmpeg -hide_banner -i - -filter:a volumedetect -f null - 2>&1 | awk -v s=$t 'BEGIN {h=int(s/(60*60)); s-=h*60*60; m=int(s/60); s-=m*60; printf "%02d:%02d:%02d ",h,m,s; e1=1; e2=1} !/^\[Parsed_volumedetect_[0-9] @ / {next} / n_samples: / {e2=0} / n_samples: 0$/ {e2=1} / max_volume: / {e1=0; db=$(NF-1); n=int(-1*db); c=n>40?1:6; printf "%5s \033[1;4%sm%" n "s\033[0m\n",$(NF-1),c,""} END {if (e1||e2) {print "eof";exit 1}}' || { rm -f "$f1"; break; }; t=$((t+d)); done; [ -e "$f1" ] || break; done | tee "$f.audioplot"; rm -f "$f2" "$f1"; }


# plot audio level v3

# pros: ~2x faster, less memory usage, less random access (better for remote files)
# cons: buggy! sporadically incorrect values (way too low) and the timestamps are a bit off
aplot() { d=10; attr=Peak_level; f="$1"; nice ffmpeg -y -hide_banner -i "$f" -map 0:a -af volumedetect,astats=metadata=1:length=1:reset=100:measure_perchannel=none:measure_overall=none+$attr,ametadata=print -c:a pcm_s8 -f null - 2>&1 | awk -v d=$d 'BEGIN {t0=-1; v=-100} function p(s,v) {h=int(s/(60*60)); s-=h*60*60; m=int(s/60); s-=m*60; w=(v>0)?0:-v; c=w<60?6:1; printf "%02d:%02d:%02d %6.1f \033[1;4%sm%" w "s\033[0m\n",h,m,s,v,c,""} /'$attr'=/ {sub(/.*[=]/,"");cv=($0=="-inf")?-100:$0; v=(cv>v)?cv:v; next} /pts_time:/ {sub(/.*:/,""); t=$0; if (t0==-1) {t0=t} t-=t0; if (t<nt) {next} while (t-nt>d/3) {p(nt,100); nt+=d} p(nt,v); nt+=d; v=-100; fflush()}' | tee "$f.audioplot"; }


# bitrate graph

# f = media file,  csz = chunksize (seconds),  sc = graph width (bigger=smaller)
# https://ocv.me/doc/unix/oneliners/ffmpeg-bitrate-graph.png
bitgr() { f="$1"; csz=60; sc=50; ffprobe -hide_banner -v warning -show_packets -show_entries packet=pts_time,size -of compact=p=0:nk=1 "$f" | awk -v csz=$csz -v sc=$sc -F\| 'BEGIN {nt=csz} function hts(s) {h=int(s/(60*60)); s-=h*60*60; m=int(s/60); s-=m*60; return sprintf("%02d:%02d:%02d",h,m,s)} function p() {sz/=128*csz; printf "%s %5d kbps \033[46m%" (sz/sc) "s\033[0m\n",hts(t),sz,""; t0=t; sz=0; while (nt<=t) {nt+=csz}} $1>t {t=$1} t>=nt {p()} {sz+=$2} END {p()}'; }


# check if mono or stereo

ffmpeg -i mono-as-stereo.mp3 -filter_complex '[0:a]channelsplit=channel_layout=stereo[L][R]; [R]volume=-1[invR]; [L][invR]amix=inputs=2,astats=measure_perchannel=none:measure_overall=none+Peak_level+RMS_level[out]' -map '[out]' -f null - 2>&1 | awk 'BEGIN {ret="bad-input"} !/RMS level dB:/{next} {ret="stereo"} $NF<-70{ret="mono"} END {print ret}'


# find big embedded album covers

find -type f | while IFS= read -r f; do s=$(ffmpeg -i "$f" -map 0:v:0 -c copy -f nut - 2>/dev/null | wc -c); printf '%d %s\n' "$s" "$f"; done | tee /dev/stderr | sort -n
#
# suggested replacement compression (f=filename)
ffmpeg -i "$f" -map 0:v:0 -c copy a.png &&
magick a.png -sampling-factor 4:4:4 -interlace Plane -quality 85% a.jpg &&
mv "$f" .a.mp3 && ffmpeg -i .a.mp3 -i a.jpg -map 0 -map -0:v -map 1:v -c copy "$f" && touch -r .a.mp3 "$f" && rm .a.mp3
#
# or just remove it
ffmpeg -i "$f" -map 0 -map -0:v -c copy new.mp3


# find files with excessive samplerate or sampledepth

find -type f | while IFS= read -r f; do inf="$(ffprobe -show_streams -select_streams a:0 "$f" 2>/dev/null | awk -F= 'BEGIN{r=0;f=0} /^sample_rate=/{r=$2} /^sample_fmt=/{f=$2} END{print r,f}')"; printf '%s %s\n' "$inf" "$f"; done | awk '$1>48000 || ($2!="s16" && $2!="fltp" && $2!=0)'
#
# and transcode it (example)
ffmpeg -nostdin -hide_banner -i "$f" -c copy -c:a flac -sample_fmt s16 -compression_level 8 .a.flac && touch -r "$f" .a.flac && mv -- .a.flac "$f"


# compare two videos

vcmp() { crop=':v] setpts=PTS-STARTPTS, crop=400:600:640:210'; ffmpeg -i "$1" -i "$2" -filter_complex "[0$crop [v1]; [1$crop [v2]; [v1][v2] hstack=inputs=2" -vcodec rawvideo -f nut - | mpv -fs -; stty echo; }
# vcmp vp9-cpu0-crf50.webm vp9-q160.nut
# vcmp vp9-q160{,-hwscale}.nut 


# vp9 encoding with libvpx-vp9 (software)

# i7-8700T (libvpx 1.8.1): 4.6 fps, 2280 kbps
# increase to 20 fps, 2750 kbps, similar visual quality for animated content: replace '-cpu-used 0 -crf 50' with '-cpu-used 5 -crf 48'
# note: yuv444p is possible but not recommended; speed and efficiency takes a nosedive
ffmpeg -hide_banner -y -i nostro-2016-10-umbrella-corp-10bit.mp4 -map 0:v -vcodec libvpx-vp9 -row-mt 1 -pix_fmt yuv420p -b:v 0 -cpu-used 0 -crf 50 vp9-cpu0-crf50.webm


# vp9 encoding with vp9_vaapi (intel gpu)

# i7-8700T: 101 fps, 4818 kbps
# roughly similar results to the libvpx encode; same amount of detail for animated content but vp9_vaapi is sharper (more contrast, more noise)
# for colorspace resampling in hw, replace 'format=nv12|vaapi,hwupload' with 'hwupload,scale_vaapi=format=nv12' (reduce cpu load on some realtime transcodes, lower quality, pointless when gpu is saturated)
ffmpeg -hide_banner -y -init_hw_device vaapi=foo:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device foo -i nostro-2016-10-umbrella-corp-10bit.mp4 -map 0:v -filter_hw_device foo -vf 'format=nv12|vaapi,hwupload' -vcodec vp9_vaapi -bsf:v vp9_raw_reorder,vp9_superframe -compression_level 1 -bf 1 -global_quality 160 -f matroska vp9-vaapi.mkv 


# vp8 encoding with libvpx (software)

# i7-8700T (libvpx 1.8.1): 19 fps, 2056 kbps
# increase to 95 fps, 3618 kbps, similar visual quality for animated content: replace '-cpu-used 0 -crf 60' with '-cpu-used 5 -crf 45'
ffmpeg -hide_banner -y -i nostro-2016-10-umbrella-corp-10bit.mp4 -t 60 -acodec libopus -b:a 128k -vcodec libvpx -b:v 0 -cpu-used 0 -crf 60 vp8-libvpx.webm


# vp8 encoding with vp8_vaapi (intel gpu)

# i7-8700T: 92 fps, 7464 kbps, similar visual quality to libvpx for animated content
ffmpeg -hide_banner -y -init_hw_device vaapi=foo:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device foo -i nostro-2016-10-umbrella-corp-10bit.mp4 -map 0:v -filter_hw_device foo -vf 'format=nv12|vaapi,hwupload' -vcodec vp8_vaapi -compression_level 1 -global_quality 60 -f matroska vp8-vaapi.mkv 


# screen capture

ffmpeg -f x11grab -r 30 -s 800x600 -i :0.0+32,53 -vf mpdecimate=lo=9001:hi=512 -pix_fmt yuv444p -vcodec libx264 -tune animation -preset veryfast -qp 0 -threads 0 lsw-$(date +%Y%m%d-%H%M%S).mkv; mpv $(ls -1tr lsw*.mkv | tail -n 1)
# `32,53` is the top-left offset (horizontal,vertical)
# `-qp 0` can be `-qp 20` to save space (some quality loss)
# `-vf mpdecimate` drops frames with difference <512 within any 8x8 block (eg. +/-128 in 4 pixels)
# preset ultrafast: 1.00x cpu, 1.00x filesize
# preset superfast: 1.49x cpu, 0.63x filesize
# preset veryfast:  1.71x cpu, 0.40x filesize
# preset faster:    2.55x cpu, 0.29x filesize
# preset fast:      2.58x cpu, 0.27x filesize
# preset medium:    3.14x cpu, 0.22x filesize
# on windows:
#   use `gdigrab -i desktop` instead of `x11grab`
#   if you need some screen activity to prevent excessive runs of no frames (which breaks mpv seeking), open another cmd and run `timeout 99999`


# screen capture with h264_vaapi

# i7-4770: decrease cpu load from 60% to 40% with vaapi colorspace conversion (single-sampled, less accurate, bad for text): replace 'format=nv12,hwupload' with 'hwupload,scale_vaapi=format=nv12', replace '-color_range tv' with '-color_range pc'
# replace '30' with fps
# replace '+0,0' with screen offset +x,y
# replace '1920x1080' with video resolution
# replace '25' with quality, lower=better/bigger
ffmpeg -hide_banner -y -vaapi_device /dev/dri/renderD128 -f x11grab -r 30 -s 1920x1080 -i :0.0+0,0 -vf format=nv12,hwupload -c:v h264_vaapi -qp 25 -color_range tv -f matroska x11grab-h264-vaapi.mkv


# screen casting

# start receiving on a box named main-screen
mpv --profile=low-latency --untimed udp://main-screen:4321/
# cast using hw-encoding on a macos laptop (~0.06sec latency)
ffmpeg -f avfoundation -pixel_format nv12 -capture_cursor 1 -framerate 60 -i 'Capture screen 0' -c:v h264_videotoolbox -realtime 1 -b:v 4M -coder 0 -f mpegts udp://main-screen:4321/
# or using x264 (slower, better colors)
ffmpeg -f avfoundation -pixel_format bgr0 -capture_cursor 1 -framerate 60 -i 'Capture screen 0' -c:v libx264 -tune zerolatency -preset ultrafast -crf 20 -pix_fmt yuv444p -f mpegts udp://main-screen:4321/


# screen casting on linux

mpv --profile=low-latency tcp://10.1.2.223:3737/ --untimed --opengl-glfinish=yes --opengl-swapinterval=0 --cache=no
# 0.034 sec latency
while true; do ffmpeg -nostdin -hide_banner -flush_packets 1 -fflags +nobuffer -flags +low_delay -f x11grab -framerate 60 -s 1920x1080 -i :0.0+0,0 -pix_fmt yuv444p -vcodec libx264 -tune zerolatency -preset ultrafast -g 30 -crf 32 -coder 0 -bf 0 -flags -loop -wpredp 0 -listen 1 -f nut tcp://0.0.0.0:3737/; sleep 0.2; done


# screen casting on linux over udp

# since a recent ffmpeg build made tcp super slow
while true; do nice ffmpeg -nostdin -hide_banner -flush_packets 1 -fflags +nobuffer -flags +low_delay -flags2 fast -f x11grab -framerate 60 -s $(xrandr | awk '/ connected /{sub(/\+.*/,"");print$NF}') -i :0.0+0,0 -flags2 fast -pix_fmt yuv444p -vcodec libx264 -tune zerolatency -preset ultrafast -g 30 -crf 32 -coder 0 -bf 0 -flags -loop -wpredp 0 -flags2 fast -f mpegts udp://10.1.2.157:3737 ; sleep 0.2; done
# receive on windows:
mpv --profile=low-latency udp://10.1.2.223:3737/ --untimed --opengl-glfinish=yes --opengl-swapinterval=0 --cache=no


# record webcam

# see alternatives below (keeping this to avoid killing links)


# record webcam on linux

# live test: figure out what audio device to use: try hw:0 hw:1 hw:2 etc, maybe hw:0,1 hw:1,1 and so on
ffplay -f alsa -i hw:0
#
# then get a list of formats offered by the camera
ffmpeg -f v4l2 -list_formats all -i /dev/video0
# which goes something like
#   Compressed:       mjpeg :          Motion-JPEG : 1280x720 800x600 640x480 320x240 160x120 640x360
#   Raw       :     yuyv422 :           YUYV 4:2:2 : 1280x720 800x600 640x480 320x240 160x120 640x360
# 
# record to mkv and show a live preview
ffmpeg -y -hide_banner -f v4l2 -framerate 30 -video_size 1280x720 -input_format yuyv422 -i /dev/video0 -f alsa -i hw:0 -map 0 -pix_fmt yuv420p -vcodec libx264 -preset ultrafast -crf 5 -acodec flac -f matroska webcam.mkv -map 0:v -pix_fmt yuv420p -f yuv4mpegpipe - | mpv -profile=low-latency -
# WEBCAM: -f v4l2 -framerate 30 -video_size 1280x720 -input_format yuyv422 -i /dev/video0
# AUDIO:  -f alsa -i hw:0
# CODEC:  -vcodec libx264 -preset ultrafast -crf 5 -acodec flac -f matroska webcam.mkv
# MPV:    -map 0:v -pix_fmt yuv420p -f yuv4mpegpipe -
#
# then compress the recording
ffmpeg -i recording.mkv -vcodec libx264 -crf 20 -preset slower -acodec libvorbis -q:a 5 recording-compressed.mkv


# record webcam on mac/osx

# live test: figure out what devices to use
ffmpeg -f avfoundation -list_devices 1 -i ''
#
# permit camera access for Terminal inside System Preferences » Security & Privacy » Privacy » Camera
# then record to mkv and show a live preview, src=video:audio, 0 is the first audio device (usually Microphone)
# (note the preview may have low FPS but the recording will be smooth)
# (note that audio disappears or becomes a mess if your computer can't keep up so use a smaller resolution if necessary, also https://trac.ffmpeg.org/ticket/4514 )
src="FaceTime:0"; ffmpeg -y -f avfoundation -video_size 1280x720 -framerate 30 -pix_fmt bgr0 -i "$src" -pix_fmt yuv420p -map 0:v -vcodec rawvideo -f sdl webcam -map 0:v -map 0:a -vcodec libx264 -preset ultrafast -crf 5 -acodec flac -sample_fmt s16 recording.mkv
#
# then compress the recording
ffmpeg -i recording.mkv -vcodec libx264 -crf 20 -preset slower -acodec libvorbis -q:a 5 recording-compressed.mkv


# encode h264

ffmpeg -y -i VID_20161231_232751.mp4 -vcodec libx264 -preset slow -crf 25 -vf hqdn3d=luma_spatial=2,scale=848:480 -acodec copy korsk-live-1.mp4


# encode webm (TODO: this is bad, fix it)

ffmpeg -y -i VID_20161231_232751.mp4 -vcodec libvpx -deadline good -speed 1 -b:v 2M -crf 36 -vf hqdn3d=luma_spatial=2,scale=848:480 -acodec libvorbis -q:a 3 korsk-live-1.webm


# stabilize videos (TODO: this multithreading is awful, fix with paralleli.sh or something)

rm -- *.webm; rm job*.sh; export jobs=2; job=0; ls -1 VID_20161105_* | while IFS= read -r fn; do job=$((job+1)); [ $job -gt $jobs ] && job=1; printf 'export LD_LIBRARY_PATH=/home/ed/pe/ffmpeg/lib/;\n/home/ed/pe/ffmpeg/bin/ffmpeg -y -i "%s" -vf vidstabdetect=result="'trf$job.trf'" -f null -\n/home/ed/pe/ffmpeg/bin/ffmpeg -y -i "%s" -vf vidstabtransform=input="'trf$job.trf'",unsharp=5:5:0.8:3:3:0.4,scale=848:480 -vcodec libvpx -acodec libvorbis -speed 0 -b:v 2M -crf 45 -q:a 4 -threads 3 "%s.stab.webm"\n/home/ed/pe/ffmpeg/bin/ffmpeg -y -i "%s" -vf scale=848:480 -vcodec libvpx -acodec libvorbis -speed 0 -b:v 2M -crf 45 -q:a 4 "%s.webm"\n' "$fn" "$fn" "$fn" "$fn" "$fn" >> job$job.sh; done ; chmod 755 job*.sh


# encode cdparanoia to flac and mp3

rename 's/.cdda//' -- *; ls -1 -- *.wav | while IFS= read -r x; do flac "$x"; done ;  find -iname \*.flac | while IFS= read -r x; do printf "ffmpeg -i '%s' -vcodec libmp3lame -q:a 0 '%s'\n" "$x" "$(printf '%s' "$x" | sed -r 's/flac$/mp3/')"; done | sort > encode.sh ; chmod 755 encode.sh ; ./encode.sh ; mkdir flac ; mv -- *.flac flac ; mkdir mp3 ; mv -- *.mp3 mp3 ; rm -- *.wav encode.sh


# compare lame qualities by generating spectrograms of the audio difference (i think?? todo)

mkdir dec delta; rm d* e* m* o* dec/* delta/*; { printf "ffmpeg -y -i track09.flac orig.wav; "; for x in 320 {00..10}; do xn="$(echo "$x" | sed 's/^0//')"; [ "x$x" == "x320" ] && arg="" || arg="-V $xn"; printf "lame --preset insane -q 0 $arg --noreplaygain orig.wav e$x.mp3; ffmpeg -y -i e$x.mp3 d$x.wav; sox d$x.wav -c 1 -t sox - gain -n -B | sox -V -t sox - -n spectrogram -x 1820 -y 1025; mv spectrogram.png dec/$x.png; sox -v -1 d$x.wav d${x}i.wav ; sox -v 1 d${x}i.wav -m -v 1 o$x.wav m$x.wav ; sox m$x.wav -c 1 -t sox - gain -n -B | sox -V -t sox - -n spectrogram -x 1820 -y 1025; mv spectrogram.png delta/$x.png\n"; done; } > enc.sh ; chmod 755 enc.sh ; ./enc.sh


# extract all scene changes to png files, and print timestamps / byte offsets

ffmpeg -i some.mp4 -filter_complex "select='gt(scene\,0.3)',showinfo" -qscale:v 2 -vsync 0 %03d.png


# print position, frame number, and timestamp (dts and absolute) of every I-frame (keyframe)

keyframes() { ffprobe -threads 0 -select_streams v -show_frames -show_entries frame=pict_type,best_effort_timestamp_time,coded_picture_number,pkt_pos "$1" | awk -F= 'BEGIN{t0="a"} t0=="a"&&$1=="best_effort_timestamp_time"{t0=$2} {e=0} $0=="[/FRAME]"{e=1} e&&v["pict_type"]=="I"{t=v["best_effort_timestamp_time"]; m=int(t/60);s=t-m*60; t2=t-t0;m2=int(t2/60);s2=t2-m2*60; printf "%15s %6s %11.3f %4d:%06.3f %11.3f %4d:%06.3f\n",v["pkt_pos"],v["coded_picture_number"],t,m,s,t2,m2,s2;fflush(stdout)} e{delete v;next} {v[$1]=$2}'; }
# old version, will fail if there is side data due to longstanding ffmpeg bug in the csv output
keyframes() { ffprobe -threads 0 -select_streams v -show_frames -show_entries frame=pict_type,best_effort_timestamp_time,coded_picture_number -of csv "$1" | awk -F, '$3=="I" {print $2, $4}'; }


# screenshot every I-frame (keyframe)

for fn in x264.mp4 x265.mp4; do ffmpeg -y -i "$fn" -vf 'select=eq(pict_type,PICT_TYPE_I)' -vsync vfr "$fn.%04d.png"; done


# screenshot every second of a 1080p letterboxed 2.39:1 video from 1 minute on

ffmpeg -ss 1:00 -i some.mkv -vf 'crop=1920:803:0:138' -r 1 -q:v 3 "some.%4d.jpg"
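# hedged helper: let cropdetect guess the crop values instead of hardcoding them (sample ~10s, keep the most frequent guess)
ffmpeg -ss 1:00 -i some.mkv -t 10 -vf cropdetect -f null - 2>&1 | grep -o 'crop=[0-9:]*' | sort | uniq -c | sort -n | tail -n 1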


# alternatively if your ffmpeg build segfaults when creating jpg screenshots but not png screenshots:

# terminal 1)
mkdir -p /dev/shm/sspng ss; rm -f /dev/shm/sspng/* ss/*; for x in some.mkv; do ffmpeg -ss 1:00 -i "$x" -r 0.1 "/dev/shm/sspng/icmp.%8d.$x.png"; done
# terminal 2)
while true; do du -sh /dev/shm/sspng/; find /dev/shm/sspng -type f -printf '%T@ %p\n' | sort -n | sed -r 's/[^ ]* //' | head -n -1 | while IFS= read -r x; do fn="${x##*/}"; printf '%s\n'  "$fn"; convert -quality 96% "$x" ss/"$fn.jpg" && rm "$x"; done; sleep 5; done


# get distance between each scene change in bytes

ffmpeg -y -i some.mp4 -filter_complex "select='gt(scene\,0.3)',showinfo" -qscale:v 2 -vsync 0 -f crc /dev/null 2>&1 | stdbuf -oL tr '\r' '\n' | stdbuf -oL grep 'plane_checksum:' | stdbuf -oL sed -r 's/.* pos: *//;s/ .*//' | while read x; do echo $((x-o)); o=$x; done


# find compatible pixel formats for codec

ffmpeg -pix_fmts 2>/dev/null | grep -E '^.O' | sed 's/^[^ ]* //;s/ .*//' | while IFS= read -r fmt; do ffmpeg -y -i whatever.mp4 -c:v libvpx-vp9 -deadline good -quality good -cpu-used 3 -frame-parallel 1 -lag-in-frames 25 -tile-columns 4 -auto-alt-ref 1 -crf 2 -b:v 2M -pix_fmt "$fmt" -t 0.5 tmp.webm 2>&1 | grep -qE 'Incompatible pixel format' || printf "FOUND COMPAT FMT %s\n" "$fmt"; done


# split video into 1-second files

fn=file.h264
ffmpeg -i $fn -vf showinfo -f null - 2>&1 | grep -E '^\[Parsed_showinfo_0 @ 0x' | sed -r 's/.* n: *([^ ]*).* pts_time: *([^ ]*).* pos: *([^ ]*).* type: *([^ ]).*/\2 \3 \1 \4/;s/^([0-9]*) /\1000000 /;s/^([0-9]+\.[0-9]+) /\1000000 /;s/^([0-9]+)\.([0-9]{6})[0-9]* /\1\2 /;s/^-[^ ]* /0000000 /' | grep -E ' [IP]$' | tee $fn.tab
max=3; origin=1400000000000000; rm -rf segs; mkdir segs; epoch=$origin; lpos=0; lnth=0; lseg='x'; cat $fn.tab | sed 's/^0*//' | while read time pos nth type; do time=$((time+origin)); [ $((nth-lnth)) -gt $max ] || [ $type == I ] && { echo "writing $epoch frames $lnth-$nth ($((nth-lnth))) bytes $lpos-$pos ($((pos-lpos))) ... next frame is $type"; seg=$epoch; [ $((nth-lnth)) -lt $((max-1)) ] && seg=$lseg; dd if=$fn of=segs/$seg skip=$lpos count=$((pos-lpos)) iflag=skip_bytes,count_bytes 2>/dev/null ; lseg=$seg; epoch=$time; lpos=$pos; lnth=$nth; }; done


# get crc32, pts and size of all video packets in stream...

fn="some.mkv"; t0=$(date +%s%N); sz=$(stat -c%s "$fn"); cat "$fn" | ffprobe -show_packets -show_data_hash crc32 -select_streams 0:v - | awk -F= 'BEGIN {expectsize='$sz'} /^\[\/PACKET\]$/ {printf "%s %s %s\n", hash, pts, size; tsize += size; nth = nth + 1; if (nth % 1024 == 0) {perc = tsize * 100.0 / expectsize; printf "%6.2f%%, %s packets, %s bytes\n", perc, nth, tsize > "/dev/stderr"}; pts=0; size=0} $1=="pts" {pts=$2} $1=="size" {size=$2} $1=="data_hash" {hash=substr($2,length($2)-7,length($2))} END {printf "processed %s packets, %s bytes\n", nth, tsize > "/dev/stderr"}' | pigz -c > "$fn.pkts"; t=$(date +%s%N); echo $t end; echo $t0 start; echo 000000$((t-t0)) | sed -r 's/0*(.*)(.{9})$/\1.\2 sec/'


# ...and count duplicate packets

pigz -d < "$fn".pkts | cut -c-8 | sort | uniq -cd | awk '{sum+=$1} END {print sum}'


# udp broadcast (superlow quality; adjust 480/ultrafast/36 if you are not streaming from tmux on a phone)

ffmpeg -re -i ~/sd/Movies/some.mkv -map 0:v -vf scale=480:-4 -vcodec libx264 -tune zerolatency -preset ultrafast -crf 36 -f mpegts udp://239.76.0.1:3310


# receiving the broadcast

ffplay udp://239.76.0.1:3310/


# fix videos with audio/video offset

# for example pixel3a videos where audio plays but video is frozen until the end of the file
ffprobe -hide_banner -show_packets -print_format csv -show_entries packet=pos,codec_type,pts_time,dts_time -- VID_20190616_015422.mp4
# spot the first packet from the delayed stream, for example: "packet,video,35.566633,35.566633,226311"
ffmpeg -itsoffset -35.566633 -i VID_20190616_015422.mp4 -i VID_20190616_015422.mp4 -map 0:v -map 1:a -codec copy VID_20190616_015422-fixed.mp4
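# hedged sketch to pull that first-packet timestamp out automatically (assumes the video stream is the delayed one, as in the pixel3a case)
off=$(ffprobe -v error -select_streams v -show_entries packet=pts_time -of csv=p=0 VID_20190616_015422.mp4 | head -n 1)
ffmpeg -itsoffset -$off -i VID_20190616_015422.mp4 -i VID_20190616_015422.mp4 -map 0:v -map 1:a -codec copy VID_20190616_015422-fixed.mp4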


# record vimeo livestreams (and other services where the m3u times out after 60min)

# run this in two terminals to record 59min into each file, with 1min overlap
while true; do find ts -newermt "$(date -d '57 minute ago')" | grep -q . && echo -n . && sleep 3 && continue; (d=$(date +%Y-%m%d-%H%M%S); echo $d > ts; mkdir $d; cd $d; timeout 59m python3 ../ytdl-tui.py https://player.vimeo.com/video/736727937); done
# TODO strip overlap and merge the parts into one file, maybe by dts/pts or keyframe checksums
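# rough sketch for the merge step (does NOT solve the overlap detection; assumes you trim the overlap by hand first and the parts share codecs -- part1/part2 are placeholders for whatever ytdl-tui.py produced)
printf "file '%s'\n" part1.mkv part2.mkv > parts.txt; ffmpeg -f concat -safe 0 -i parts.txt -c copy merged.mkv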



udp multicast

# note: udp multicast group 224.0.0.1 is a special case
#       where every machine on the network is subscribed
#       and unsubscribing is impossible  (use iptables)
#
# also you might need to
ip route add 224.0.0.0/4 dev eth0
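# (re the 224.0.0.1 note: an untested example of dropping that group with iptables instead)
iptables -I INPUT -d 224.0.0.1 -j DROP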


# udp multicast server: takes datagrams and replies with hostname

mask=192.168.0; group=224.1.0.1; group=239.255.0.1
socat -vvv STDIO UDP4-DATAGRAM:$group:4321,range=$mask.0/24


# udp multicast client: takes message on stdin and prints replies

mask=192.168.0; group=224.1.0.1; group=239.255.0.1
ip=$(ip addr | grep -F "inet $mask." | sed -r 's@[^\.]* ([^/]*)/.*@\1@' | head -n 1)
socat UDP4-RECVFROM:4321,ip-add-membership=$group:$ip,fork EXEC:hostname


# udp broadcast server: takes datagrams and replies with hostname

socat -vvv UDP4-RECVFROM:4321,broadcast,fork EXEC:hostname


# udp broadcast client: takes message on stdin and prints replies

mask=192.168.0
socat STDIO UDP4-DATAGRAM:$mask.255:4321,broadcast,range=$mask.0/24


# udp multicast full-duplex

mask=192.168.0; group=224.1.0.1; group=239.255.0.1
ip=$(ip addr | grep -F "inet $mask." | sed -r 's@[^\.]* ([^/]*)/.*@\1@' | head -n 1)
socat -vvv STDIO UDP4-DATAGRAM:$group:4321,bind=:4321,range=$mask.0/24,ip-add-membership=$group:$ip


# udp broadcast full-duplex

mask=192.168.0
socat -vvv STDIO UDP4-DATAGRAM:255.255.255.255:4321,bind=:4321,range=$mask.0/24,broadcast


# relay udp multicast, either from one network device to another, or from one multicast group to another -- test with the ffmpeg udp video broadcast

srcdev=eth0; dstdev=eth1; port=3310; srcgrp=239.76.0.1; dstgrp=239.141.0.1
dstaddr=$(ip addr | awk '$1=="inet" && / '"$dstdev"'$/ {print $2}')  # 192.168.1.237/26
dstprefix=${dstaddr##*/}  # 26
dstip=${dstaddr%/*}  # 192.168.1.237
dstbase=$(ipcalc -n $dstaddr | awk -F= '{print $2}')  # 192.168.1.192
printf '[%s:%s] %s => %s [%s:%s] range [%s/%s]\n' $srcgrp $port $srcdev $dstdev $dstgrp $port $dstbase $dstprefix
socat UDP4-RECV:$port,bind=$srcgrp,ip-add-membership=$srcgrp:$srcdev,reuseaddr UDP4-DATAGRAM:$dstgrp:$port,range=$dstbase/$dstprefix


# forked listen to tcp port, dump to stdout

socat TCP-LISTEN:43214,reuseaddr,fork -


# tcp client

socat - TCP:127.0.0.1:43214


# udp server

socat UDP-LISTEN:43214,reuseaddr -


# udp client

socat - UDP:127.0.0.1:43214


# minimal tcp chat client

# this expects a tcp chat server to connect to, for example:
ncat -l -p 41826
# in tmux, connect to a remote tcp socket (chat server):
ncat 192.168.0.137 41826
^b:pipe-pane -o "exec cat >>$HOME/log.tmux-$(date +%s)"
# in another window:
stdbuf -oL tail -Fn 100 log.tmux-1534058003 | while IFS= read -r ln; do printf '\033[0;34m%s\033[0m  %s\n' "$(date +%d,%H:%M:%S)" "$ln"; su -c "notify-send message" ed; done | tee -a ~/chatlog


prefix stdout lines with timestamps (different approaches sorted by performance)

# the generators: 2 million lines of "hi", and a slower one to test buffering

fgen() { t0=$(date +%s%N); yes hi | pv -i0.5 -Ss $((1000*1000*6)); t=$(date +%s%N); echo "(2*1000*1000)/(($t-$t0)/(1000*1000))" | tee /dev/stderr | bc | tee /dev/stderr; }
sgen() { for n in {1..20}; do printf 'hi\n'; read -t0.2; done; }
fgen | f >/dev/null
sgen | cat | f | cat
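# everyday use is just piping through whichever f you pick below, e.g.
make 2>&1 | f | tee -a build.log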


# gawk, int-seconds, 477 K/s unbuffered, 600 K/s BUFFERED

f() { awk '{print "\033[36m" strftime("%Y-%m%d-%H%M%S") "\033[0m " $0; system("")}'; }
# buffered: remove system("")


# python, int-seconds, 440 K/s unbuffered, 636 K/s BUFFERED

f() { python -uc 'import sys,time;[sys.stdout.write("".join(["\033[36m", time.strftime("%Y-%m%d-%H%M%S", time.localtime()), "\033[0m ", line])) for line in sys.stdin]'; }
# buffered: python -c


# python, milliseconds, 222 K/s unbuffered, 282 K/s BUFFERED

f() { python -uc 'import sys;from datetime import datetime;[sys.stdout.write("".join(["\033[36m", datetime.now().strftime("%Y-%m%d-%H\033[35m%M\033[36m%S.%f")[:-3], "\033[0m ", line])) for line in sys.stdin]'; }
# buffered: python -c


# perl, int-seconds, 225 K/s BUFFERED

f() { perl -pe 'use POSIX strftime; print strftime "\033[36m%Y-%m%d-%H%M%S\033[0m ", localtime'; }


# perl, milliseconds, 203 K/s BUFFERED

f() { perl -pe 'use POSIX strftime; use Time::HiRes gettimeofday; ($s,$ms)=gettimeofday(); $ms=substr(q(000).$ms,-3); print strftime "\033[36m%Y-%m%d-%H\033[35m%M\033[36m%S.$ms\033[0m ", localtime($s)'; }


# gnu-moreutils, int-seconds, 190 K/s unbuffered

f() { ts $'\033''[36m%Y-%m%d-%H%M%S'$'\033[0m'; }


# perl, int-seconds, 168 K/s unbuffered

f() { perl -pe 'use POSIX strftime; $|=1; select((select(STDERR), $|=1)[0]); print strftime "\033[36m%Y-%m%d-%H%M%S\033[0m ", localtime'; }
# https://unix.stackexchange.com/questions/26728/prepending-a-timestamp-to-each-line-of-output-from-a-command
# first $|=1 sets unbuffered stdout, second sets unbuffered stderr, select(select) restores stdout as write target


# perl, milliseconds, 154 K/s unbuffered

f() { perl -pe 'use POSIX strftime; use Time::HiRes gettimeofday; $|=1; select((select(STDERR), $|=1)[0]); ($s,$ms)=gettimeofday(); $ms=substr(q(000).$ms,-3); print strftime "\033[36m%Y-%m%d-%H\033[35m%M\033[36m%S.$ms\033[0m ", localtime($s)'; }


software specific

# SID-Wizard: running with dual-sid and $PWD mapped in

x64 -sidenginemodel 256 -sidstereo 1 -sidstereoaddress 0xd420 -residsamp 1 -fs8 . -device8 1 -autostartprgmode 0 -autostart SID-Wizard-2SID.prg 


# alpine qemu vm: setting up networking

cr="$(printf '\r\n')"; send="type --args 1 --delay 20 --clearmodifiers --"; xdotool sleep 0.2 search --name 'QEMU \(w7vm\)' windowactivate $send "root$cr" sleep 0.3 $send "setup-interfaces$cr${cr}192.168.17.3$cr$cr$cr${cr}service networking restart${cr}nc 192.168.17.1 43214 > /dev/null$cr"


# rtorrent: force tracker recheck for selected torrent and move to next

xdotool sleep 0.2 key alt+Tab; xdotool sleep 0.2 key Right Down Down Down Right shift+t Left Left Down alt+Tab; xdotool sleep 0.2 key Up


# upper: dump 10 largest files in directory to server over https

fn=xx; for n in {1..10}; do rm -- "$fn"; fn="$(ls -1Sr | tail -n 1)"; curl -F "f[]"=@"$fn" -H 'Connection: close' https://ocv.me/incoming/index.php | grep 'you upped' || break; sleep 0.2; done;


# mpv: watch video with hardware-accelerated decoding

mpv -fs -vo vaapi -hwdec=vaapi https://ocv.me/media/tos-4k.mov -ss 200


# mpv: watch video in a linux tty

res=$(fbset | awk '$1=="geometry"{printf "%sx%s",$2,$3;exit}')  # 1920x1080 or such
ffmpeg -f lavfi -i testsrc2=$res:r=60 -c:v rawvideo -pix_fmt rgb24 -f nut - 2>/dev/null | mpv -vo drm -


# feh: convert gps coordinates to google maps format

# from: N 60, 25, 39.21, E 7, 15, 12.76
#   to: N 60°25'39.21", E 7°15'12.76"
xclip -o | sed -r "s/.*([NS])[, ]+([0-9]+)[, ]+([0-9]+)[, ]+([0-9\.]+)[, ]+([EW])[, ]+([0-9]+)[, ]+([0-9]+)[, ]+([0-9\.]+).*/\1 \2°\3'\4\", \5 \6°\7'\8\"/"


# get gps coordinates from image

# produces: N 60°25'39.21", E 7°15'12.76"
# assumes exiftool emits [GPS Position : 60 deg 25' 39.21" N, 7 deg 15' 12.76" E]
exiftool IMG_20190724_162031.jpg | awk '!/^GPS Position +:/ {next} {gsub(/[:,]| deg |GPS Position/," ");printf "%s %s°%s%s, %s %s°%s%s", $4,$1,$2,$3,$8,$5,$6,$7}'


# retroarch-android: filter out non-default config values to a new minimal config file

# input file 1, stock config, "retroarch.cfg"
# input file 2, your config, "~/sd/res/cfg/retroarch.cfg.2"
cd ~/sd/Android/data/com.retroarch.aarch64/files && diff -wNarU0 <(sort retroarch.cfg) <(sort ~/sd/res/cfg/retroarch.cfg.2) | tail -n +3 | grep -vE '^@@ ' | grep -E '^\+' | cut -c2- > ~/sd/res/cfg/retroarch.cfg.2d


git

# find commit that a (possibly modified) file is from

git log | awk '/^commit/{print$2}' | while read c;do echo $c; git checkout $c && diff -wNarU3 copyparty/httpcli.py ~/dev/copyparty/copyparty/httpcli.py.qui >diff.$c;done
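# hedged alternative that leaves the working tree alone: diff each commit's copy via git show, rank by smallest diff
git log --format=%H -- copyparty/httpcli.py | while read c; do printf '%7d %s\n' "$(git show $c:copyparty/httpcli.py | diff -wU0 - ~/dev/copyparty/copyparty/httpcli.py.qui | wc -l)" $c; done | sort -n | head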


virtualization: virtualbox

# change uuid

vboxmanage internalcommands sethduuid $vdi


# remove status bar

VBoxManage setextradata global GUI/Customizations noMenuBar,noStatusBar


# shrink disk

vboxmanage modifyhd test.vdi --compact


# convert dynamic/fixed

vboxmanage clonehd $old $new --variant Standard
vboxmanage clonehd $old $new --variant Fixed


# enlarge dynamic image

vboxmanage modifyhd $vdi --resize nMegabytes


# set vm time offset

vboxmanage modifyvm $vm_name --biossystemtimeoffset $((1000*60*60*24*31))


virtualization: qemu

# mount virtual disk image as physical partition

rmmod nbd
modprobe nbd max_part=8
qemu-nbd -c /dev/nbd0 hdd/sda
mount /dev/nbd0p2 /media/vm
umount /dev/nbd0p2
qemu-nbd -d /dev/nbd0


# defrag/optimize a virtual disk more efficiently than a regular fstrim

# (assumes 2nd partition is the one that requires crunching)
# (assumes partclone is not going to corrupt your filesystem)
cp orig.qcow2 opt.qcow2
qemu-nbd -c /dev/nbd0 orig.qcow2
qemu-nbd -c /dev/nbd1 opt.qcow2 --discard=unmap --detect-zeroes=unmap
blkdiscard -f /dev/nbd1p2
partclone.ntfs -b -s /dev/nbd0p2 -O /dev/nbd1p2
qemu-nbd -d /dev/nbd0
qemu-nbd -d /dev/nbd1
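# optional last step (untested): rewrite the image in case the qcow2 file itself didn't shrink; qemu-img skips unallocated clusters when converting
qemu-img convert -O qcow2 opt.qcow2 opt-packed.qcow2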


apkbuild stuff

# build new apk until it succeeds (superseded by vabuild, TODO)

cd "$(find ~/aports/testing/ -name APKBUILD -printf '%T@ %p\n' | sort -n | tail -n 1 | sed -r 's/[^ ]* //;s/.APKBUILD$//')"; ot=0; abuild checksum; while true; do t=$(stat -c%Y APKBUILD); read -u1 -r -t 0.2 && ot=0; [[ "$t" == "$ot" ]] && continue; ot=$t; abuild -rK && break; echo '@@@'; pwd; ls src/*; done; echo; tar -tvf "$(find ~/packages/ -type f -iname \*.apk -printf '%T@ %p\n' | grep -vE -- '-(dbg|dev|doc)-' | sort -n | sed -r 's/[^ ]* //' | tail -n 1 | tee /dev/stderr)"


# retrieve parameter for all uncommitted apkbuild files, sorted by package name

attr=pkgver; cd ~/dev/aports; git status | grep -E $'^\t.*/$' | cut -c2- | while IFS= read -r x; do printf '%-36s' "$x"; grep -E "^$attr=" "$x/APKBUILD"; done


# retrieve parameter for all uncommitted apkbuild files, sorted by parameter value

for attr in pkgname pkgver pkgrel pkgdesc url arch license depends depends_dev subpackages source builddir; do cd /rw/home/ed/aports; git status | grep -E $'^\t.*/$' | cut -c2- | while IFS= read -r x; do printf '%s %s\n' "$(grep -E "^$attr=" "$x/APKBUILD")" "$x"; done | sort | sed -r 's/(.*) (.*)/\2 \1/' | while IFS=' ' read -r x y; do printf '%-36s %s\n' "$x" "$y"; done; echo; done 


# backup uncommitted stuff

git status | grep -E $'^\t' | cut -c2- | sed -r 's/^modified: +//' | sort | while IFS= read -r x; do find "$x" -type f | grep -vE '/(src|pkg)/' | sort; done | tar -czvf ../aports-$(date +%Y-%m%d-%H%M%S).tgz -T-


# delete all packages except base and ccache

cat /etc/apk/world | grep -vE '^(alpine-|ccache)' | sed -r 's/>.*//' | while IFS= read -r x; do sudo apk del "$x"; done 


# better but untested

sudo apk del $(cat /etc/apk/world | grep -vE '^(alpine-|ccache)' | sed -r 's/>.*//' | tr '\n' ' ')


# find all APKBUILD files that depend on python and have a -dev subpackage, then...

[[ -e hits ]] || { find -name APKBUILD | sort | while IFS= read -r x; do grep -E '^[^a-zA-Z]*[a-zA-Z]*depends(_dev)?=' -HA3 "$x" | grep -iE 'python[23]?-dev' ; done | sed -r 's@/APKBUILD[:-].*@@' | uniq | while IFS= read -r x; do grep -E '^subpackages=' -HA5 "$x/APKBUILD" | grep -i dev; done | sed -r 's@/APKBUILD[:-].*@@' | uniq | tee hits; };


# ...browse results as a traversable file list (N/J to navigate, K to enter/exit vim)

{ cat ~/.vimrc; printf '%s\n' 'set nocompatible' 'nnoremap k :q!<CR>'; } > hits.vim; n=1; clear; while true; do printf '\033[H'; fn=$(head -n $n hits | tail -n 1); { head -n $((n-1)) hits | tail -n 20; printf '\033[1;33m%s\033[0m\n' $fn; tail -n +$((n+1)) hits | head -n 20; } | sed -r "$(printf 's/$/\033[K/')"; printf '\033[J'; read -u1 -n1 -r; [[ $REPLY == j ]] && n=$((n-1)); [[ $REPLY == n ]] && n=$((n+1)); [[ $REPLY == k ]] && vim -u hits.vim "$fn/APKBUILD"; done 


# compare apkbuild files between local and remote

d1=~/dev/aports; d2=~/dev/ao/rw/home/ed/aports/; : >|/dev/shm/apkhits; for x in bchunk lsmash vamp-sdk zimg py2-pgen py3-pgen py-cython; do cd $d1; p=$(find -mindepth 2 -maxdepth 2 -name $x); echo $p | tee -a /dev/shm/apkhits; cd $p; dircmp.sh $d2/$p | grep -vE '^(pkg|src)/|^\.build'; done; echo; cat /dev/shm/apkhits; echo; cd $d1; echo $d2;


# compare all uncommitted recipes to a remote

remote=~/dev/ao/rw/home/ed/aports/; local=~/dev/aports; cd $local; git status | grep -E $'^\t.*/$' | cut -c2- | while IFS= read -r x; do printf '\n\n\n%s\n' "$x"; cd "$local/$x"; dircmp.sh "$remote/$x" | grep -vE ' \.buildtime' | tee /dev/shm/memes; grep -qE ' APKBUILD$' /dev/shm/memes && cdiff "$local/$x/APKBUILD" "$remote/$x/APKBUILD"; done


# find packages where name and directory differ

find -name APKBUILD | while IFS= read -r x; do name="$(grep -E '^pkgname=' "$x" | sed -r 's/[^=]*=//;s/^"(.*)"$/\1/')"; printf '%s\n' "$x" | grep -qF "/$name/" || printf '\033[36m%s \033[33m%s\033[0m\n' "$x" "$name"; done 


# find all files with substr in their name and tar -tvf them

find ~/packages/ -iname '*zimg*' | while IFS= read -r x; do printf '\n\n\033[35m%s\033[0m\n' "$x"; tar -tvf "$x"; done


alpine general

# list all tagged packages

apk policy \* | awk 'function chk() {if (tag&&inst) {printf "%s %s\n", tag, pkg}}   /^[^ ]/ {pkg=$1;next}   /^  [^ ]/ {tag=0;inst=0;next}   /@/ {tag=$1;chk()}   /db\/installed/ {inst=1;chk()}'


rescuing a mounted disk after you just now accidentally overwrote the partition header

# diffable recursive file listing, easy to read: modes timestamp filesize filepath -> linktarget

find /mnt/sda_ov/ -printf '%M %T@ %s %p -> %l\n' > /dev/shm/lext4


# diffable recursive file listing, sortable: filepath // linktarget [modes/size/owner:group] @timestamp

rfl() { find -type f -printf '%p // %l [%M/%s/%g:%u] @%T@\n' | sed -r 's/\.[0-9]+$//' | pv | sort; }


# use rfl to compare two folders replicated on two drives

# compare folders 1t and t5 which exist on both disks zq1 and sdg_ov
disks=(zq1 sdg_ov)
dirs=(1t t5)
for disk in "${disks[@]}"; do (cd "$disk"; for dir in "${dirs[@]}"; do (cd "$dir"; rfl > /dev/shm/rl."$disk.$dir"); done); done
for f in /dev/shm/rl.*; do sed -ri 's` \[[drwx-]{10}/([^]]+\] @[0-9]+$)` \[\1`' "$f"; done  # remove modes/
for dir in "${dirs[@]}"; do cdiff /dev/shm/rl.*."$dir"; done


# use the same kind of listing for a quick whole-disk comparison (think offsite backup)

for x in sda_ov sdb1_ov; do cd /mnt/$x; find '(' -not -type d ')' -printf '%p -> %l %s @%T@\n' | pv | sort > /dev/shm/$x; done
cdiff /dev/shm/sd{a,b1}_ov | less -R


# create lists of dying disk + your last backup and compare them to find files you haven't made a backup of yet

cat tr1.2 | sed -r 's@b1_ov/tr1@a_ov@' | sort > sdb


# diff the file listings to get a list of files you haven't made a backup of yet

diff sda sdb | grep -E 'sda_ov/n5x/pics/[^/]* ->' | sed -r 's/^[<>] [^ ]* [^ ]* [^ ]* //;s/ -> .*//' | grep -vE '/comp$' | while IFS= read -r x; do cp -pvR "$x" /home/ed/sda-rescue/pics/; done 


muxing subs/fonts from tv fansub with bluray video + vorbis audio (superceded by all-ffmpeg (TODO: find) or sushi.py (TODO: find))

# extract audio track from video and transcode to vorbis

for x in *.mkv; do ffmpeg -i "$x" -map 0:a -acodec libvorbis -q:a 4 "$x.ogg" < /dev/null; done


# then mix bd video, vorbis audio, tv everything else

mkvmerge -o "output.mkv" -A -S -T -M -B --no-chapters 'bluray.mkv' 'bluray.mkv.ogg' -D -A 'fansub.mkv'


ripping audio CDs with embedded tags in CP-932 (shift-jis)

# as root, collect cd info and rip the disc

cdparanoia -B; cd-info --no-cddb --no-cddb-cache --no-device-info --no-disc-mode --no-ioctl --no-header 2>&1 | tee cdinfo; chown -R ed.ed /home/ed/Music


# as regular user,

rename 's/.cdda//' -- *
ls -1 -- *.wav | while IFS= read -r x; do flac "$x"; done
rm -- *.wav
cat cdinfo | grep CD-TEXT -A10 | iconv -t CP819 | iconv -f CP932 > /dev/shm/tmp.cdtext
ntracks=$(grep -E '^CD-ROM Track List \(' cdinfo | sed -r 's/.* //;s/.$//'); for (( n=1; n<=$ntracks; n++ )); do na=1; while [ $na -lt 99 ]; do cat /dev/shm/tmp.cdtext | grep -E "^CD-TEXT for Track *$n:" -A$na | grep '^CD-TEXT' | wc -l | grep -qE '^1$' || break; na=$((na+1)); done ; cat /dev/shm/tmp.cdtext | grep -E "^CD-TEXT for Track *$n:" -A$na | grep -E '^.PERFORMER:' | head -n 1 | sed 's/[^:]*: //' | awk 'NR > 1 { print prev } { prev=$0 } END { ORS=""; print }' > /dev/shm/tmp.artist; cat /dev/shm/tmp.cdtext | grep -E "^CD-TEXT for Track *$n:" -A$na | grep -E '^.TITLE:' | head -n 1 | sed 's/[^:]*: //' | awk 'NR > 1 { print prev } { prev=$0 } END { ORS=""; print }' > /dev/shm/tmp.title; { stat -c %s /dev/shm/tmp.artist | grep -qE '^0$' && printf '' || { cat /dev/shm/tmp.artist; printf ' !@# '; }; cat /dev/shm/tmp.title; }; echo; done > list
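# hedged follow-up (untested): push the artist/title pairs onto the flacs with metaflac; assumes *.flac sorts in track order and every line in "list" contains the ' !@# ' separator
n=0; ls -1 -- *.flac | while IFS= read -r f; do n=$((n+1)); ln="$(sed -n "${n}p" list)"; a="${ln%% !@# *}"; t="${ln##* !@# }"; metaflac --remove-tag=ARTIST --remove-tag=TITLE --set-tag="ARTIST=$a" --set-tag="TITLE=$t" --set-tag="TRACKNUMBER=$n" "$f"; done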


# convert with deadbeef, then

name='whatever'  # withOUT trailing slash
for x in mp3 ogg; do scp -r "$x/$name" dn:/web/www/some.domain/music/$x/; done


rescuing deleted files from a unix filesystem by grepping the entire physical disk for parts of the file contents (bgrep is https://github.com/tmbinc/bgrep or one of its many forks)

# need hex strings of content that was near the top and bottom of the file

printf %s '-----END XLD SIGNATURE-----' | xxd -p
2d2d2d2d2d454e4420584c44205349474e41545552452d2d2d2d2d


# locate instances of that data on the disk

/home/ed/bin/bgrep 58204c6f73736c657373204465636f6465722076657273696f6e2032 numbergirl.dump ; echo ; /home/ed/bin/bgrep 2d2d2d2d2d454e4420584c44205349474e41545552452d2d2d2d2d numbergirl.dump
numbergirl.dump: 00000e23
numbergirl.dump: 00008745
numbergirl.dump: 00010d15

numbergirl.dump: 00005f12
numbergirl.dump: 0000f2cd
numbergirl.dump: 00017704


# print the results as start/stop ranges

printf '00000e23 00005f12\n00008745 0000f2cd\n00010d15 00017704\n0001933d 0001fb53\n00024b7a 000254e4\n'
00000e23 00005f12
00008745 0000f2cd
00010d15 00017704
0001933d 0001fb53
00024b7a 000254e4


# and collect those ranges into files

printf '00000e23 00005f12\n00008745 0000f2cd\n00010d15 00017704\n0001933d 0001fb53\n00024b7a 000254e4\n' | while read start end ; do o1=$(printf %d 0x$start); o2=$(printf %d 0x$end); len=$((o2+28-o1)); dd if=numbergirl.dump bs=1 skip=$o1 count=$len of=recovered.$o1 ; done
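# same extraction without bs=1 (much faster; skip_bytes/count_bytes keep the offsets byte-accurate, like the splitter further up)
printf '00000e23 00005f12\n00008745 0000f2cd\n00010d15 00017704\n0001933d 0001fb53\n00024b7a 000254e4\n' | while read start end ; do o1=$(printf %d 0x$start); o2=$(printf %d 0x$end); len=$((o2+28-o1)); dd if=numbergirl.dump of=recovered.$o1 bs=64k skip=$o1 count=$len iflag=skip_bytes,count_bytes ; done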


ssh from NAT box (ed) to NAT box (exci) through 3rd server (tower)

# alternative 1, if both source and destination are trusted (can ssh into tower):

# on server, set up a relay:
socat TCP-LISTEN:42827,reuseaddr,fork TCP:127.0.0.1:42826
# exci (target/connectee NAT) should run
ssh -f -N -T -R42826:127.0.0.1:22 exci@tower
# ed (source/connecting NAT) should run
ssh ed@tower -p 42827


# alternative 2, if the source/destination/both are untrusted (cannot ssh into tower):

# on public relay server, set up a relay:
while true; do socat TCP4-L:42835,reuseaddr TCP4-L:42836,reuseaddr; sleep 5; done
# exci (target/connectee NAT) should run
while true; do socat TCP4:127.0.0.1:22 TCP4:tower:42835; sleep 4; done
# ed (source/connecting NAT) should run
ssh root@tower -p 42836


memleak related

# calculate memory usage from maps file (TODO: getting ~/maps)

cat ~/maps | grep -E ' rw-p ' | sed -r 's/^([^ -]*)-([^ ]*) .*/0x\1 0x\2/' | awk --non-decimal-data '{v = $2 - $1; sum += v; print v} END {printf(fmt,sum)}' fmt="%'.f\n"
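# (re the TODO) the maps file comes from a live process, e.g. (someprog is a placeholder)
cat /proc/$(pidof someprog)/maps > ~/maps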


deprecated stuff (slow / unsafe / just bad)

# compress logs

ov="$(du -sh . ; date )"; ls -1 | grep -E '\.log$' | sed 's/\\/\\\\/g' | while IFS= read -r x; do echo -- "$x"; gzip -- "$x"; done ; echo "$ov"; date ; du -sh .


# compress logs in subfolders

ls -1 | grep -E '^20' | while IFS= read -r dir; do echo "$dir"; cd "/home/ed/inc/log/$dir" ; ls -1 | grep -E '\.log$' | sed 's/\\/\\\\/g' | while IFS= read -r x; do echo -n .; gzip -- "$x"; done; echo; done


# delete partial copies of folders

cat ../htc/Music/md5.htc | while IFS= read -r x;do fn="$(echo "$x" | sed -r 's/.{36}//')"; sum="$(echo "$x" | sed -r 's/(.{32}).*/\1/')"; ck="$(md5sum "$fn" | sed -r 's/(.{32}).*/\1/')"; [ "x$sum" == "x$ck" ] && rm "$fn"; echo "$fn"; done


# set up arch virt net

sleep 1; alias t='xdotool type --delay 1'; alias k='xdotool key --delay 1'; k slash d h c p Return c w s t a t i c Return Tab; t 'address 10.217.143.93'; k Return Tab; t 'netmask 255.255.255.0'; k Return Tab; t 'gateway 10.217.143.1'; k Return Tab Escape colon w q


# mumble logs to html (this was replaced by a python script which I should upload somewhere)

cat /var/log/mumble-server/mumble-server.log.1 /var/log/mumble-server/mumble-server.log | grep -E ' => <[0-9]{1,5}:.*\([0-9-]{1,4}\)> (Authenticated|Connection closed)' | sed -r 's/^<.>[^>]*=> <[0-9]*:(.*)\([0-9-]{1,6}\)> (Authenticated|Connection).*/\1                                                                \2/' | tac > status
cat status | sed 's/[()]//g' | sort | uniq -w 60 | sed -r 's/ *[ ]{50}.*//' | grep -vE '^\b*$' | while IFS= read -r nick;do cat status | grep -E "^${nick}[ ]{50}" | head -n 1; done > status2
cat status2 | sed 's/</\&lt;/g;s/>/\&gt;/g' | sed 's/Connection$/<span class="offline">/;s/Authenticated$/<span class="online">/' | sed -r 's/^(.*[^ ]) *[ ]{50}(<span.*)$/\2\1<\/span>/' > status3


# compare md5copy folders to local

find -iname \*.ftm | while IFS= read -r x; do fn="${x##*/}"; sum="${fn%%.*}"; find ~/ftm/ | grep -qE "$sum" || printf '%s\n' "$x"; done 


# copy from md5copy to local

find -iname \*.ftm | while IFS= read -r x; do tmp="${x##*/}"; sum="${tmp%%.*}"; fn="$sum-$(grep -E "^$sum" ../m2/the.md5 | head -n 1 | sed -r 's@.*/@@')"; loc="$HOME/ftm/$fn"; printf '        %s\n\033[A' "$fn"; [[ -e "$loc" ]] && { [[ "$loc" -nt "$x" ]] || { printf '\033[1;34mskip\033[0m\n'; continue; }; printf '\033[1;33mtouch\033[0m\n'; touch -r "$x" -- "$loc"; continue; }; printf '\033[1;32mnew\033[0m\n'; cp -p -- "$x" "$loc"; done 


# 3ds diff

find /home/ed/3ds -type f | sed -r 's/.{13}//' | grep -vE '^(ed|.src)/' | sort > ~/a; find /media/usb0/ -type f | sed -r 's/.{12}//' | grep -vE '^(Nintendo 3DS)/' | grep -vE '\.(bmp|world|mp3|sav|srm)$' | sort > ~/b; diff ~/a ~/b


todo

# restart busted usb controllers

cd /sys/bus/pci/drivers/xhci_hcd/; echo 0000:* > unbind; echo 0000:* > bind
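# untested sketch: the glob probably needs to be written one device at a time
cd /sys/bus/pci/drivers/xhci_hcd/ && for d in 0000:*; do echo "$d" > unbind; sleep 1; echo "$d" > bind; done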