@henri
Last active September 12, 2024 04:14
rsync cheatsheet
# Sometimes when moving files around you cannot directly preserve permissions.
# The approach below uses two commands, getfacl and setfacl, to save and
# restore permissions once your rsync is completed.
# Alternatively, you can also explore rsync's --fake-super option.
# The commands below are fine for a small number of files; there are also
# scripts below to split a large permissions file into chunks for batch processing.

# save permissions for a directory tree to a file using ACL tools
getfacl --recursive ./ > permissions.data
# restore permissions for a directory tree from a file using ACL tools
# (either form works; --restore=- reads the saved data from stdin)
setfacl --restore=permissions.data
setfacl --restore=- < permissions.data
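Before reaching for the chunking scripts below, it can help to know how many entries the dump actually contains. A minimal sketch, relying on getfacl's output format where entries are separated by blank lines; the demo file here is a fabricated stand-in for permissions.data:

```shell
# Count blank-line-separated records in a getfacl dump (one per path).
# RS="" switches awk to paragraph mode, so each ACL entry is one record.
count_acl_entries() {
    awk 'BEGIN { RS="" } END { print NR }' "$1"
}

# demo with a fabricated two-entry dump (stand-in for permissions.data)
demo_dump=$(mktemp)
printf '# file: a\nuser::rw-\ngroup::r--\n\n# file: b\nuser::rw-\n' > "$demo_dump"
count_acl_entries "$demo_dump"   # prints 2
```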
#!/usr/bin/env bash
# simple script to assist with restoring permissions for lots of files
# (which takes a while to process)
# first split them up using the other bash script called split_chunks.bash,
# then run this script (modify as needed)
for chunk in perm_chunk_* ; do
    echo "processing $chunk"
    if ! setfacl --restore="$chunk" ; then
        echo "Error processing $chunk" >&2
        exit 1
    fi
    echo "completed $chunk"
done
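Since each chunk is independent (every setfacl --restore entry is self-contained), the loop above can also be run in parallel. A hedged sketch using the -P flag of GNU/BSD xargs, with a placeholder echo standing in for the real setfacl call so the pattern is easy to dry-run:

```shell
# Run restores in parallel: printf emits one chunk name per line,
# xargs -P 2 runs 2 jobs at a time, -n 1 passes one chunk per job.
# Swap 'echo would-restore' for 'setfacl --restore' when ready.
demo=$(mktemp -d) && cd "$demo"
touch perm_chunk_1 perm_chunk_2           # fabricated stand-in chunks
printf '%s\n' perm_chunk_* | xargs -P 2 -n 1 echo would-restore
```

This assumes chunk names contain no whitespace, which holds for chunks generated by the split script below.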
#!/usr/bin/env bash
# running setfacl on millions of files and finding it slow? this is the script for you!
# simple script to split up a permissions file generated by getfacl
# into multiple smaller files so you can process them as chunks

# define the input ACL file - replace with your ACL file - it is also a good
# idea to do all of this in a subdirectory if you are going to end up
# generating lots of chunk files.
acl_file="permissions.data"

# define the output prefix for the chunks
output_prefix="perm_chunk_"

# initialize tracking variables
chunk_number=1
temp_chunk=$(mktemp)
num_entries_per_file=100 # number of ACL entries per chunk
num_counter=0

# read the file line by line (entries in getfacl output are separated by blank lines)
while IFS= read -r line ; do
    # if the current line is blank and we have reached num_entries_per_file,
    # close off the current chunk
    if [[ -z "$line" ]] && [[ $num_counter -ge $num_entries_per_file ]] ; then
        # if the temp file is not empty, promote it to a numbered chunk
        if [[ -s "$temp_chunk" ]]; then
            mv -i "$temp_chunk" "${output_prefix}${chunk_number}"
            echo "created chunk ${output_prefix}${chunk_number}"
            chunk_number=$((chunk_number + 1))
            # create a new temp file and reset the entry counter
            temp_chunk=$(mktemp)
            num_counter=0
        fi
    else
        # each blank line marks the end of one ACL entry
        if [[ -z "$line" ]] ; then
            num_counter=$((num_counter + 1))
        fi
        # write the line to the temp file
        echo "$line" >> "$temp_chunk"
    fi
done < "$acl_file"

# move any remaining lines into a final chunk
if [[ -s "$temp_chunk" ]]; then
    mv "$temp_chunk" "${output_prefix}${chunk_number}"
    echo "created final chunk ${output_prefix}${chunk_number}"
fi

# clean up the temp file if it was never promoted
rm -f "$temp_chunk"
echo "splitting complete."
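After splitting, a quick sanity check that no entries were lost is cheap insurance. A small sketch, reusing the blank-line-record idea from getfacl's output format; the chunk files here are fabricated stand-ins for real perm_chunk_* output:

```shell
# Sum blank-line-separated records across a set of files (counting each
# file separately, since chunks do not end with a blank separator).
total_acl_entries() {
    local total=0 f
    for f in "$@"; do
        total=$(( total + $(awk 'BEGIN { RS="" } END { print NR }' "$f") ))
    done
    echo "$total"
}

# demo with two fabricated chunks (stand-ins for real perm_chunk_* files)
demo=$(mktemp -d)
printf '# file: a\nuser::rw-\n\n# file: b\nuser::rw-\n' > "$demo/perm_chunk_1"
printf '# file: c\nuser::rw-\n' > "$demo/perm_chunk_2"
total_acl_entries "$demo"/perm_chunk_*   # prints 3
```

Compare the total against the entry count of the original permissions.data; the two numbers should match.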