@olivertappin
Last active July 29, 2024 17:23
Create a large file for testing
# Please note, the commands below create unreadable files and should be
# used for testing file size only. If you need a file that contains lines,
# use /dev/urandom instead of /dev/zero; random bytes include newline
# characters, so you can then count lines with `wc -l large-file-1mb.txt`.
# Create a 1MB file
dd if=/dev/zero of=large-file-1mb.txt count=1024 bs=1024
# Create a 10MB file
dd if=/dev/zero of=large-file-10mb.txt count=1024 bs=10240
# Create a 100MB file
dd if=/dev/zero of=large-file-100mb.txt count=1024 bs=102400
# Create a 1GB file
dd if=/dev/zero of=large-file-1gb.txt count=1024 bs=1048576
# Create a 10GB file
dd if=/dev/zero of=large-file-10gb.txt count=1024 bs=10485760
# Create a 100GB file
dd if=/dev/zero of=large-file-100gb.txt count=1024 bs=104857600
# Create a 1TB file (careful now... note that dd buffers one full block in
# memory, so bs=1073741824 means a 1GiB allocation while it runs)
dd if=/dev/zero of=large-file-1tb.txt count=1024 bs=1073741824
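The header comment mentions /dev/urandom for files with countable lines; a minimal sketch of that variant (the filename is just an example):

```shell
# Create a 1MB file of random bytes. Newline bytes (0x0a) occur at random
# throughout the data, so unlike a file of zeros this file has a non-zero,
# countable number of "lines".
dd if=/dev/urandom of=large-file-1mb.txt count=1024 bs=1024
wc -l large-file-1mb.txt
```

Expect /dev/urandom to be noticeably slower than /dev/zero for the multi-gigabyte sizes, since the kernel has to generate every byte.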
tandersn commented Aug 4, 2020

Depending on what you are testing, this may not give you what you want. If the filesystem has compression enabled (like ZFS), these files made with /dev/zero will be compressed to almost nothing. Use /dev/random in that case.
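One way to check whether compression is biting you is to compare the file's apparent size with its on-disk size; this is a sketch, and on a non-compressing filesystem the two numbers will roughly agree:

```shell
# Apparent size (what the file claims to be):
ls -lh large-file-1gb.txt
# On-disk size (blocks actually allocated); on a filesystem that compresses
# transparently, such as ZFS with compression enabled, a file of zeros
# shows close to zero here.
du -h large-file-1gb.txt
```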

@rafael-57

Nice, thanks!

Jip-Hop commented Dec 29, 2021

Thanks :) As @tandersn mentioned, on a ZFS dataset with compression enabled, files written from /dev/zero compress to almost nothing. But /dev/random turned out to be really slow, so for testing purposes I turned off compression on the ZFS dataset instead.
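A sketch of that approach, assuming a dataset named `tank/test` (substitute your own dataset name):

```shell
# Check the current compression setting for the dataset
zfs get compression tank/test
# Disable compression so /dev/zero files occupy their full size on disk
zfs set compression=off tank/test
# ... run the dd tests above ...
# Restore your previous setting (e.g. on, lz4) when done
zfs set compression=lz4 tank/test
```

Note this only affects data written after the property changes; existing files keep whatever compression they were written with.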
