7zip and nanozip

Best data on different compressors:
http://compressionratings.com/

Check the best compression programs/algorithms based on many variables, such as different arguments or the result you're looking for: compression ratio (which directly determines final file size), compression speed, and decompression speed/time: http://compressionratings.com/sort.cgi?rating_sum.brief+4n

7zip compress

more on compression and command line options: http://7zip.bugaco.com/7zip/MANUAL/switches/method.htm

pro: can apt-get
pro: common and known
pro: fast
con: compression ratio not as good (compared to nanozip)

to install
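On Debian/Ubuntu something like this should work (p7zip-full is the usual package name; adjust for your distro):

```shell
# p7zip-full provides the full "7z" command with all formats;
# plain p7zip only ships the lighter "7zr"
apt-get update && apt-get install -y p7zip-full
```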

 to compress

NOTE: in the example below I will be 7zipping up a tar file, but you could zip up folders if you want, or a list of folders and files. (Remember tar files aren't compressed at all; they are essentially a concatenation of files, with their metadata included, like the path of each file etc.) In fact this might be important to remember: sometimes when you feed a list of files and folders to 7zip, it might fail if a couple of the folders you picked have the same subfiles/subfolders. 7zip doesn't handle absolute folder paths like tar does. So I recommend first tarring your list of files and folders, and then 7zipping the tar file. Don't forget to remove the tar file after.

Here is an example: this same list of files might fail with 7zip, but it won't with tar, so first tar it up and then use 7zip on it:
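A minimal sketch (the file names here are made up for illustration; substitute your own list):

```shell
# tar concatenates the files along with their paths/metadata; no compression yet
# -c create, -f output file
tar -cf /root/test.tar /etc/hosts /etc/hostname
# later, after 7zipping test.tar, remove the tar:
# rm /root/test.tar
```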

Now let's take that /root/test.tar and 7zip it with different methods that have different compression settings (the higher the compression setting, the more RAM it uses: if you only have 512MB of RAM I would use -mx3; if you have more, use -mx5 or -mx9). Also, -mx=3, -mx=5 and -mx=9 give the same results as -mx3, -mx5 and -mx9, so you can include the = sign or not.

7zip has simple syntax:

ex1: default compression (-mx5 is the default compression level)
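A sketch, using the test.tar from above:

```shell
# "a" adds to an archive; with no -mx switch you get -mx5
7z a /root/test.7z /root/test.tar
```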

ex2: ultra compression (uses more RAM)
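Something like (archive name is illustrative):

```shell
# -mx9 (or -mx=9): ultra compression, highest RAM use of the presets
7z a -mx9 /root/test9.7z /root/test.tar
```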

if there is not enough RAM, the process will be killed midway

ex3: a lot less RAM (use -mx3; it will offer less compression but less RAM will be used)
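Sketched the same way (archive name is illustrative):

```shell
# -mx3 (or -mx=3): lighter compression, much less RAM
7z a -mx3 /root/test3.7z /root/test.tar
```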

For the complete list of options that are set by the different -mx levels, look at the -m help page in the 7zip online manual, then scroll down to the 7z section (as 7zip supports many formats, you need to look at the default one, 7z):

http://sevenzip.sourceforge.jp/chm/cmdline/switches/method.htm

to compress without any compression (copy/store)

-mx=0 is the copy method, also called "store" in the GUI. This is useful to store files which are already compressed (pictures, videos, other archives).

Here we store every 7z file in the test folder into testmx0.7z. This procedure will be as quick as your drive's read and write speeds, with minimal CPU and memory usage.
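A sketch (assuming the already-compressed files live in /root/test):

```shell
# -mx=0: copy/store, no compression work at all
7z a -mx=0 /root/testmx0.7z /root/test/*.7z
```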

To uncompress to current dir

ex1: you're in /root, the archive is /root/test.7z, and you want to extract to /root/ (cd into /root first)
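A sketch:

```shell
cd /root
# "x" extracts with full paths into the current directory
7z x test.7z
```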
To uncompress elsewhere
Make sure the directory name is touching the -o switch (no space in between)

ex2: you're in /root, the archive is /root/test.7z, and you want to extract to /root/inside/folder/ (the folder will be made if it's missing)

relative paths for extraction (notice end slash is optional):

absolute path for extraction (notice end slash is optional):
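All four forms, sketched (note the path touches -o with no space; -y here just answers yes to any overwrite prompt so the demo doesn't stall):

```shell
# relative path, without / with the end slash
7z x -y test.7z -oinside/folder
7z x -y test.7z -oinside/folder/
# absolute path, without / with the end slash
7z x -y test.7z -o/root/inside/folder
7z x -y test.7z -o/root/inside/folder/
```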

 

any of the 4 above methods will work

 

nanozip

pro: higher compression
pro: popular in compression benchmarks
con: much slower
con: in alpha

to install

If you want to use it, you have to be in the directory /root/src and call nz with ./nz. To install it into your system, just copy nz to one of your system paths (to see your system paths: echo $PATH). ex: cp /root/src/nz /bin/
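Codifying the above (assumes the nz binary was unpacked to /root/src):

```shell
echo $PATH              # see which directories are searched for commands
cp /root/src/nz /bin/   # drop nz into one of them
```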

to compress using highest settings

It's recommended to specify the amount of RAM to use for the operation. If you don't have enough RAM it will still try, but it will complain, and I assume it gets killed if it runs out of RAM.

256MB or a quarter gig of RAM to use (less RAM, less compression):

512MB or half a gig of RAM to use:
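Sketches of both (nz syntax is `nz a <options> <archive> <files>`; archive names are illustrative; -cc is the strongest method and -m the RAM budget, per the options used throughout this article):

```shell
# a quarter gig of RAM
nz a -cc -m256m /root/test256.nz /root/test.tar
# half a gig of RAM (better compression)
nz a -cc -m512m /root/test512.nz /root/test.tar
```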

NOTE: if you ask it to use too much RAM it will complain

NOTE: similar use to 7z

NOTE: if you're overwriting, answer the question with “yes” not “Yes”

to extract to current dir
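A sketch (archive name is illustrative):

```shell
cd /root
# "x" extracts into the current directory, similar to 7z
nz x archive.nz
```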

Note: decompression/extraction is much faster

 


Update: compressing folders recursively with nanozip and 7zip & listing contents

The cases above looked at zipping up just 1 file; what about a folder and all of its contents? With nanozip you have to tell it to recurse through a folder with -r to grab all of the subfolders; 7zip does it automatically (even though 7zip has an option for recursion, it seems to be on all the time). To list the contents of 7zip and nanozip files you use the “l” option (as in lima, a lowercase L).

Also, here is some more information on nanozip (note its highest compression, based on many benchmarks, comes from the -cc option which I have been using throughout this article, with just 512MB of RAM; that doesn't seem like much compared to what I see with freearc (2GB) or the high RAM values like 768MB that 7zip can ask for. Of course you can manually ask nanozip to use as much RAM as you want, but its winning scores are achieved with only 512MB of RAM). Look into the links below.

Sites benchmarking compressors

http://www.maximumcompression.com/data/summary_mf2.php#data

http://heartofcomp.altervista.org/MOC/APP1/MOCA.htm
http://compressionratings.com/rating_sum.html
http://www.squeezechart.com/

These sites rank nanozip as the best compressor when the final size needs to be smallest. Also freearc is nice (look into it here: FREEARC; freearc can use up to 2 gigs of RAM for its highest compression), whereas nanozip in most examples and benchmarks is only asked to use at most 512 MB of RAM, and with that nanozip achieves its high marks.

7zip is the all-round best for speed, compression ratio and decompression.

NANOZIPPING RECURSIVE AND LISTING

About recursive zipping: nanozip doesn't add files recursively by default. So when compressing directories with subfolders (and you intend the subfolders to be compressed) use this:

NOTE: even though we ask it to use 256 or 512 MB of RAM, if we only zip a small amount of stuff it might not need to use that much RAM

Without -r it will only compress what's at the root of /folder-to-compress, so always use -r (unless you really only need the stuff at the root of the folder).
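A sketch (the folder name is the placeholder used above, the archive name is illustrative):

```shell
# -r makes nanozip descend into subfolders
nz a -r -cc -m512m /root/folder.nz /folder-to-compress
```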

To look at whats in the zipped nanozip file:
# nz l archive.nz

Here is how it looks when you compress with and without -r, looking into the contents with the l option:


7ZIP RECURSIVE AND LISTING

7zip will recurse into folders even if you don't tell it to; it's strange that it has an option to be told to be recursive, as it works the same with it on or off (maybe it has other uses…)
For the highest compression with 7zip, which uses a lot of RAM (if you're not zipping a lot of stuff it won't get to using a lot of RAM):

To get the highest 7zip compression (-mx9) with recursion (-r):

To get the highest 7zip compression (-mx9) without the recursion option (omit -r; note it still recurses):
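Both forms, sketched (archive and folder names are illustrative):

```shell
# with -r (explicit recursion)
7z a -mx9 -r /root/with-r.7z /root/folder-to-compress
# without -r: subfolders still end up in the archive
7z a -mx9 /root/without-r.7z /root/folder-to-compress
```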

To list contents:
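A sketch (archive name is illustrative):

```shell
7z l archive.7z    # "l" lists the archive contents, like nz l
```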

In the examples below I'm using the highest compression; notice that whether the -r option is there or not, the results are the same.

Notice how the subfolders hi and hey are captured with and without -r, showing that by default 7zip uses recursion.

Also, from other times/experiences using 7zip on the command line, I know that it uses recursion by default without being asked for it.


3 thoughts on “Top 10 compressors – 7zip and nanozip – compressing with low and high ram use & highest compression algorithm for both”

  1. Anyone know if it is possible to add files to an existing nanozip archive? When I use the add (a) option, it asks me if I want to overwrite the archive. Am I doing something wrong?

  2. Nanozip is a LOT FASTER than 7z and Winrar and ALWAYS BEST. You should not use the max compression -cc if you need speed.
    If you have a quad core with HT (like an I7) use: nz.exe a -r -v -cO -m1g -t6 -p4
    If you have a quad core without HT (like an I5) use: nz.exe a -r -v -cO -m1g -p4
    If you have a dual core with HT use: nz.exe a -r -v -cO -m1g -t3 -p2
    If you have a dual core without HT use: nz.exe a -r -v -cO -m1g -p2
    using -p1 will increase the compression and use only a core (you can use more than one process in parallel)
    nz.exe a -r -v -cO -m1g -p1 (add -t6 if a HT 4 core)

  3. nz_optimum1 (option -co) and nz_optimum2 (option -cO) are too slow for my workload (DB backups).

    I’m currently using plain nz_lzhd (option -cd) with -m1g and -p2 to not stress our servers too much. This way it’s extremely cheap on CPU but with very high compression results on our multi-GB backups.

    Recently have been testing pzstd for speed vs CPU usage vs compression ratio but couldn’t beat nz_lzhd compression with whatever options I tried. Though pzstd decompression is extremely fast — reaching about 2GB/s w/ output to /dev/null
