[bareos-users] File compression

2020-01-21 Thread Udo Ullrich

Dear group, dear development team,

thank you for helping others with this amazing piece of software!



I am running the community version (18.2.5), entirely on Linux (the client as 
well).

I want to conserve disk space, so I would like to use file compression. 
However, no matter which algorithm I choose (LZ4, GZIP, ...), the volume 
size does not change. I now suspect that no file compression is happening at 
all. How can I find out whether the file backup is done with compression? 
Are there any logs that report this?

In bconsole I do not get any error messages; the backup jobs run fine 
(result OK).

What compression ratio can usually be expected (e.g. with gzip) when using 
file compression?


For clarification this is the configuration file:


-

FileSet {
  Name = "BackupRoot"
  Description = "Backup all regular filesystems"
  Include {
Options {
  signature = MD5
  fstype = ext4
  compression = gzip
}
File = /
  }
  Exclude {
File = /var/lib/bareos
File = /var/lib/bareos/storage
File = /tmp
File = /var/tmp
  }
}

-


Could it be that the "compression" option is ignored, or that there is a 
syntax error? I am puzzled: in the documentation I see the options written 
with and without capital letters, in all capitals, with and without spaces, 
and with quotation marks, e.g. compression, Compression, gzip, GZIP, 
compression=gzip, compression = gzip, etc.
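For what it's worth, the Bareos configuration parser generally treats directive names case-insensitively and ignores whitespace around "=", so the spellings below should all be read as the same directive (a sketch based on general Bareos config conventions, not verified against every release):

```
# These should all be parsed as the same directive:
#   compression = gzip
#   Compression = GZIP
#   compression=gzip
Options {
  Signature = MD5
  Compression = GZIP
}
```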



Thank you for your help,

Udo Ullrich



-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/7656aca0-d310-4a97-8508-3d8192435feb%40googlegroups.com.


Re: [bareos-users] File compression

2020-01-21 Thread Spadajspadaj
The question is whether the job output shows compression. A volume is a 
storage unit: it may be a fixed file, it may be a tape; we don't know your 
configuration here. I, for example, have fixed-size 40 GB file-based 
volumes, so the volume size doesn't change, but the jobs can be bigger or 
smaller depending on the compression used.


Sample output from a job utilizing compression from my installation:

*list joblog jobid=2403
 2020-01-21 02:07:05 backup1-dir JobId 2403: Start Backup JobId 2403, 
Job=backup_srv2_MySQL.2020-01-21_01.00.00_11
 2020-01-21 02:07:05 backup1-dir JobId 2403: Connected Storage daemon 
at backup1:9103, encryption: PSK-AES256-CBC-SHA
 2020-01-21 02:07:05 backup1-dir JobId 2403: Using Device 
"vchanger-1-0" to write.
 2020-01-21 02:07:05 backup1-dir JobId 2403: Connected Client: srv2-fd 
at 172.16.2.193:9102, encryption: PSK-AES256-CBC-SHA
 2020-01-21 02:07:05 backup1-dir JobId 2403:  Handshake: Immediate TLS
 2020-01-21 02:07:05 backup1-dir JobId 2403:  Encryption: PSK-AES256-CBC-SHA
 2020-01-21 02:07:08 srv2-fd JobId 2403: Extended attribute support is 
enabled

 2020-01-21 02:07:08 srv2-fd JobId 2403: ACL support is enabled
 2020-01-21 02:07:06 bareos-sd JobId 2403: Connected File Daemon at 
172.16.2.193:9102, encryption: PSK-AES256-CBC-SHA
 2020-01-21 02:07:08 bareos-sd JobId 2403: Volume "vchanger-1_2_0076" 
previously written, moving to end of data.
 2020-01-21 02:07:08 bareos-sd JobId 2403: Ready to append to end of 
Volume "vchanger-1_2_0076" size=19859852960
 2020-01-21 02:07:08 srv2-fd JobId 2403: python-fd: Starting backup of 
/_mysqlbackups_/mysql.sql
 2020-01-21 02:07:09 srv2-fd JobId 2403: python-fd: Starting backup of 
/_mysqlbackups_/mail.sql
 2020-01-21 02:07:09 srv2-fd JobId 2403: python-fd: Starting backup of 
/_mysqlbackups_/gts.sql
 2020-01-21 02:07:19 srv2-fd JobId 2403: python-fd: Starting backup of 
/_mysqlbackups_/epsilone_rcube.sql
 2020-01-21 02:07:20 bareos-sd JobId 2403: Releasing device 
"vchanger-1-0" (/var/spool/vchanger/vchanger-1/0).
 2020-01-21 02:07:20 bareos-sd JobId 2403: Elapsed time=00:00:12, 
Transfer rate=1.065 M Bytes/second
 2020-01-21 02:07:20 backup1-dir JobId 2403: Insert of attributes batch 
table with 4 entries start
 2020-01-21 02:07:20 backup1-dir JobId 2403: Insert of attributes batch 
table done
 2020-01-21 02:07:20 backup1-dir JobId 2403: Bareos backup1-dir 
19.2.4~rc1 (19Dec19):
  Build OS:   Linux-5.3.14-200.fc30.x86_64 redhat CentOS 
Linux release 7.7.1908 (Core)

  JobId:  2403
  Job:    backup_srv2_MySQL.2020-01-21_01.00.00_11
  Backup Level:   Incremental, since=2020-01-20 01:00:05
  Client: "srv2-fd" 18.2.5 (30Jan19) 
Linux-4.4.92-6.18-default,redhat,CentOS Linux release 7.6.1810 (Core) 
,CentOS_7,x86_64

  FileSet:    "MySQL - all databases" 2019-04-10 01:00:00
  Pool:   "Offsite-eSATA" (From Job resource)
  Catalog:    "MyCatalog" (From Client resource)
  Storage:    "vchanger-1-changer" (From Pool resource)
  Scheduled time: 21-Jan-2020 01:00:00
  Start time: 21-Jan-2020 02:07:08
  End time:   21-Jan-2020 02:07:20
  Elapsed time:   12 secs
  Priority:   10
  FD Files Written:   4
  SD Files Written:   4
  FD Bytes Written:   12,790,416 (12.79 MB)
  SD Bytes Written:   12,791,390 (12.79 MB)
  Rate:   1065.9 KB/s
  Software Compression:   82.7 % (gzip)
  VSS:    no
  Encryption: no
  Accurate:   no
  Volume name(s): vchanger-1_2_0076
  Volume Session Id:  35
  Volume Session Time:    1579171330
  Last Volume Bytes:  19,872,665,728 (19.87 GB)
  Non-fatal FD errors:    0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Bareos binary info: pre-release version: Get official binaries 
and vendor support on bareos.com

  Termination:    Backup OK

Remember that compression takes place on the FD on a per-file basis. To 
verify that a job is indeed compressed, apart from checking the "Software 
Compression" field of the job log, you can create a file of a known size 
(say 1 GB), fill it with zeros, and then create a job that backs up just 
this one file with compression enabled. If compression works as it should, 
you should see very low "FD Bytes Written" and "SD Bytes Written" values, 
since zero-filled files compress very well.
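Assuming a Linux shell on the client, the test file can be created like this (path and size are only examples):

```shell
# Create a 1 GiB file consisting entirely of zero bytes.
dd if=/dev/zero of=/tmp/compress-test.dat bs=1M count=1024

# Sanity check: the file should be exactly 1 GiB.
stat -c %s /tmp/compress-test.dat   # 1073741824
```

Back up just this file in a job with compression enabled; if compression is active, "FD Bytes Written" in the job report should be a tiny fraction of 1 GiB.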


And about the compression ratio: it all depends on the entropy of the input 
data. Without making some assumptions about the input data, it is impossible 
to tell how much it will compress. As you can see, my job achieved about 83% 
compression, which is pretty typical for text data. Other types of data may 
compress much worse or even grow a bit (e.g. already-compressed data).
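As a quick illustration of how entropy drives the ratio (plain shell, sizes are arbitrary), compare gzip on all-zero input with gzip on random input:

```shell
# 1,000,000 zero bytes shrink to roughly a kilobyte...
head -c 1000000 /dev/zero | gzip -c | wc -c

# ...while 1,000,000 random bytes stay about the same size or grow slightly,
# because high-entropy data gives the compressor nothing to exploit.
head -c 1000000 /dev/urandom | gzip -c | wc -c
```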


But the most important thing is that the job size doesn't have to correspond 
to the volume size (unless you're creating one volume per job).


[bareos-users] Re: File compression

2020-01-21 Thread Udo Ullrich

Dear MK,

your answer was very helpful. Now I understand that compression works (at 
my site) and how to verify it.

No questions left (issue resolved).

Martin


