Re: Production of NTUPLES with MCFM on SLC6.6
Hi Matteo,

On 07/03/2015 04:39 PM, Matteo Scornajenghi wrote:
> Hello everyone, sorry to bother you but I've got some issues when I try
> to produce NTUPLES with MCFM, a Monte Carlo generation software.
> Whenever I try to produce an NTUPLE as output, either a segmentation
> fault occurs or the following message pops up:
>
>   RZOPEN. record length: 8192 maximum safe value (8191 words).

Oh, ZEBRA. Not used very often nowadays.

>   RZOPEN. You may have problems transferring your file to other
>   systems or writing it to tape.
>   locf_() Warning: changing base from 0 to 7fff!!!
>   This may result in program crash or incorrect results
>
> and the produced NTUPLES are either corrupted or empty (every entry is
> set to 0).
>
> This is how I set up the software: I untar the latest version of
> LHAPDF 6, then I configure the installation:
>
>   ./configure --prefix=[INSTALLATION PATH] --with-boost=[BOOST PATH] --disable-python
>
> then I execute
>
>   make
>   make install
>
> Since I need CERNLIB to produce the NTUPLES and I was not able to find
> a specific version for SLC6,

There is one in the SLC6 repository, should not be that difficult to find:

  pc.cern.ch[103] rpm -q CERNLIB
  CERNLIB-2006a-5.slc6.i686
  pc.cern.ch[104]

This is i686 only, but that might even be safer than the x86_64 one; not
sure how well ZEBRA can digest 64-bitness.

> I've downloaded the following one: PC Linux Cern x86_64-slc5-gcc43-opt
> (Cernlib 2006)

Not far from that one there should also be an slc6 version, if you
insist on x86_64.

Best regards,
Matthias

> Finally, I've untarred the MCFM-7.0 source code, put the paths to the
> CERNLIB and LHAPDF libraries in the Install file, set USE_OMP to NO,
> NTUPLES to YES and PDFSET to LHAPDF in the makefile, and executed:
>
>   ./Install
>   make
>
> I've created a symbolic link in the Bin directory to LHAPDF's PDFsets
> and executed the program, selecting process 146 in the Input.DAT file
> and setting .creatent. to true (in order to produce NTUPLES), with:
>
>   ./mcfm [relative path to the folder containing the input file] input.DAT
>
> Sorry again for bothering you with this.
>
> Cheers,
> Matteo
Re: RHEL 7 just hit the market place, I'm looking forward to when we can start testing SL 7
On 06/11/2014 04:12 AM, Steven Haigh wrote:
> On 11/06/14 12:07, Paul Robert Marino wrote:
>> Yes a lot of us noticed. Recompiling an entire distro from scratch is
>> not an easy proposition. Furthermore they need to strip out all of
>> the Red Hat branding. Expect it to take a while, at least a month or
>> two if not more.
>
> I think it'll take longer than normal this time around... The build
> process is changing completely from previous versions.

True, adapting the process to the new supply chain and source format
will take a while.

> It seems the code is getting published on git.centos.org - but it
> seems nobody really knows who is putting it there. This leaves the
> moral quandary of 'do we all trust an anonymous source with no
> official ties to Red Hat?'

http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/os/README says:

  Current sources for Red Hat Enterprise Linux 7 have been moved to the
  following location: https://git.centos.org/project/rpms

Does this reduce your moral quandary a little?

Matthias

> Time will tell.
Re: RHEL 7 just hit the market place, I'm looking forward to when we can start testing SL 7
On 11 Jun 2014, at 09:41, Steven Haigh <net...@crc.id.au> wrote:
> On 11/06/14 17:24, Matthias Schroeder wrote:
>> On 06/11/2014 04:12 AM, Steven Haigh wrote:
>>> On 11/06/14 12:07, Paul Robert Marino wrote:
>>>> Yes a lot of us noticed. Recompiling an entire distro from scratch
>>>> is not an easy proposition. Furthermore they need to strip out all
>>>> of the Red Hat branding. Expect it to take a while at least a month
>>>> or two if not more.
>>>
>>> I think it'll take longer than normal this time around... The build
>>> process is changing completely from previous versions.
>>
>> True, adapting the process to the new supply chain and source format
>> will take a while.
>>
>>> It seems the code is getting published on git.centos.org - but it
>>> seems nobody really knows who is putting it there. This leaves the
>>> moral quandary of 'do we all trust an anonymous source with no
>>> official ties to Red Hat?'
>>
>> http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/os/README
>> says: Current sources for Red Hat Enterprise Linux 7 have been moved
>> to the following location: https://git.centos.org/project/rpms
>>
>> Does this reduce your moral quandary a little?
>
> Not at all. There is no source for this data at all. Just spec files
> and patches that have 'appeared'.
>
> The SRPMs provided by RedHat in the past are all signed by RedHat and
> are VERY difficult if not impossible to tamper with. There is no
> method to authenticate that the files being dumped into git.centos.org
> by an unknown source (hint: It isn't the CentOS guys putting them
> there) are unmodified or even supplied by RedHat. This is the problem.

Ok, I see your point now. Seems I misinterpreted the ‘moral quandary’.

Matthias

> --
> Steven Haigh
> Email: net...@crc.id.au
> Web: http://www.crc.id.au
> Phone: (03) 9001 6090 - 0412 935 897
> Fax: (03) 8338 0299
Re: Newer KDE for SL6?
On 17.10.2013, at 19:05, Konstantin Olchanski <olcha...@triumf.ca> wrote:
> ...
> P.S. Why konqueror? It is the only browser that still can run multiple
> copies of itself. Unlike firefox and google-chrome, which refuse to
> start with an error "this application is already running" on computer X
> when I sit in front of computer Y and need to google something.
> (Computer X is in a different building and I am googling something
> there *too*.) (NIS+NFS cluster)

And just creating a new dummy profile so that you can happily google
away is not a viable workaround?

Matthias

> --
> Konstantin Olchanski
> Data Acquisition Systems: The Bytes Must Flow!
> Email: olchansk-at-triumf-dot-ca
> Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada
Re: double precision versus single/float ?
Hi,

On 03/15/2013 04:59 PM, Keith Lofstrom wrote:
> ...
> Are double and float synonyms for the same double precision
> representation?

From http://en.wikipedia.org/wiki/C_data_types:

  The actual size of floating point types also varies by implementation.
  The only guarantee is that the long double is not smaller than double,
  which is not smaller than float. Usually, 32-bit and 64-bit IEEE 754
  floating point formats are used, if supported by hardware.

The other issue could be that most calculations might be done using
register values with a precision far exceeding the precision of the
variable in memory. That would completely hide the expected effect of
float or double on the calculations. Use -ffloat-store to avoid this
for your tests.

HTH,
Matthias