Re: Limit on input PDF file size in Tika?

2017-06-08 Thread tesm...@gmail.com
Thanks for your reply. I am calling Apache Tika in Java code like this:

 public String extractPDFText(String faInputFileName)
         throws IOException, TikaException {

     // Handler for the body text of the PDF article
     BodyContentHandler handler = new BodyContentHandler();

     // Metadata of the article
     Metadata metadata = new Metadata();

     // Input file stream
     FileInputStream inputstream = new FileInputStream(new File(faInputFileName));

     // Parser context, used while parsing the InputStream
     ParseContext pcontext = new ParseContext();

     try {
         // Parse the document using the PDF parser from Tika. A case
         // statement will be added for handling other file types.
         PDFParser pdfparser = new PDFParser();

         // Do the parsing by calling the parse method of pdfparser
         pdfparser.parse(inputstream, handler, metadata, pcontext);
     } catch (Exception e) {
         // Print the actual failure instead of swallowing it
         System.out.println("Exception caught: " + e);
     } finally {
         inputstream.close();
     }

     // Convert the body handler to a string and return it to the caller
     return handler.toString();
 }
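
For reference, a minimal variant (an assumption, not a confirmed diagnosis):
the no-argument BodyContentHandler caps extracted text at 100,000
characters, which would explain why ~20-page PDFs come back partially
extracted. Passing -1 removes the cap, and AutoDetectParser avoids
hard-coding the parser per file type:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.BodyContentHandler;

public class UnlimitedExtract {
    public static String extract(String path) throws Exception {
        // -1 disables the default 100,000 character write limit
        BodyContentHandler handler = new BodyContentHandler(-1);
        try (InputStream in = Files.newInputStream(Paths.get(path))) {
            // AutoDetectParser picks the right parser from the detected type
            new AutoDetectParser().parse(in, handler, new Metadata(), new ParseContext());
        }
        return handler.toString();
    }
}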

Regards,


On Thu, Jun 8, 2017 at 4:29 PM, Nick Burch <apa...@gagravarr.org> wrote:

> On Thu, 8 Jun 2017, tesm...@gmail.com wrote:
>
>> My tika code is not extracting full body text of larger PDF files.
>>
>> Files more than 1 MB in size and around 20 pages are only partially
>> extracted. Is there any limit on the input PDF file size in Tika?
>>
>
> How are you calling Apache Tika? Direct java calls to TikaConfig +
> AutoDetectParser? Using the Tika facade class? Using the Tika App on the
> command line? Tika Server? Other?
>
> Nick
>


Grobid with TXT and HTML files

2017-06-08 Thread tesm...@gmail.com
Dear Thamme,


https://grobid.readthedocs.io/en/latest/grobid-04-2015.pdf

The above presentation says that Grobid supports raw text. My input files
are in TXT and HTML formats. Do you have any idea how these can be
processed as raw text?



Regards,




On Wed, May 3, 2017 at 6:16 PM, Thamme Gowda <thammego...@apache.org> wrote:

> Hello,
>
> There is a nice project called Grobid [1] that does most of what you are
> describing.
> Tika has a Grobid parser built in (it calls Grobid over its REST API);
> check out [2] for details.
>
> I have a project that makes use of Tika with Grobid and NER support. It
> also builds a search index using Solr.
> Check out [3] for setting up and [4] for parsing and indexing to Solr if
> you'd like to try out my Python project.
> Here I am able to extract title, author names, affiliations, and the whole
> text of articles.
> I did not extract sections within the main body of research articles.  I
> assume there should be a way to configure it in Grobid.
>
> Alternatively, if Grobid can't detect sections, you can try the XHTML
> content handler, which preserves the basic structure of the PDF file using
> <p> and heading tags. So technically it should be possible to write a
> wrapper to break the XHTML output from Tika into sections.
>
> To get it:
>
> # In bash do `pip install tika` if tika isn't already installed
> import tika
> tika.initVM()
> from tika import parser
>
>
> file_path = "/2538.pdf"
> data = parser.from_file(file_path, xmlContent=True)
> print(data['content'])
>
>
>
>
> Best,
> Thamme
>
> [1] http://grobid.readthedocs.io/en/latest/Introduction/
> [2] https://wiki.apache.org/tika/GrobidJournalParser
> [3] https://github.com/USCDataScience/parser-indexer-
> py/tree/master/parser-server
> [4] https://github.com/USCDataScience/parser-indexer-
> py/blob/master/docs/parser-index-journals.md
>
> *--*
> *Thamme Gowda*
> TG | @thammegowda <https://twitter.com/thammegowda>
> ~Sent via somebody's Webmail server!
>
> On Wed, May 3, 2017 at 9:34 AM, tesm...@gmail.com <tesm...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I am working with published research articles using Apache Tika. These
>> articles have distinct sections like abstract, introduction, literature
>> review, methodology, experimental setup, discussion and conclusions. Is
>> there some way to extract document sections with Apache Tika?
>>
>> Regards,
>>
>
>


Reading PDF/text/word file efficiently with Spark

2017-05-19 Thread tesm...@gmail.com
Hi,
I am doing NLP (Natural Language Processing) on my data. The data is in the
form of files that can be of type PDF/Text/Word/HTML. These files are
stored in a directory structure on my local disk, including nested
directories. My standalone Java-based NLP parser can read the input files,
extract text from them, and do the NLP processing on the extracted text.

I am converting my Java-based NLP parser to execute on my Spark cluster. I
know that Spark can read multiple text files from a directory and convert
them into RDDs for further processing. My input data, however, is not only
in text files but in a multitude of file formats. My question is: how can I
efficiently read the input files (PDF/Text/Word/HTML) in my Java-based
Spark program for processing on the Spark cluster?
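
A sketch of one approach, under assumptions (the paths, and using Tika for
the extraction, are illustrative rather than a confirmed setup): Spark's
binaryFiles reads each file whole as a (path, stream) pair, and Tika's
auto-detection then handles PDF/Word/HTML/plain text without per-format
code:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.input.PortableDataStream;
import org.apache.tika.Tika;

public class ExtractTextJob {
    public static void main(String[] args) {
        JavaSparkContext sc =
            new JavaSparkContext(new SparkConf().setAppName("extract-text"));

        // Each entry is (file path, whole-file stream); a glob can cover
        // nested directories
        JavaPairRDD<String, PortableDataStream> files =
            sc.binaryFiles("hdfs:///input/*");

        JavaRDD<String> texts = files.map(pair -> {
            Tika tika = new Tika();      // auto-detects PDF/Word/HTML/text
            tika.setMaxStringLength(-1); // do not truncate long documents
            return tika.parseToString(pair._2().open());
        });

        texts.saveAsTextFile("hdfs:///output/extracted-text");
        sc.stop();
    }
}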

Regards,


Re: Analysing a document sections with Apache Tika

2017-05-04 Thread tesm...@gmail.com
> Alternatively, if Grobid can't detect sections, you can try the XHTML
> content handler, which preserves the basic structure of the PDF file using
> <p> and heading tags. So technically it should be possible to write a
> wrapper to break the XHTML output from Tika into sections.
>
> To get it:
>
> # In bash do `pip install tika` if tika isn't already installed
> import tika
> tika.initVM()
> from tika import parser
>
>
> file_path = "/2538.pdf"
> data = parser.from_file(file_path, xmlContent=True)
> print(data['content'])
>
>
>
>
> Best,
> Thamme
>
> [1] http://grobid.readthedocs.io/en/latest/Introduction/
> [2] https://wiki.apache.org/tika/GrobidJournalParser
> [3] https://github.com/USCDataScience/parser-indexer-
> py/tree/master/parser-server
> [4] https://github.com/USCDataScience/parser-indexer-
> py/blob/master/docs/parser-index-journals.md
>
> *--*
> *Thamme Gowda*
> TG | @thammegowda <https://twitter.com/thammegowda>
> ~Sent via somebody's Webmail server!
>
> On Wed, May 3, 2017 at 9:34 AM, tesm...@gmail.com <tesm...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I am working with published research articles using Apache Tika. These
>> articles have distinct sections like abstract, introduction, literature
>> review, methodology, experimental setup, discussion and conclusions. Is
>> there some way to extract document sections with Apache Tika?
>>
>> Regards,
>>
>
>


unsubscribe

2017-04-12 Thread tesm...@gmail.com
unsubscribe


Exception while creating a HttpSolrClient

2016-12-15 Thread tesm...@gmail.com
Hi,

I am getting the following exception while creating a Solr client. Any help
is appreciated.

=== This is the code snippet that creates the SolrClient ===

public void populate(String args) throws IOException, SolrServerException {
    String urlString = "http://localhost:8983/solr";
    SolrClient server = new HttpSolrClient.Builder(urlString).build();
    ...
}
===




Exception in thread "main" java.lang.VerifyError: Bad return type
Exception Details:
  Location:

org/apache/solr/client/solrj/impl/HttpClientUtil.createClient(Lorg/apache/solr/common/params/SolrParams;)Lorg/apache/http/impl/client/CloseableHttpClient;
@57: areturn
  Reason:
Type 'org/apache/http/impl/client/SystemDefaultHttpClient' (current
frame, stack[0]) is not assignable to
'org/apache/http/impl/client/CloseableHttpClient' (from method signature)
  Current Frame:
bci: @57
flags: { }
locals: { 'org/apache/solr/common/params/SolrParams',
'org/apache/solr/common/params/ModifiableSolrParams',
'org/apache/http/impl/client/SystemDefaultHttpClient' }
stack: { 'org/apache/http/impl/client/SystemDefaultHttpClient' }
  Bytecode:
0x000: bb00 0359 2ab7 0004 4cb2 0005 b900 0601
0x010: 0099 001e b200 05bb 0007 59b7 0008 1209
0x020: b600 0a2b b600 0bb6 000c b900 0d02 00b8
0x030: 000e 4d2c 2bb8 000f 2cb0
  Stackmap Table:
append_frame(@47,Object[#143])

at
org.apache.solr.client.solrj.impl.HttpSolrClient.<init>(HttpSolrClient.java:209)
at
org.apache.solr.client.solrj.impl.HttpSolrClient$Builder.build(HttpSolrClient.java:874)
at PDFParseExtract.populate(PDFParseExtract.java:60)
at PDFParseExtract.main(PDFParseExtract.java:53)


Solr+Solarium deployment on Azure - Best practices

2016-11-29 Thread tesm...@gmail.com
Hi,

I am deploying a search engine on Azure. The following is my configuration:

The Solr server is running on an Ubuntu VM (hosted on Azure).
The PHP web app is hosted on Azure, on the same VM that hosts the Solr server.

Are there any best-practice/approach guidelines?

I am getting the following exception:
Fatal error: Uncaught exception 'Solarium\Exception\HttpException' with
message 'Solr HTTP error: HTTP request failed, Connection timed out after
5000 milliseconds' in
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Adapter\Curl.php:195
Stack trace: #0
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Adapter\Curl.php(92):
Solarium\Core\Client\Adapter\Curl->check('', Array, Resource id #3) #1
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Adapter\Curl.php(213):
Solarium\Core\Client\Adapter\Curl->getResponse(Resource id #3, false) #2
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Adapter\Curl.php(68):
Solarium\Core\Client\Adapter\Curl->getData(Object(Solarium\Core\Client\Request),
Object(Solarium\Core\Client\Endpoint)) #3
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Client.php(804):
Solarium\Core\Client\Adapter\Curl->execute(Object(Solarium\Core\Client\Request),
Object(Solarium\Core\Clie in
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Adapter\Curl.php
on line 195



Regards,


HTTP Request timeout exception with Solr+Solarium on Azure

2016-11-29 Thread tesm...@gmail.com
Hi,




I am deploying Solr + PHP Solarium on Azure.

The Solr server is running in an Ubuntu VM on Azure. The PHP pages using
Solarium are hosted as a web app on the same VM as the Solr server.

After deployment, I am getting the following HTTP request timeout error:



Fatal error: Uncaught exception 'Solarium\Exception\HttpException' with
message 'Solr HTTP error: HTTP request failed, Connection timed out after
5016 milliseconds' in
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Adapter\Curl.php:195
Stack trace: #0
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Adapter\Curl.php(92):
Solarium\Core\Client\Adapter\Curl->check('', Array, Resource id #3) #1
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Adapter\Curl.php(213):
Solarium\Core\Client\Adapter\Curl->getResponse(Resource id #3, false) #2
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Adapter\Curl.php(68):
Solarium\Core\Client\Adapter\Curl->getData(Object(Solarium\Core\Client\Request),
Object(Solarium\Core\Client\Endpoint)) #3
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Client.php(804):
Solarium\Core\Client\Adapter\Curl->execute(Object(Solarium\Core\Client\Request),
Object(Solarium\Core\Clie in
D:\home\site\wwwroot\vendor\solarium\solarium\library\Solarium\Core\Client\Adapter\Curl.php
on line 195


Any help is much appreciated.


Regards,


Re: Custom .... - Web toolkit for developing Solr Client application

2016-11-07 Thread tesm...@gmail.com
Hi,

Thanks, all, for providing help with my previous question. I am making my
question more generic to make it clearer.

I have developed an index with Lucene/Solr and can search the indexed data
using Solr's 'browse' interface. This interface provides some of the
functionality for my client application.

I understand that it is not advisable to use this interface for a web site
due to security concerns.

My question is:
Are there any web toolkits available for developing Solr-based web client
applications? I need the following features (items 2-4 are sketched below):
1) User authentication
2) Search from one or more fields
3) Search term highlighting
4) Graphical view of the search results (a month-wise popularity index of a
hotel or the like)
5) Grouping similar search results.
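
For reference, a minimal SolrJ sketch (the collection and field names are
hypothetical) of how items 2-4 look against a plain Solr endpoint,
independent of any toolkit; authentication and result grouping would sit on
top of this:

import java.util.Date;
import java.time.Instant;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class HotelSearchSketch {
    public static void main(String[] args) throws Exception {
        SolrClient solr =
            new HttpSolrClient.Builder("http://localhost:8983/solr/hotels").build();

        SolrQuery q = new SolrQuery("name:grand AND city:london"); // 2) multi-field search
        q.setHighlight(true).addHighlightField("description");     // 3) term highlighting
        q.addDateRangeFacet("booking_date",                        // 4) month-wise counts for a bar graph
                Date.from(Instant.parse("2016-01-01T00:00:00Z")),
                Date.from(Instant.parse("2017-01-01T00:00:00Z")),
                "+1MONTH");

        QueryResponse rsp = solr.query(q);
        System.out.println(rsp.getResults().getNumFound() + " hits");
        solr.close();
    }
}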




Regards,


On Fri, Nov 4, 2016 at 8:53 PM, Erik Hatcher <erik.hatc...@gmail.com> wrote:

> What kind of graphical format?
>
> > On Nov 4, 2016, at 14:01, "tesm...@gmail.com" <tesm...@gmail.com> wrote:
> >
> > Hi,
> >
> > My search query comprises more than one field (a search string, a date
> > field, and one optional field).
> >
> > I need to represent these on the web interface to the users.
> >
> > Secondly, I need to represent the search data in graphical format.
> >
> > Is there some Solr web client that provides the above features, or is
> > there a way to modify the default Solr 'browse' interface and add the
> > above options?
> >
> >
> >
> >
> >
> > Regards,
>


Re: Custom user web interface for Solr

2016-11-07 Thread tesm...@gmail.com
Dear Erik,

Thanks for your reply.

A month-wise bar graph of the popularity of a hotel, built from the search
results. These graphs will be generated from the search results and
displayed on demand.

Regards,


On Fri, Nov 4, 2016 at 8:53 PM, Erik Hatcher <erik.hatc...@gmail.com> wrote:

> What kind of graphical format?
>
> > On Nov 4, 2016, at 14:01, "tesm...@gmail.com" <tesm...@gmail.com> wrote:
> >
> > Hi,
> >
> > My search query comprises more than one field (a search string, a date
> > field, and one optional field).
> >
> > I need to represent these on the web interface to the users.
> >
> > Secondly, I need to represent the search data in graphical format.
> >
> > Is there some Solr web client that provides the above features, or is
> > there a way to modify the default Solr 'browse' interface and add the
> > above options?
> >
> >
> >
> >
> >
> > Regards,
>


Custom user web interface for Solr

2016-11-04 Thread tesm...@gmail.com
Hi,

My search query comprises more than one field (a search string, a date
field, and one optional field).

I need to represent these on the web interface to the users.

Secondly, I need to represent the search data in graphical format.

Is there some Solr web client that provides the above features, or is there
a way to modify the default Solr 'browse' interface and add the above
options?





Regards,


Re: Combine Data from PDF + XML

2016-10-26 Thread tesm...@gmail.com
Hi Erick,

Thanks for your reply.

Yes, the XML files contain metadata about the PDF files. I need to search
across both the XML and PDF files and show search results from both sources.
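
For what it's worth, a minimal sketch of one common pattern (the collection
and field names are hypothetical, and it assumes the XML carries only
metadata): send each PDF through Solr's extracting request handler and
attach the fields parsed from the companion XML as literal.* parameters, so
a single indexed document is searchable from both sources:

import java.io.File;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class IndexPdfWithXmlMetadata {
    public static void main(String[] args) throws Exception {
        SolrClient solr =
            new HttpSolrClient.Builder("http://localhost:8983/solr/articles").build();

        ContentStreamUpdateRequest req =
            new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("paper.pdf"), "application/pdf");
        req.setParam("literal.id", "paper-1");
        // The values below would come from the companion XML metadata file
        req.setParam("literal.title", "Title parsed from the XML");
        req.setParam("literal.author", "Author parsed from the XML");

        solr.request(req);
        solr.commit();
        solr.close();
    }
}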


Regards,

On Wed, Oct 26, 2016 at 1:47 AM, Erick Erickson <erickerick...@gmail.com>
wrote:

> First you need to define the problem
>
> what do you mean by "combine"? Do the XML files
> contain, say, metadata about an associated PDF file?
>
> Or are these entirely orthogonal documents that
> you need to index into the same collection?
>
> Best,
> Erick
>
> On Tue, Oct 25, 2016 at 4:18 PM, tesm...@gmail.com <tesm...@gmail.com>
> wrote:
> > Hi,
> >
> > I am new to Apache Solr and am developing a search project. The source
> > data is coming from two sources:
> >
> > 1) XML Files
> >
> > 2) PDF Files
> >
> >
> > I need to combine these two sources for search. I couldn't find an
> > example of combining them. Any help is appreciated.
> >
> >
> > Regards,
>


Combine Data from PDF + XML

2016-10-25 Thread tesm...@gmail.com
Hi,

I am new to Apache Solr and am developing a search project. The source
data is coming from two sources:

1) XML Files

2) PDF Files


I need to combine these two sources for search. I couldn't find an example
of combining them. Any help is appreciated.


Regards,


Jar for Spark development

2016-06-21 Thread tesm...@gmail.com
Hi,

I am a beginner in Spark development, and it took time to configure Eclipse
+ Scala. Is there any tutorial that can help beginners?

I am still struggling to find the Spark JAR files for development. There is
no lib folder in my Spark distribution (neither in the pre-built nor in a
custom build).


Regards,


Re: video stream as input to sequence files

2015-03-10 Thread tesm...@gmail.com
Thanks. Is there some example of this process?


Regards,



On Sat, Feb 28, 2015 at 7:11 AM, daemeon reiydelle daeme...@gmail.com
wrote:

 My thinking ... in your map step, take each frame and tag it with an
 appropriate unique key. Your reducers (if used) then do the frame analysis.
 If doing frame sequences, you need to decide the granularity vs. the time
 each node spends executing. It is the same sort of process that is done
 for, e.g., satellite images undergoing feature recognition analysis.



 "Life should not be a journey to the grave with the intention of arriving
 safely in a pretty and well preserved body, but rather to skid in broadside
 in a cloud of smoke, thoroughly used up, totally worn out, and loudly
 proclaiming 'Wow! What a Ride!'" - Hunter Thompson
 Daemeon C.M. Reiydelle
 USA (+1) 415.501.0198
 London (+44) (0) 20 8144 9872

 On Wed, Feb 25, 2015 at 11:54 PM, tesm...@gmail.com tesm...@gmail.com
 wrote:

 Dear Daemeon,

 Thanks for your reply. Here is my flow.

 I am processing video frames using MapReduce. Presently, I convert the
 video files to individual frames, make a sequence file out of them, and
 transfer the sequence file to HDFS.

 This flow is not optimized and I need to optimize it.

 On Thu, Feb 26, 2015 at 3:00 AM, daemeon reiydelle daeme...@gmail.com
 wrote:

 Can you explain your use case?



 "Life should not be a journey to the grave with the intention of arriving
 safely in a pretty and well preserved body, but rather to skid in broadside
 in a cloud of smoke, thoroughly used up, totally worn out, and loudly
 proclaiming 'Wow! What a Ride!'" - Hunter Thompson
 Daemeon C.M. Reiydelle
 USA (+1) 415.501.0198
 London (+44) (0) 20 8144 9872

 On Wed, Feb 25, 2015 at 4:01 PM, tesm...@gmail.com tesm...@gmail.com
 wrote:

 Hi,

  How can I make my video data files the input for a sequence file, or
  load them into HDFS directly?


 Regards,
 Tariq







Re: t2.micro on AWS; Is it enough for setting up Hadoop cluster ?

2015-03-07 Thread tesm...@gmail.com
Dear Jonathan,

Would you please describe the process of running EMR-based Hadoop for
$15.00? I tried, and my costs were rocketing to around $60 for one hour.

Regards


On 05/03/2015 23:57, Jonathan Aquilina wrote:

Krish, EMR won't cost you much; with all the testing and data we ran
through the test systems, as well as the large amount of data, when
everything was read we paid about 15.00 USD. I honestly do not think that
the specs there would be enough, as Java can be pretty RAM hungry.



---
Regards,
Jonathan Aquilina
Founder Eagle Eye T

 On 2015-03-06 00:41, Krish Donald wrote:

 Hi,

I am new to AWS and would like to set up a Hadoop cluster using Cloudera
Manager for 6-7 nodes.

t2.micro on AWS: is it enough for setting up a Hadoop cluster?
I would like to use the free service as of now.

Please advise.

Thanks
Krish


Re: How to resolve--- Unauthorized request to start container. This token is expired.

2015-02-27 Thread tesm...@gmail.com
Dear Jan,


I changed the date of the node with sudo date *newdatetimestring*

Thanks for your help



Regards,


On Thu, Feb 26, 2015 at 6:31 PM, Jan van Bemmelen j...@tokyoeye.net wrote:

 Hi Tariq,

 You seem to be using debian or ubuntu. The documentation here will guide
 you through setting up ntp:
 http://www.cyberciti.biz/faq/debian-ubuntu-linux-install-ntpd/ . When you
  have finished these steps you can check the systems' clocks using the
  'date' command. The differences between the servers should be minimal.

 Regards,
 Jan


 On 26 Feb 2015, at 19:19, tesm...@gmail.com wrote:

 Thanks Jan. I did the following:

 1) Manually set the timezone of all the nodes using  sudo
  dpkg-reconfigure tzdata
 2) Rebooted the nodes

 Still having the same exception.

 How can I configure NTP?

 Regards,
 Tariq


 On Thu, Feb 26, 2015 at 5:33 PM, Jan van Bemmelen j...@tokyoeye.net
 wrote:

 Could you check for any time differences between your servers? If so,
 please install and run NTP, and retry your job.

 Regards,
 Jan


 On 26 Feb 2015, at 17:57, tesm...@gmail.com wrote:

 I am getting  Unauthorized request to start container.  This token is
 expired.
  How can I resolve it? The problem is reported on different forums, but I
  could not find a solution to it.


 Below is the execution log

 15/02/26 16:41:02 INFO impl.YarnClientImpl: Submitted application
 application_1424968835929_0001
 15/02/26 16:41:02 INFO mapreduce.Job: The url to track the job:
 http://101-master15:8088/proxy/application_1424968835929_0001/
 15/02/26 16:41:02 INFO mapreduce.Job: Running job: job_1424968835929_0001
 15/02/26 16:41:04 INFO mapreduce.Job: Job job_1424968835929_0001 running
 in uber mode : false
 15/02/26 16:41:04 INFO mapreduce.Job:  map 0% reduce 0%
 15/02/26 16:41:04 INFO mapreduce.Job: Job job_1424968835929_0001 failed
 with state FAILED due to: Application application_1424968835929_0001 failed
 2 times due to Error launching appattempt_1424968835929_0001_02. Got
 exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized
 request to start container.
 This token is expired. current time is 1424969604829 found 1424969463686
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at
 org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
 at
 org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
 at
 org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:122)
 at
 org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:249)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 . Failing the application.
 15/02/26 16:41:04 INFO mapreduce.Job: Counters: 0
 Time taken: 0 days, 0 hours, 0 minutes, 9 seconds.







Re: How to resolve--- Unauthorized request to start container. This token is expired.

2015-02-26 Thread tesm...@gmail.com
Thanks Jan. I did the following:

1) Manually set the timezone of all the nodes using  sudo
 dpkg-reconfigure tzdata
2) Rebooted the nodes

Still having the same exception.

How can I configure NTP?

Regards,
Tariq


On Thu, Feb 26, 2015 at 5:33 PM, Jan van Bemmelen j...@tokyoeye.net wrote:

 Could you check for any time differences between your servers? If so,
 please install and run NTP, and retry your job.

 Regards,
 Jan


 On 26 Feb 2015, at 17:57, tesm...@gmail.com wrote:

 I am getting  Unauthorized request to start container.  This token is
 expired.
 How can I resolve it? The problem is reported on different forums, but I
 could not find a solution to it.


 Below is the execution log

 15/02/26 16:41:02 INFO impl.YarnClientImpl: Submitted application
 application_1424968835929_0001
 15/02/26 16:41:02 INFO mapreduce.Job: The url to track the job:
 http://101-master15:8088/proxy/application_1424968835929_0001/
 15/02/26 16:41:02 INFO mapreduce.Job: Running job: job_1424968835929_0001
 15/02/26 16:41:04 INFO mapreduce.Job: Job job_1424968835929_0001 running
 in uber mode : false
 15/02/26 16:41:04 INFO mapreduce.Job:  map 0% reduce 0%
 15/02/26 16:41:04 INFO mapreduce.Job: Job job_1424968835929_0001 failed
 with state FAILED due to: Application application_1424968835929_0001 failed
 2 times due to Error launching appattempt_1424968835929_0001_02. Got
 exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized
 request to start container.
 This token is expired. current time is 1424969604829 found 1424969463686
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at
 org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
 at
 org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
 at
 org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:122)
 at
 org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:249)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 . Failing the application.
 15/02/26 16:41:04 INFO mapreduce.Job: Counters: 0
 Time taken: 0 days, 0 hours, 0 minutes, 9 seconds.





Re: How to resolve--- Unauthorized request to start container. This token is expired.

2015-02-26 Thread tesm...@gmail.com
Thanks Jan,

I followed the link and rebooted the node.

Still no success.

The time on this node is about 13 minutes behind the other nodes. Any other
suggestions, please?

This node is working as my namenode.




On Thu, Feb 26, 2015 at 6:31 PM, Jan van Bemmelen j...@tokyoeye.net wrote:

 Hi Tariq,

 You seem to be using debian or ubuntu. The documentation here will guide
 you through setting up ntp:
 http://www.cyberciti.biz/faq/debian-ubuntu-linux-install-ntpd/ . When you
  have finished these steps you can check the systems' clocks using the
  'date' command. The differences between the servers should be minimal.

 Regards,
 Jan


 On 26 Feb 2015, at 19:19, tesm...@gmail.com wrote:

 Thanks Jan. I did the following:

 1) Manually set the timezone of all the nodes using  sudo
  dpkg-reconfigure tzdata
 2) Rebooted the nodes

 Still having the same exception.

 How can I configure NTP?

 Regards,
 Tariq


 On Thu, Feb 26, 2015 at 5:33 PM, Jan van Bemmelen j...@tokyoeye.net
 wrote:

 Could you check for any time differences between your servers? If so,
 please install and run NTP, and retry your job.

 Regards,
 Jan


 On 26 Feb 2015, at 17:57, tesm...@gmail.com wrote:

 I am getting  Unauthorized request to start container.  This token is
 expired.
  How can I resolve it? The problem is reported on different forums, but I
  could not find a solution to it.


 Below is the execution log

 15/02/26 16:41:02 INFO impl.YarnClientImpl: Submitted application
 application_1424968835929_0001
 15/02/26 16:41:02 INFO mapreduce.Job: The url to track the job:
 http://101-master15:8088/proxy/application_1424968835929_0001/
 15/02/26 16:41:02 INFO mapreduce.Job: Running job: job_1424968835929_0001
 15/02/26 16:41:04 INFO mapreduce.Job: Job job_1424968835929_0001 running
 in uber mode : false
 15/02/26 16:41:04 INFO mapreduce.Job:  map 0% reduce 0%
 15/02/26 16:41:04 INFO mapreduce.Job: Job job_1424968835929_0001 failed
 with state FAILED due to: Application application_1424968835929_0001 failed
 2 times due to Error launching appattempt_1424968835929_0001_02. Got
 exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized
 request to start container.
 This token is expired. current time is 1424969604829 found 1424969463686
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at
 org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
 at
 org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
 at
 org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:122)
 at
 org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:249)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 . Failing the application.
 15/02/26 16:41:04 INFO mapreduce.Job: Counters: 0
 Time taken: 0 days, 0 hours, 0 minutes, 9 seconds.







Re: java.net.UnknownHostException on one node only

2015-02-25 Thread tesm...@gmail.com
Thanks Varun,

Where should I check to resolve it?


Regards,
Tariq

On Mon, Feb 23, 2015 at 4:07 AM, Varun Kumar varun@gmail.com wrote:

 Hi Tariq,

  The issue looks like a DNS configuration issue.


 On Sun, Feb 22, 2015 at 3:51 PM, tesm...@gmail.com tesm...@gmail.com
 wrote:

  I am getting a java.net.UnknownHostException continuously on one node
  during Hadoop MapReduce execution.

  That node is accessible via SSH. This node is shown by the yarn node -list
  and hdfs dfsadmin -report queries.

 Below is the log from execution

 15/02/22 20:17:42 INFO mapreduce.Job: Task Id :
 attempt_1424622614381_0008_m_43_0, Status : FAILED
 Container launch failed for container_1424622614381_0008_01_16 :
 java.lang.IllegalArgumentException: *java.net.UnknownHostException:
 101-master10*
 at
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
 at
 org.apache.hadoop.security.SecurityUtil.setTokenService(SecurityUtil.java:352)
 at
 org.apache.hadoop.yarn.util.ConverterUtils.convertFromYarn(ConverterUtils.java:237)
 at
 org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:218)
 at
  org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.<init>(ContainerManagementProtocolProxy.java:196)
 at
 org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
 at
 org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
 at
 org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
 at
 org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 *Caused by: java.net.UnknownHostException: 101-master10*
 ... 12 more



 15/02/22 20:17:44 INFO

 Regards,
 Tariq




 --
 Regards,
 Varun Kumar.P



HDFS data after nodes become unavailable?

2015-02-25 Thread tesm...@gmail.com
Dear all,

I have transferred the data from local storage to HDFS in my 10-node
Hadoop cluster. The replication factor is 3.

Some nodes, say 3, are not available after some time. I can't use those
nodes for computation or storage of data.

What will happen to the data stored on HDFS on those nodes?

Do I need to remove all the data from HDFS and copy it again?

Regards,


video stream as input to sequence files

2015-02-25 Thread tesm...@gmail.com
Hi,

How can I make my video data files the input for a sequence file, or load
them into HDFS directly?


Regards,
Tariq


Re: video stream as input to sequence files

2015-02-25 Thread tesm...@gmail.com
Dear Daemeon,

Thanks for your reply. Here is my flow.

I am processing video frames using MapReduce. Presently, I convert the
video files to individual frames, make a sequence file out of them, and
transfer the sequence file to HDFS.

This flow is not optimized and I need to optimize it.
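
For reference, a minimal sketch of collapsing the two steps (the paths and
the frame-decoder stub are hypothetical): append each frame as a key/value
record directly into a SequenceFile on HDFS, instead of materializing the
individual frames locally first:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class FramesToSequenceFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path out = new Path("hdfs:///video/frames.seq");

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(out),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            for (int i = 0; i < 100; i++) {
                // decodeNextFrame() stands in for whatever decoder is used
                byte[] frame = decodeNextFrame(i);
                writer.append(new Text("frame-" + i), new BytesWritable(frame));
            }
        }
    }

    // Hypothetical decoder stub; a real one would pull frames from the video
    private static byte[] decodeNextFrame(int i) {
        return new byte[0];
    }
}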

On Thu, Feb 26, 2015 at 3:00 AM, daemeon reiydelle daeme...@gmail.com
wrote:

 Can you explain your use case?



 "Life should not be a journey to the grave with the intention of arriving
 safely in a pretty and well preserved body, but rather to skid in broadside
 in a cloud of smoke, thoroughly used up, totally worn out, and loudly
 proclaiming 'Wow! What a Ride!'" - Hunter Thompson
 Daemeon C.M. Reiydelle
 USA (+1) 415.501.0198
 London (+44) (0) 20 8144 9872

 On Wed, Feb 25, 2015 at 4:01 PM, tesm...@gmail.com tesm...@gmail.com
 wrote:

 Hi,

  How can I make my video data files the input for a sequence file, or
  load them into HDFS directly?


 Regards,
 Tariq





java.net.UnknownHostException on one node only

2015-02-22 Thread tesm...@gmail.com
I am getting a java.net.UnknownHostException continuously on one node
during Hadoop MapReduce execution.

That node is accessible via SSH. This node is shown by the yarn node -list
and hdfs dfsadmin -report queries.

Below is the log from execution

15/02/22 20:17:42 INFO mapreduce.Job: Task Id :
attempt_1424622614381_0008_m_43_0, Status : FAILED
Container launch failed for container_1424622614381_0008_01_16 :
java.lang.IllegalArgumentException: *java.net.UnknownHostException:
101-master10*
at
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
at
org.apache.hadoop.security.SecurityUtil.setTokenService(SecurityUtil.java:352)
at
org.apache.hadoop.yarn.util.ConverterUtils.convertFromYarn(ConverterUtils.java:237)
at
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:218)
at
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.<init>(ContainerManagementProtocolProxy.java:196)
at
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
at
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
at
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
at
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
*Caused by: java.net.UnknownHostException: 101-master10*
... 12 more



15/02/22 20:17:44 INFO

Regards,
Tariq


Running MapReduce jobs in batch mode on different data sets

2015-02-21 Thread tesm...@gmail.com
Hi,

Is it possible to run jobs on Hadoop in batch mode?

I have 5 different datasets in HDFS and need to run the same MapReduce
application on these datasets one after the other.

Right now I am doing it manually. How can I automate this?

How can I save the log of each execution to text files for later
processing?
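
A minimal sketch of one way to automate it (the jar, main class, and
dataset paths are hypothetical): drive the runs from a small Java program,
one after the other, redirecting each run's console output to its own log
file:

import java.io.File;
import java.io.IOException;

public class BatchRunner {
    public static void main(String[] args)
            throws IOException, InterruptedException {
        String[] datasets = {"/data/set1", "/data/set2", "/data/set3",
                             "/data/set4", "/data/set5"};
        for (String in : datasets) {
            String out = in + "-out";
            ProcessBuilder pb = new ProcessBuilder(
                "hadoop", "jar", "myapp.jar", "com.example.MyJob", in, out);
            pb.redirectErrorStream(true); // merge stderr into stdout
            pb.redirectOutput(new File("run-" + new File(in).getName() + ".log"));
            int rc = pb.start().waitFor(); // block until this run finishes
            System.out.println(in + " finished with exit code " + rc);
        }
    }
}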

Regards,
Tariq


Scheduling in YARN according to available resources

2015-02-20 Thread tesm...@gmail.com
I have 7 nodes in my Hadoop cluster [8GB RAM and 4 VCPUs on each node], 1
namenode + 6 datanodes.

I followed the link from Hortonworks [
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html
] and made calculations according to the hardware configuration of my
nodes. I have added the updated mapred-site.xml and yarn-site.xml files in
my question. Still my application is crashing with the same exception.

My mapreduce application has 34 input splits with a block size of 128MB.

**mapred-site.xml** has the  following properties:

mapreduce.framework.name  = yarn
mapred.child.java.opts= -Xmx2048m
mapreduce.map.memory.mb   = 4096
mapreduce.map.java.opts   = -Xmx2048m

**yarn-site.xml** has the  following properties:

yarn.resourcemanager.hostname= hadoop-master
yarn.nodemanager.aux-services= mapreduce_shuffle
yarn.nodemanager.resource.memory-mb  = 6144
yarn.scheduler.minimum-allocation-mb = 2048
yarn.scheduler.maximum-allocation-mb = 6144


 Exception from container-launch: ExitCodeException exitCode=134:
/bin/bash: line 1:  3876 Aborted  (core dumped)
/usr/lib/jvm/java-7-openjdk-amd64/bin/java
-Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx8192m
-Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_11/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11
-Dyarn.app.container.log.filesize=0
-Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild
192.168.0.12 50842 attempt_1424264025191_0002_m_05_0 11 

/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11/stdout
2

/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11/stderr


How can I avoid this? Any help is appreciated.

It looks to me that YARN is trying to launch all the containers
simultaneously and not according to the available resources. Is there an
option to restrict the number of containers on Hadoop nodes?

Regards,
Tariq


YARN container launch failed exception and mapred-site.xml configuration

2015-02-20 Thread tesm...@gmail.com
I have 7 nodes in my Hadoop cluster [8GB RAM and 4 VCPUs on each node], 1
namenode + 6 datanodes.

**EDIT-1 @ARNON:** I followed the link, made calculations according to the
hardware configuration of my nodes, and have added the updated
mapred-site.xml and yarn-site.xml files in my question. Still my
application is crashing with the same exception.

My mapreduce application has 34 input splits with a block size of 128MB.

**mapred-site.xml** has the  following properties:

mapreduce.framework.name  = yarn
mapred.child.java.opts= -Xmx2048m
mapreduce.map.memory.mb   = 4096
mapreduce.map.java.opts   = -Xmx2048m

**yarn-site.xml** has the  following properties:

yarn.resourcemanager.hostname= hadoop-master
yarn.nodemanager.aux-services= mapreduce_shuffle
yarn.nodemanager.resource.memory-mb  = 6144
yarn.scheduler.minimum-allocation-mb = 2048
yarn.scheduler.maximum-allocation-mb = 6144


 Exception from container-launch: ExitCodeException exitCode=134:
/bin/bash: line 1:  3876 Aborted  (core dumped)
/usr/lib/jvm/java-7-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx8192m
-Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_11/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 192.168.0.12 50842
attempt_1424264025191_0002_m_05_0 11 

/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11/stdout
2

/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11/stderr


How can I avoid this? Any help is appreciated.

Is there an option to restrict the number of containers on Hadoop nodes?


Re: Scheduling in YARN according to available resources

2015-02-20 Thread tesm...@gmail.com
Thanks for your answer, Nair.
Is installing the Oracle JDK on Ubuntu as complicated as described in this
link?
http://askubuntu.com/questions/56104/how-can-i-install-sun-oracles-proprietary-java-jdk-6-7-8-or-jre

Is there an alternative?

Regards


On Sat, Feb 21, 2015 at 6:50 AM, R Nair ravishankar.n...@gmail.com wrote:

 I had an issue very similar, I changed and used Oracle JDK. There is
 nothing I see wrong with your configuration in my first look, thanks

 Regards,
 Nair

 On Sat, Feb 21, 2015 at 1:42 AM, tesm...@gmail.com tesm...@gmail.com
 wrote:

 I have 7 nodes in my Hadoop cluster [8GB RAM and 4 VCPUs on each node], 1
 namenode + 6 datanodes.

 I followed the link from Hortonworks [
 http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html
 ] and made calculations according to the hardware configuration of my
 nodes. I have added the updated mapred-site.xml and yarn-site.xml files in
 my question. Still my application is crashing with the same exception.

 My mapreduce application has 34 input splits with a block size of 128MB.

 **mapred-site.xml** has the  following properties:

 mapreduce.framework.name  = yarn
 mapred.child.java.opts= -Xmx2048m
 mapreduce.map.memory.mb   = 4096
 mapreduce.map.java.opts   = -Xmx2048m

 **yarn-site.xml** has the  following properties:

 yarn.resourcemanager.hostname= hadoop-master
 yarn.nodemanager.aux-services= mapreduce_shuffle
 yarn.nodemanager.resource.memory-mb  = 6144
 yarn.scheduler.minimum-allocation-mb = 2048
 yarn.scheduler.maximum-allocation-mb = 6144


  Exception from container-launch: ExitCodeException exitCode=134:
 /bin/bash: line 1:  3876 Aborted  (core dumped)
 /usr/lib/jvm/java-7-openjdk-amd64/bin/java
 -Djava.net.preferIPv4Stack=true
 -Dhadoop.metrics.log.level=WARN -Xmx8192m
 -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_11/tmp
 -Dlog4j.configuration=container-log4j.properties
 -Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11
 -Dyarn.app.container.log.filesize=0
 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild
 192.168.0.12 50842 attempt_1424264025191_0002_m_05_0 11 

 /home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11/stdout
 2

 /home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11/stderr


 How can I avoid this? Any help is appreciated.

 It looks to me that YARN is trying to launch all the containers
 simultaneously and not according to the available resources. Is there
 an option to restrict the number of containers on Hadoop nodes?

 Regards,
 Tariq




 --
 Warmest Regards,

 Ravi Shankar



Fwd: YARN container launch failed exception and mapred-site.xml configuration

2015-02-20 Thread tesm...@gmail.com
I have 7 nodes in my Hadoop cluster [8GB RAM and 4 VCPUs on each node], 1
namenode + 6 datanodes.

I followed the link to Hortonworks [
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html]
and made calculations according to the hardware configuration of my nodes,
and have added the updated mapred-site.xml and yarn-site.xml files in my
question. Still my application is crashing with the same exception.

My mapreduce application has 34 input splits with a block size of 128MB.

**mapred-site.xml** has the  following properties:

mapreduce.framework.name  = yarn
mapred.child.java.opts= -Xmx2048m
mapreduce.map.memory.mb   = 4096
mapreduce.map.java.opts   = -Xmx2048m

**yarn-site.xml** has the  following properties:

yarn.resourcemanager.hostname= hadoop-master
yarn.nodemanager.aux-services= mapreduce_shuffle
yarn.nodemanager.resource.memory-mb  = 6144
yarn.scheduler.minimum-allocation-mb = 2048
yarn.scheduler.maximum-allocation-mb = 6144


 Exception from container-launch: ExitCodeException exitCode=134:
/bin/bash: line 1:  3876 Aborted  (core dumped)
/usr/lib/jvm/java-7-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true
-Dhadoop.metrics.log.level=WARN -Xmx8192m
-Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_11/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 192.168.0.12 50842
attempt_1424264025191_0002_m_05_0 11 

/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11/stdout
2

/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_11/stderr


How can I avoid this? Any help is appreciated.

Is there an option to restrict the number of containers on Hadoop nodes?


Re: Reconfiguration Problem

2014-08-06 Thread tesm...@gmail.com
Dear Richard,

Attached is the console log for LyX.

Starting LyX from the console solved the reconfiguration errors.
Reconfiguration still fails when LyX is started directly from the GUI.

I am still having errors when exporting a LyX document to OpenDocument.

Regards,
Tariq




On Tue, Aug 5, 2014 at 11:49 AM, Richard Heck rgh...@lyx.org wrote:

 On 08/05/2014 11:45 AM, tesm...@gmail.com wrote:

 Dear Richard,

 I get the following error message during LyX reconfiguration:

 The script '/usr/share/lyx/scriptes/TexFile.py' failed

 Secondly, all the document classes are unavailable in my LyX installation.

 I have LyX 2.1.1 (July 2014 release) and have installed texlive-base
 and texlive-extra.


 This sounds like a larger configuration issue.

 Can you run LyX from a terminal and see what error messages you get when
 you try to reconfigure?

 Richard

 PS I'm cc'ing this back to the devel list. I expect we will need more
 help. Remember to reply to all when you do reply.



tariq@ubuntu:~$ lyx
Creating file /tmp/lyx_tmpdir.DvnJAAHY3183/Buffer_importStringUX3183.lyx
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
(./textLyx.tex
LaTeX2e 2011/06/27
Babel v3.8m and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.

support/Systemcall.cpp (292): Systemcall: 'pdflatex  textLyx.tex' finished 
with exit code 1
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
(./textLyx.tex
LaTeX2e 2011/06/27
Babel v3.8m and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.

mk4ht (2008-06-28-19:09)
/usr/share/tex4ht/htlatex textLyx.tex xhtml,ooffice ooffice/! -cmozhtf 
-cooxtpipes -coo
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
LaTeX2e 2011/06/27
Babel v3.8m and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.
(./textLyx.tex
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
LaTeX2e 2011/06/27
Babel v3.8m and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.
(./textLyx.tex
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
LaTeX2e 2011/06/27
Babel v3.8m and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.
(./textLyx.tex

tex4ht.c (2009-01-31-07:33 kpathsea)
tex4ht -f/textLyx.tex 
  -i/usr/share/texmf/tex4ht/ht-fonts/ooffice/! 
  -cmozhtf 
--- warning --- Can't find/open file `tex4ht.env | .tex4ht'
--- error --- Illegal storage address
--- warning --- Can't find/open file `textLyx.lg'

t4ht.c (2009-01-31-07:34 kpathsea)
t4ht -f/textLyx.tex 
  -cooxtpipes 
  -coo 
(/usr/share/texmf/tex4ht/tex4ht.env)
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
(./textLyx.tex
LaTeX2e 2011/06/27
Babel v3.8m and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.

textLyx.tex:2   Unknown command '\batchmode'
textLyx.tex:5   Document format IEEEtran unknown, using article format
textLyx.tex:5   Package/option 'conference' unknown.





Re: Reconfiguration Problem

2014-08-06 Thread tesm...@gmail.com
Dear Richard,

Thanks for your reply.

I would like to participate in testing the GSoC project on LyX-to-Word
conversion [http://wiki.lyx.org/GSoC/LyxToWordConversion].

Would someone please add me to the list and provide the binaries/code for
testing?

Regards,
Tariq


On Wed, Aug 6, 2014 at 3:40 PM, Richard Heck rgh...@lyx.org wrote:

 On 08/06/2014 04:52 AM, tesm...@gmail.com wrote:

 Dear Richard,

 Attached is the console log for LyX.

 Starting LyX from the console solved the reconfiguration errors.
 Reconfiguration still fails when LyX is started directly from the GUI.


 That probably means there is some difference in the two environments,
 e.g., in the paths. It is possible that LyX is calling python 3.x from the
 GUI and python 2.x from the console. Hard to know. You could try looking in
 the file $HOME/.xsession-errors and see if there is anything useful there.


  I am still having errors when exporting a LyX document to OpenDocument.


 Unfortunately, there is not a lot we can do about that. The tex4ht package
 is known to have a lot of limitations. That said, there is a GSOC project
 going on right now that is focused on converting back and forth between LyX
 and ODT. If you'd be interested in helping to test that branch, send a
 message to the devel list.

 Richard




Re: Reconfiguration Problem

2014-08-06 Thread tesm...@gmail.com
Dear Richard,

Attached is the console log for LyX.

Starting LyX from the console solved the reconfiguration errors.
Reconfiguration still fails when LyX is started directly from the GUI.

I am still having errors when exporting a LyX document to OpenDocument.

Regards,
Tariq




On Tue, Aug 5, 2014 at 11:49 AM, Richard Heck <rgh...@lyx.org> wrote:

> On 08/05/2014 11:45 AM, tesm...@gmail.com wrote:
>
>> Dear Richard,
>>
>> I get the following error message during LyX reconfiguration:
>>
>> The script '/usr/share/lyx/scriptes/TexFile.py' failed
>>
>> Secondly, all the document classes are unavailable in my LyX installation.
>>
>> I have LyX 2.1.1 (July 2014 release) and have installed texlive-base
>> and texlive-extra.
>>
>
> This sounds like a larger configuration issue.
>
> Can you run LyX from a terminal and see what error messages you get when
> you try to reconfigure?
>
> Richard
>
> PS I'm cc'ing this back to the devel list. I expect we will need more
> help. Remember to reply to all when you do reply.
>
>
>
tariq@ubuntu:~$ lyx
Creating file /tmp/lyx_tmpdir.DvnJAAHY3183/Buffer_importStringUX3183.lyx
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
(./textLyx.tex
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.

support/Systemcall.cpp (292): Systemcall: 'pdflatex  "textLyx.tex"' finished 
with exit code 1
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
(./textLyx.tex
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.

mk4ht (2008-06-28-19:09)
/usr/share/tex4ht/htlatex textLyx.tex "xhtml,ooffice" "ooffice/! -cmozhtf" 
"-cooxtpipes -coo"
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.
(./textLyx.tex
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.
(./textLyx.tex
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.
(./textLyx.tex

tex4ht.c (2009-01-31-07:33 kpathsea)
tex4ht -f/textLyx.tex 
  -i/usr/share/texmf/tex4ht/ht-fonts/ooffice/! 
  -cmozhtf 
--- warning --- Can't find/open file `tex4ht.env | .tex4ht'
--- error --- Illegal storage address
--- warning --- Can't find/open file `textLyx.lg'

t4ht.c (2009-01-31-07:34 kpathsea)
t4ht -f/textLyx.tex 
  -cooxtpipes 
  -coo 
(/usr/share/texmf/tex4ht/tex4ht.env)
This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
 restricted \write18 enabled.
entering extended mode
(./textLyx.tex
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, lo
aded.

textLyx.tex:2   Unknown command '\batchmode'
textLyx.tex:5   Document format IEEEtran unknown, using article format
textLyx.tex:5   Package/option 'conference' unknown.





Re: Reconfiguration Problem

2014-08-06 Thread tesm...@gmail.com
Dear Richard,

Thanks for your reply.

I would like to participate in testing the GSoC project on LyX-to-Word
conversion [http://wiki.lyx.org/GSoC/LyxToWordConversion].

Would someone please add me to the list and provide the binaries/code for
testing?

Regards,
Tariq


On Wed, Aug 6, 2014 at 3:40 PM, Richard Heck <rgh...@lyx.org> wrote:

> On 08/06/2014 04:52 AM, tesm...@gmail.com wrote:
>
>> Dear Richard,
>>
>> Attached is the console log for LyX.
>>
>> Starting LyX from the console solved the reconfiguration errors.
>> Reconfiguration still fails when LyX is started directly from the GUI.
>>
>
> That probably means there is some difference in the two environments,
> e.g., in the paths. It is possible that LyX is calling python 3.x from the
> GUI and python 2.x from the console. Hard to know. You could try looking in
> the file $HOME/.xsession-errors and see if there is anything useful there.
>
>
>  I am still having errors when exporting a LyX document to OpenDocument.
>>
>
> Unfortunately, there is not a lot we can do about that. The tex4ht package
> is known to have a lot of limitations. That said, there is a GSOC project
> going on right now that is focused on converting back and forth between LyX
> and ODT. If you'd be interested in helping to test that branch, send a
> message to the devel list.
>
> Richard
>
>


Export to OpenOffice in LyX 2.1.1

2014-08-04 Thread tesm...@gmail.com
Hi,

I exported LyX documents to OpenOffice documents about a year ago. I
recently updated to LyX 2.1.1. When I try to export my .lyx file to the
OpenOffice format, I get the message

No document support is available for this format

Is this support discontinued in LyX 2.1.1?

Regards,



Export to OpenOffice in LyX 2.1.1

2014-08-04 Thread tesm...@gmail.com
Hi,

I exported LyX documents to OpenOffice documents about a year ago. I
recently updated to LyX 2.1.1. When I try to export my .lyx file to the
OpenOffice format, I get the message

"No document support is available for this format"

Is this support discontinued in LyX 2.1.1?

Regards,