Re: [DBWG] 1x whois server not responding

2024-04-25 Thread Madhvi Gokool via DBWG

Dear Frank

We have taken note of the issue you have raised. The AFRINIC team are  
investigating the matter and will provide feedback in due course.


Regards

Madhvi

On 24/04/2024 08:54, Frank Habicht wrote:

Hi AfriNIC NOC,

In DNS, whois.afrinic.net resolves to the IPs 196.192.115.21 and
2001:42d0:2:601::21 (amongst others).


This might be the same host.

These IPs are responding to ping, but not to whois.

This might have started between 0:20 and 1:20 UTC

I believe the https://status.afrinic.net page is wrong to state that 
"WHOIS DB Queries" - whois.afrinic.net:43 are working.

I'd say only 3 out of the 4 are working.

Please check.

Thanks,
Frank


[frank@fisi ~]$ date
Wed Apr 24 07:40:04 EAT 2024
[frank@fisi ~]$ date -u
Wed Apr 24 04:40:07 UTC 2024
[frank@fisi ~]$ ping -c 5 196.192.115.21
PING 196.192.115.21 (196.192.115.21) 56(84) bytes of data.
64 bytes from 196.192.115.21: icmp_seq=1 ttl=54 time=49.9 ms
64 bytes from 196.192.115.21: icmp_seq=2 ttl=54 time=49.9 ms
64 bytes from 196.192.115.21: icmp_seq=3 ttl=54 time=49.8 ms
64 bytes from 196.192.115.21: icmp_seq=4 ttl=54 time=49.9 ms
64 bytes from 196.192.115.21: icmp_seq=5 ttl=54 time=50.0 ms

--- 196.192.115.21 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4055ms
rtt min/avg/max/mdev = 49.879/49.968/50.067/0.062 ms
[frank@fisi ~]$ whois -h 196.192.115.21 AS37084
[Querying 196.192.115.21]
[196.192.115.21]
[frank@fisi ~]$ ping6 -c 5 2001:42d0:2:601::21
PING 2001:42d0:2:601::21(2001:42d0:2:601::21) 56 data bytes
64 bytes from 2001:42d0:2:601::21: icmp_seq=1 ttl=56 time=49.6 ms
64 bytes from 2001:42d0:2:601::21: icmp_seq=2 ttl=56 time=49.2 ms
64 bytes from 2001:42d0:2:601::21: icmp_seq=3 ttl=56 time=49.0 ms
64 bytes from 2001:42d0:2:601::21: icmp_seq=4 ttl=56 time=49.5 ms
64 bytes from 2001:42d0:2:601::21: icmp_seq=5 ttl=56 time=49.3 ms

--- 2001:42d0:2:601::21 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4054ms
rtt min/avg/max/mdev = 49.089/49.359/49.613/0.239 ms
[frank@fisi ~]$ whois -h 2001:42d0:2:601::21 AS37084
[Querying 2001:42d0:2:601::21]
[Unable to connect to remote host]
You have new mail in /var/spool/mail/frank
[frank@fisi ~]$ date -u
Wed Apr 24 04:49:13 UTC 2024
[frank@fisi ~]$ dig whois.afrinic.net. +short
whois-public.afrinic.net.
196.192.115.21
196.216.2.21
196.216.2.20
196.192.115.22
[frank@fisi ~]$ whois -h 196.192.115.22 AS37084
[Querying 196.192.115.22]
[196.192.115.22]
% This is the AfriNIC Whois server.
% The AFRINIC whois database is subject to  the following terms of 
Use. See https://afrinic.net/whois/terms


% Note: this output has been filtered.
%   To receive output for a database update, use the "-B" flag.

% Information related to 'AS37084'

% No abuse contact registered for AS37084

aut-num:    AS37084
as-name:    simbanet-tz
descr:  Simbanet (T) Ltd
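
For anyone who wants to repeat this check across every published address, here
is a small standalone sketch (my own, not AFRINIC tooling; the host and query
are the ones used above, and the 5-second timeouts are arbitrary) that connects
to each resolved address on TCP port 43 and reports which backends answer:

// Minimal probe: resolve the whois host, send the same query to every
// returned address on TCP/43, and report whether anything comes back.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class WhoisProbe {
    public static void main(String[] args) throws Exception {
        String host = "whois.afrinic.net";
        String query = "AS37084";
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(addr, 43), 5000);
                s.setSoTimeout(5000);
                OutputStream out = s.getOutputStream();
                out.write((query + "\r\n").getBytes("US-ASCII"));
                out.flush();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream(), "UTF-8"));
                String first = in.readLine(); // null: server closed without answering
                System.out.println(addr.getHostAddress() + " -> "
                        + (first == null ? "no whois response" : "OK"));
            } catch (IOException e) {
                System.out.println(addr.getHostAddress() + " -> error: " + e.getMessage());
            }
        }
    }
}

A connection that is accepted but never returns a line, or one that cannot be
established at all, matches the behaviour shown in the transcript above.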

___
DBWG mailing list
DBWG@afrinic.net
https://lists.afrinic.net/mailman/listinfo/dbwg


[jenkinsci/aws-codepipeline-plugin] 661198: [maven-release-plugin] prepare for next developmen...

2021-01-27 Thread 'Madhvi Dua' via Jenkins Commits
  Branch: refs/heads/master
  Home:   https://github.com/jenkinsci/aws-codepipeline-plugin
  Commit: 661198ad6f3f0aa2edb5b56c315b1f7df471e16f
  
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/661198ad6f3f0aa2edb5b56c315b1f7df471e16f
  Author: Madhvi Dua 
  Date:   2021-01-27 (Wed, 27 Jan 2021)

  Changed paths:
M pom.xml

  Log Message:
  ---
  [maven-release-plugin] prepare for next development iteration




[jenkinsci/aws-codepipeline-plugin]

2021-01-27 Thread 'Madhvi Dua' via Jenkins Commits
  Branch: refs/tags/aws-codepipeline-0.44
  Home:   https://github.com/jenkinsci/aws-codepipeline-plugin



[jenkinsci/aws-codepipeline-plugin] 72f475: Add MXP/EU-SOUTH-1 as a new region

2021-01-27 Thread 'Madhvi Dua' via Jenkins Commits
  Branch: refs/heads/master
  Home:   https://github.com/jenkinsci/aws-codepipeline-plugin
  Commit: 72f475a5c946a35e0777147598ec88abf428c058
  
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/72f475a5c946a35e0777147598ec88abf428c058
  Author: Madhvi Dua 
  Date:   2021-01-27 (Wed, 27 Jan 2021)

  Changed paths:
M pom.xml
M src/main/java/com/amazonaws/codepipeline/jenkinsplugin/AWSCodePipelineSCM.java

  Log Message:
  ---
  Add MXP/EU-SOUTH-1 as a new region


  Commit: 2fdf3775de6929f651e57515cbc82a1b35ff2171
  
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/2fdf3775de6929f651e57515cbc82a1b35ff2171
  Author: Madhvi Dua 
  Date:   2021-01-27 (Wed, 27 Jan 2021)

  Changed paths:
M pom.xml

  Log Message:
  ---
  [maven-release-plugin] prepare release aws-codepipeline-0.44


Compare: 
https://github.com/jenkinsci/aws-codepipeline-plugin/compare/cd7a68e47b4b...2fdf3775de69



[jenkinsci/aws-codepipeline-plugin] daba26: Bump junit from 4.13 to 4.13.1

2021-01-08 Thread 'Madhvi Dua' via Jenkins Commits
  Branch: refs/heads/master
  Home:   https://github.com/jenkinsci/aws-codepipeline-plugin
  Commit: daba2600570b4002200c32434d35d6155ec6831c
  
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/daba2600570b4002200c32434d35d6155ec6831c
  Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
  Date:   2020-10-16 (Fri, 16 Oct 2020)

  Changed paths:
M pom.xml

  Log Message:
  ---
  Bump junit from 4.13 to 4.13.1

Bumps [junit](https://github.com/junit-team/junit4) from 4.13 to 4.13.1.
- [Release notes](https://github.com/junit-team/junit4/releases)
- [Changelog](https://github.com/junit-team/junit4/blob/main/doc/ReleaseNotes4.13.1.md)
- [Commits](https://github.com/junit-team/junit4/compare/r4.13...r4.13.1)

Signed-off-by: dependabot[bot] 


  Commit: 2f8269a6ab1b7b276fe6a6bf7a25bc2f105b41f7
  
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/2f8269a6ab1b7b276fe6a6bf7a25bc2f105b41f7
  Author: Madhvi Dua 
  Date:   2021-01-06 (Wed, 06 Jan 2021)

  Changed paths:
M src/main/java/com/amazonaws/codepipeline/jenkinsplugin/AWSCodePipelineSCM.java

  Log Message:
  ---
  Add HKG/AP-EAST-1 as a new region


  Commit: e89079112c37a5ac2e18a3873a1edd686ab858c7
  
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/e89079112c37a5ac2e18a3873a1edd686ab858c7
  Author: Madhvi Dua 
  Date:   2021-01-08 (Fri, 08 Jan 2021)

  Changed paths:
M pom.xml

  Log Message:
  ---
  [maven-release-plugin] prepare release aws-codepipeline-0.43


Compare: 
https://github.com/jenkinsci/aws-codepipeline-plugin/compare/3bc50d92778a...e89079112c37



[jenkinsci/aws-codepipeline-plugin] cd7a68: [maven-release-plugin] prepare for next developmen...

2021-01-08 Thread 'Madhvi Dua' via Jenkins Commits
  Branch: refs/heads/master
  Home:   https://github.com/jenkinsci/aws-codepipeline-plugin
  Commit: cd7a68e47b4b2ba0827d6ea58266e634a3b0627c
  
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/cd7a68e47b4b2ba0827d6ea58266e634a3b0627c
  Author: Madhvi Dua 
  Date:   2021-01-08 (Fri, 08 Jan 2021)

  Changed paths:
M pom.xml

  Log Message:
  ---
  [maven-release-plugin] prepare for next development iteration




[jenkinsci/aws-codepipeline-plugin]

2021-01-08 Thread 'Madhvi Dua' via Jenkins Commits
  Branch: refs/tags/aws-codepipeline-0.43
  Home:   https://github.com/jenkinsci/aws-codepipeline-plugin



Re: [DBWG] ORG-TYPE definitions

2020-10-04 Thread Madhvi Gokool
Hello Frank

The commitment was for staff to provide a definition of the org-types to
the database working group, and I further added that the definitions
should then be made available when we perform a whois query. This way,
they will be publicly available to any person wanting to know what each
org-type means.

Logically, it will happen as follows:

1. The org-type definitions will be provided to the database working group.

2. The whois database will be updated with the definitions so that they
are then visible through the query 'whois -v organisation'.

Regards

Madhvi

On 04/10/2020 9:10 PM, Frank Habicht wrote:
> Hi all,
>
> so, in order to start chipping away at the pending items from
> the meeting:
>
> there was a request (from SM, iirc) for the exact definitions of the
> different 'org-type:' attribute values and for when to use which.
>
> Iirc, Madhvi responded that these can be obtained through whois queries.
>
> I just tried and reached as far as getting an (apparently authoritative)
> list of possible values. (below)
>
> I didn't find any definitions as to which value is to be used for which
> ORG. Can staff (maybe Madhvi) point us in the right direction?
> Also: if it involves the manual (which was noted to have outdated info),
> please confirm how current the referenced info/definitions are.
>
> Thanks,
> Frank
>
>
> [frank@fisi ~]$ whoisaf -- -v organisation
> [Querying whois.afrinic.net]
> [whois.afrinic.net]
> % This is the AfriNIC Whois server.
>
> The organisation class:
>
>   The organisation class provides information identifying
>   an organisation such as a company, charity or university,
>   that is a holder of a network resource whose data is stored
>   in the whois database.
>   Organisation objects are not created automatically, but are forwarded
>   to AfriNIC Database Administration (afrinic-...@rafrinic.net).
>
> organisation:   [mandatory]  [single] [primary/lookup key]
> org-name:   [mandatory]  [single] [lookup key]
> org-type:   [mandatory]  [single] [ ]
> descr:  [optional]   [multiple]   [ ]
> country:[mandatory]  [multiple]   [ ]
> remarks:[optional]   [multiple]   [ ]
> address:[mandatory]  [multiple]   [ ]
> phone:  [optional]   [multiple]   [ ]
> fax-no: [optional]   [multiple]   [ ]
> e-mail: [mandatory]  [multiple]   [lookup key]
> org:[optional]   [multiple]   [inverse key]
> admin-c:[optional]   [multiple]   [inverse key]
> tech-c: [optional]   [multiple]   [inverse key]
> ref-nfy:[optional]   [multiple]   [inverse key]
> mnt-ref:[mandatory]  [multiple]   [inverse key]
> notify: [optional]   [multiple]   [inverse key]
> abuse-mailbox:  [optional]   [multiple]   [inverse key]
> mnt-by: [mandatory]  [multiple]   [inverse key]
> changed:[mandatory]  [multiple]   [ ]
> source: [mandatory]  [single] [ ]
>
> The content of the attributes of the organisation class are defined below:
>
> .
> org-type
>
>Specifies the type of the organisation. The possible values are 'IANA'
>for Internet Assigned Numbers Authority, 'RIR' for Regional Internet
>Registries, 'NIR' for National Internet Registries and 'LIR' for Local
>Internet Registries.
>
>  org-type can have one of these values:
>
>  o 'IANA'
>  o 'RIR'
>  o 'LIR'
>  o 'EU-PI'
>  o 'EU-AS'
>  o 'MEMBER-ONLY'
>  o 'CLOSED'
>  o 'INACTIVE-MEMBER'
>  o 'NON-REGISTRY'
>  o 'OTHER'
>  o 'REGISTERED-MEMBER'
>  o 'ASSOCIATE-MEMBER'
>
>



___
DBWG mailing list
DBWG@afrinic.net
https://lists.afrinic.net/mailman/listinfo/dbwg


Re: [Community-Discuss] Election process of ASO-AC representative

2020-07-27 Thread Madhvi Gokool
Dear Community Members

 

The term of the community-elected AFRINIC ASO-AC representative, Noah
Maina, ends in December 2020.

 

On 16 July 2020, the CEO of AFRINIC requested feedback from the AFRINIC
community regarding how best AFRINIC can proceed with the election to
fill the vacant seat of one community-elected ASO-AC/NRO-NC member,
given that the Africa Internet Summit planned for September 2020 will be
held online.

 

The AFRINIC community responded to the request for feedback with 2
posts from 2 contributors over the period 16-23 July 2020.
Contributors need to be subscribed to the community-discuss mailing list
in order to post their feedback.

 

The archives of these posts are available online at
https://lists.afrinic.net/pipermail/community-discuss/2020-July/date.html

 

There have been two propositions from the AFRINIC community, notably:

 

a) Extend the tenure of the AFRINIC ASO-AC representative and hold
the election at the next face-to-face AFRINIC meeting;

b) Hold the AFRINIC ASO-AC representative election during the Africa
Internet Summit in September 2020.

 

 

AFRINIC would like to thank its community for taking the time and effort
to make the propositions.

 

We will assess the feasibility of these propositions and will come back
with a plan next week. We will keep you informed.

 

 

Kind Regards

Madhvi Gokool

-


Dear Community Members,

The term of the community-elected AFRINIC ASO-AC/NRO-NC representative,
Noah Maina, ends in December 2020.

On 16 July 2020, the CEO of AFRINIC asked the AFRINIC community for
feedback on how best to proceed with this election to fill the vacant
seat of one ASO-AC/NRO-NC representative, in the context of the Africa
Internet Summit 2020 (AIS'20) planned to be held online in September 2020.

The AFRINIC community responded to this request with 2 messages
received over the period 16-23 July 2020, from 2 contributors.
Contributors must be subscribed to the community-discuss mailing list
in order to post their comments.

The archives of these messages are available online at
https://lists.afrinic.net/pipermail/community-discuss/2020-July/date.html

The feedback received can be summarised very broadly as follows:

a) Extend the mandate of the AFRINIC ASO-AC representative and hold an
election at the next face-to-face AFRINIC meeting;

b) Hold the election of the AFRINIC ASO-AC representative during the
Africa Internet Summit 2020 (AIS'20) in September 2020.

AFRINIC would like to thank its community for the time and effort put
into making these propositions.

We will assess the feasibility of these propositions and come back with
a plan next week. We will keep you informed.

Kind regards,

Madhvi Gokool



On 16/07/2020 1:40 PM, Eddy Kayihura wrote:
>
> Dear Community Members,
>
> The term of the community-elected AFRINIC ASO-AC representative, Noah
> Maina, ends in December 2020. The election to fill this vacant seat
> will be held during the Africa Internet Summit 2020 (AIS’20) scheduled
> for 14-18 September 2020. The election process for the ASO-AC/NRO-NC
> (please see https://www.afrinic.net/election-process/aso-nro for more
> details) says that voting shall be anonymous and done by secret paper
> ballot. However, this meeting shall be held online.
>
> We are therefore requesting feedback from the community on how AFRINIC
> can best proceed with the election to fill the vacant seat of one
> community-elected ASO-AC/NRO-NC member. We shall be delighted to
> receive your proposals for this important matter.
>
> Regards,
>
> Eddy
>
>    ...
>
>
> Dear Community Members,
>
> The term of the community-elected AFRINIC ASO-AC representative, Noah
> Maina, ends in December 2020. The election to fill this vacant seat
> will take place during the Africa Internet Summit 2020 (AIS'20)
> scheduled for 14-18 September 2020. The ASO-AC/NRO-NC election process
> (see https://www.afrinic.net/election-process/aso-nro for more details)
> provides that voting shall be anonymous and by secret paper ballot.
> However, the meeting will be held online.
>
> We are therefore asking the community for feedback on how AFRINIC can
> best proceed with the election to fill the vacant seat of one
> community-elected ASO-AC/NRO-NC member. We will be delighted to
> receive your proposals on this important matter.
>
> Regards,
>
> Eddy
>
>  
>
> 
>
>

[FFmpeg-user] header intact

2019-01-03 Thread madhvi sahu


-Original Message-
From: ffmpeg-user [mailto:ffmpeg-user-boun...@ffmpeg.org] On Behalf Of Paul B 
Mahol
Sent: Thursday, January 03, 2019 2:22 PM
To: FFmpeg user questions
Subject: Re: [FFmpeg-user] FFmpeg Configure error, Help need

On 1/3/19, Dinesh Gupta  wrote:
> whatever that means
> I mean I tried after removing --enable-libopus, but I still got the error.
>
> Thank you for highlighting the issue. But I need libopus.
>
> Help me to resolve the issue

Then keep that one and remove the other one?

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Tika boilerpipe extractors

2018-06-27 Thread Arora, Madhvi
Hi All,


Note: I am reposting my question since it looks like the earlier one got posted
on the wrong thread.


We are using Nutch 1.13 and Solr 6. I am trying to use one of the parsers that
come with Tika boilerpipe support. I am getting the best results with
CanolaExtractor for pages that contain only outlinks, like this one:

https://support.automationdirect.com/faq/dl205.php

But checking from the Solr Admin Tool, the parser is unfortunately leaving out
several outlinks in the indexed content. I do not know why CanolaExtractor
would leave out certain outlinks.

If I do not use boilerpipe in Nutch then all the outlinks get indexed. To
disable the Tika extractor I changed the property:


<property>
  <name>tika.extractor</name>
  <value>none</value>
  <description>
  Which text extraction algorithm to use. Valid values are: boilerpipe or none.
  </description>
</property>

Does anyone know why CanolaExtractor cannot extract all the outlinks? Also,
which Tika extractor should be used for the above-mentioned page example?


Any help will be great!

Thanks,

Madhvi
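
For what it's worth, a small standalone sketch (my own, outside Nutch; it
assumes the boilerpipe jar that Tika bundles is on the classpath) can make the
behaviour visible. Boilerpipe's extractors generally discard blocks with high
link density as boilerplate, so on a pure link-index page like the one above
CanolaExtractor keeps far less text than KeepEverythingExtractor, which would
line up with the missing outlinks:

// Fetch the page and compare how much text each boilerpipe extractor keeps.
import java.net.URL;
import java.util.Scanner;

import de.l3s.boilerpipe.extractors.CanolaExtractor;
import de.l3s.boilerpipe.extractors.KeepEverythingExtractor;

public class CompareExtractors {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://support.automationdirect.com/faq/dl205.php");
        String html;
        try (Scanner sc = new Scanner(url.openStream(), "UTF-8")) {
            html = sc.useDelimiter("\\A").next(); // slurp the whole page
        }
        String canola = CanolaExtractor.INSTANCE.getText(html);
        String everything = KeepEverythingExtractor.INSTANCE.getText(html);
        System.out.println("CanolaExtractor kept " + canola.length() + " chars");
        System.out.println("KeepEverythingExtractor kept " + everything.length() + " chars");
    }
}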



Tika boilerpipe extractors

2018-06-27 Thread Arora, Madhvi
Hi All,


We are using Nutch 1.13 and Solr 6. I am trying to use one of the parsers that
come with Tika boilerpipe support. I am getting the best results with
CanolaExtractor for pages that contain only outlinks, like this one:

https://support.automationdirect.com/faq/dl205.php

But checking from the Solr Admin Tool, the parser is unfortunately leaving out
several outlinks in the indexed content. I do not know why CanolaExtractor
would leave out certain outlinks.

If I do not use boilerpipe in Nutch then all the outlinks get indexed. To
disable the Tika extractor I changed the property:


<property>
  <name>tika.extractor</name>
  <value>none</value>
  <description>
  Which text extraction algorithm to use. Valid values are: boilerpipe or none.
  </description>
</property>

Does anyone know why CanolaExtractor cannot extract all the outlinks? Also,
which Tika extractor should be used for the above-mentioned page example?


Any help will be great!

Thanks,

Madhvi


Nutch 1.x and Solr compatible versions

2017-05-02 Thread Arora, Madhvi
Hi,

We currently use Nutch 1.10 and SOLR 4.x. We are in the process of upgrading
both. I wanted to find out if the latest version of Nutch, 1.13, is compatible
with SOLR 6, and whether there is any documentation that I can use for
upgrading Nutch so that it is compatible with SOLR 6.

Thanks in advance,
Madhvi

 



Re: How to train a Named entity detection model

2017-02-27 Thread Madhvi Gupta
Hi Madhav,

My training data is not in the format mentioned in the wiki at [0].

It is in the format generated by following the instructions at this link:
http://www.clips.uantwerpen.be/conll2003/ner/000README

Its format is shown in the trailing mail.
I just want to know how OpenNLP models can be trained using that data and,
if not, how the required format can be generated.

With Regards
Madhvi Gupta
*(Senior Software Engineer)*

On Mon, Feb 27, 2017 at 12:47 PM, Madhav Sharan <msha...@usc.edu> wrote:

> Hi - Can you ensure that your training data is in format like mentioned in
> wiki ? [0]
>
> Like mentioned in wiki training should be something like this-
>
> <START:person> Pierre Vinken <END> , 61 years old , will join the board as a
> nonexecutive director Nov. 29
>
> Here the type of entity is "person" and "Pierre Vinken" is one of the persons in
> the training data.
>
> I was looking at links you shared and your data looks in different format.
> Can you ensure your eng.train is in above format?
>
> I think you can write your own code to read training file and convert it
> into OpenNLP format. Also look at [1] in case you can make use of some pre
> trained model available for OpenNLP
>
> HTH
>
>
>
> [0] https://opennlp.apache.org/documentation/1.7.2/manual/opennl
> p.html#tools.namefind.training
> [1] http://opennlp.sourceforge.net/models-1.5/
>
>
> --
> Madhav Sharan
>
>
> On Sun, Feb 26, 2017 at 9:42 PM, Madhvi Gupta <mgmahi@gmail.com>
> wrote:
>
> > Please let me know if anyone have any idea about this
> >
> > With Regards
> > Madhvi Gupta
> > *(Senior Software Engineer)*
> >
> > On Tue, Feb 21, 2017 at 10:51 AM, Madhvi Gupta <mgmahi@gmail.com>
> > wrote:
> >
> > > Hi Joern,
> > >
> > > Training data generated from reuters dataset is in the following
> format.
> > > It has generated three files eng.train, eng.testa, eng.testb.
> > >
> > > A DT I-NP O
> > > rare JJ I-NP O
> > > early JJ I-NP O
> > > handwritten JJ I-NP O
> > > draft NN I-NP O
> > > of IN I-PP O
> > > a DT I-NP O
> > > song NN I-NP O
> > > by IN I-PP O
> > > U.S. NNP I-NP I-LOC
> > > guitar NN I-NP O
> > > legend NN I-NP O
> > > Jimi NNP I-NP I-PER
> > >
> > > Using this training data file when I ran the command:
> > > ./opennlp TokenNameFinderTrainer -model en-ner-person.bin -lang en
> -data
> > > /home/centos/ner/eng.train -encoding UTF-8
> > >
> > > It is giving me the following error:
> > > ERROR: Not enough training data
> > > The provided training data is not sufficient to create enough events to
> > > train a model.
> > > To resolve this error use more training data, if this doesn't help
> there
> > > might
> > > be some fundamental problem with the training data itself.
> > >
> > > The format required for training opennlp models is in the form of
> > > sentences but training data prepared from reuters dataset is in the
> baove
> > > said format. So please tell me how training data can be generated in
> the
> > > required format or how the existing training data format can be used
> for
> > > generating models.
> > >
> > > With Regards
> > > Madhvi Gupta
> > > *(Senior Software Engineer)*
> > >
> > > On Mon, Feb 20, 2017 at 5:52 PM, Joern Kottmann <kottm...@gmail.com>
> > > wrote:
> > >
> > >> Please explain to us what is not working. Any error messages or
> > >> exceptions?
> > >>
> > >> The name finder by default trains on the default format which you can
> > see
> > >> in the documentation link i shared.
> > >>
> > >> Jörn
> > >>
> > >> On Mon, Feb 20, 2017 at 6:04 AM, Madhvi Gupta <mgmahi@gmail.com>
> > >> wrote:
> > >>
> > >> > Hi Joern,
> > >> >
> > >> > I have got the data from the following link which consist of corpus
> of
> > >> new
> > >> > articles.
> > >> > http://trec.nist.gov/data/reuters/reuters.html
> > >> >
> > >> > Following the steps given in the below link I have created training
> > and
> > >> > test data b

Re: How to train a Named entity detection model

2017-02-26 Thread Madhvi Gupta
Please let me know if anyone has any idea about this.

With Regards
Madhvi Gupta
*(Senior Software Engineer)*

On Tue, Feb 21, 2017 at 10:51 AM, Madhvi Gupta <mgmahi@gmail.com> wrote:

> Hi Joern,
>
> Training data generated from reuters dataset is in the following format.
> It has generated three files eng.train, eng.testa, eng.testb.
>
> A DT I-NP O
> rare JJ I-NP O
> early JJ I-NP O
> handwritten JJ I-NP O
> draft NN I-NP O
> of IN I-PP O
> a DT I-NP O
> song NN I-NP O
> by IN I-PP O
> U.S. NNP I-NP I-LOC
> guitar NN I-NP O
> legend NN I-NP O
> Jimi NNP I-NP I-PER
>
> Using this training data file when I ran the command:
> ./opennlp TokenNameFinderTrainer -model en-ner-person.bin -lang en -data
> /home/centos/ner/eng.train -encoding UTF-8
>
> It is giving me the following error:
> ERROR: Not enough training data
> The provided training data is not sufficient to create enough events to
> train a model.
> To resolve this error use more training data, if this doesn't help there
> might
> be some fundamental problem with the training data itself.
>
> The format required for training OpenNLP models is in the form of
> sentences, but the training data prepared from the Reuters dataset is in the
> above-said format. So please tell me how training data can be generated in the
> required format, or how the existing training data format can be used for
> generating models.
>
> With Regards
> Madhvi Gupta
> *(Senior Software Engineer)*
>
> On Mon, Feb 20, 2017 at 5:52 PM, Joern Kottmann <kottm...@gmail.com>
> wrote:
>
>> Please explain to us what is not working. Any error messages or
>> exceptions?
>>
>> The name finder by default trains on the default format which you can see
>> in the documentation link i shared.
>>
>> Jörn
>>
>> On Mon, Feb 20, 2017 at 6:04 AM, Madhvi Gupta <mgmahi@gmail.com>
>> wrote:
>>
>> > Hi Joern,
>> >
>> > I have got the data from the following link which consist of corpus of
>> new
>> > articles.
>> > http://trec.nist.gov/data/reuters/reuters.html
>> >
>> > Following the steps given in the below link I have created training and
>> > test data but it is not working with the NameFinder of opennlp api.
>> > http://www.clips.uantwerpen.be/conll2003/ner/000README
>> >
>> > So can you please help me how to create training data out of that corpus
>> > and use it to create name entity detection models?
>> >
>> > With Regards
>> > Madhvi Gupta
>> > *(Senior Software Engineer)*
>> >
>> > On Mon, Feb 20, 2017 at 1:00 AM, Joern Kottmann <kottm...@gmail.com>
>> > wrote:
>> >
>> > > Hello,
>> > >
>> > > to train the name finder you need training data that contains the
>> > entities
>> > > you would like to decect.
>> > > Is that the case with the data you have?
>> > >
>> > > Take a look at our documentation:
>> > > https://opennlp.apache.org/documentation/1.7.2/manual/
>> > > opennlp.html#tools.namefind.training
>> > >
>> > > At the beginning of that section you can see how the data has to be
>> > marked
>> > > up.
>> > >
>> > > Please note you that you need many sentences to train the name finder.
>> > >
>> > > HTH,
>> > > Jörn
>> > >
>> > >
>> > > On Sat, Feb 18, 2017 at 11:28 AM, Madhvi Gupta <mgmahi@gmail.com>
>> > > wrote:
>> > >
>> > > > Hi All,
>> > > >
>> > > > I have got reuters data from NIST. Now I want to generate the
>> training
>> > > data
>> > > > from that to create a model for detecting named entities. Can anyone
>> > tell
>> > > > me how the models can be generated from that.
>> > > >
>> > > > --
>> > > > With Regards
>> > > > Madhvi Gupta
>> > > > *(Senior Software Engineer)*
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> >
>>
>
>


Re: How to train a Named entity detection model

2017-02-20 Thread Madhvi Gupta
Hi Joern,

Training data generated from reuters dataset is in the following format.
It has generated three files eng.train, eng.testa, eng.testb.

A DT I-NP O
rare JJ I-NP O
early JJ I-NP O
handwritten JJ I-NP O
draft NN I-NP O
of IN I-PP O
a DT I-NP O
song NN I-NP O
by IN I-PP O
U.S. NNP I-NP I-LOC
guitar NN I-NP O
legend NN I-NP O
Jimi NNP I-NP I-PER

Using this training data file when I ran the command:
./opennlp TokenNameFinderTrainer -model en-ner-person.bin -lang en -data
/home/centos/ner/eng.train -encoding UTF-8

It is giving me the following error:
ERROR: Not enough training data
The provided training data is not sufficient to create enough events to
train a model.
To resolve this error use more training data, if this doesn't help there
might
be some fundamental problem with the training data itself.

The format required for training OpenNLP models is in the form of sentences,
but the training data prepared from the Reuters dataset is in the above-said
format. So please tell me how training data can be generated in the
required format, or how the existing training data format can be used for
generating models.

With Regards
Madhvi Gupta
*(Senior Software Engineer)*
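
Since TokenNameFinderTrainer expects whole sentences with <START:person> ...
<END> markers (one sentence per line) rather than the one-token-per-line CoNLL
layout of eng.train, a rough conversion sketch of my own (file paths come from
the command line, and only the PER tags are wrapped; extend the check for
LOC/ORG/MISC as needed) could look like this:

// Rough CoNLL-2003 ("token POS chunk NE-tag") to OpenNLP name-finder format
// converter: adjacent PER entities are merged, which is acceptable for a sketch.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class ConllToOpenNlp {
    public static void main(String[] args) throws IOException {
        List<String> out = new ArrayList<>();
        StringBuilder sentence = new StringBuilder();
        boolean inPerson = false;
        for (String line : Files.readAllLines(Paths.get(args[0]), StandardCharsets.UTF_8)) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("-DOCSTART-")) { // sentence boundary
                if (inPerson) { sentence.append(" <END>"); inPerson = false; }
                if (sentence.length() > 0) { out.add(sentence.toString().trim()); sentence.setLength(0); }
                continue;
            }
            String[] cols = line.split("\\s+");
            String token = cols[0];
            String ne = cols[cols.length - 1];        // last column is the NE tag
            boolean person = ne.endsWith("PER");      // covers I-PER and B-PER
            if (person && !inPerson) { sentence.append(" <START:person>"); inPerson = true; }
            if (!person && inPerson) { sentence.append(" <END>"); inPerson = false; }
            sentence.append(' ').append(token);
        }
        if (inPerson) sentence.append(" <END>");
        if (sentence.length() > 0) out.add(sentence.toString().trim());
        Files.write(Paths.get(args[1]), out, StandardCharsets.UTF_8);
    }
}

The resulting file can then be fed to the same TokenNameFinderTrainer command
shown above in place of eng.train.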

On Mon, Feb 20, 2017 at 5:52 PM, Joern Kottmann <kottm...@gmail.com> wrote:

> Please explain to us what is not working. Any error messages or exceptions?
>
> The name finder by default trains on the default format which you can see
> in the documentation link i shared.
>
> Jörn
>
> On Mon, Feb 20, 2017 at 6:04 AM, Madhvi Gupta <mgmahi@gmail.com>
> wrote:
>
> > Hi Joern,
> >
> > I have got the data from the following link which consist of corpus of
> new
> > articles.
> > http://trec.nist.gov/data/reuters/reuters.html
> >
> > Following the steps given in the below link I have created training and
> > test data but it is not working with the NameFinder of opennlp api.
> > http://www.clips.uantwerpen.be/conll2003/ner/000README
> >
> > So can you please help me how to create training data out of that corpus
> > and use it to create name entity detection models?
> >
> > With Regards
> > Madhvi Gupta
> > *(Senior Software Engineer)*
> >
> > On Mon, Feb 20, 2017 at 1:00 AM, Joern Kottmann <kottm...@gmail.com>
> > wrote:
> >
> > > Hello,
> > >
> > > to train the name finder you need training data that contains the
> > entities
> > > you would like to decect.
> > > Is that the case with the data you have?
> > >
> > > Take a look at our documentation:
> > > https://opennlp.apache.org/documentation/1.7.2/manual/
> > > opennlp.html#tools.namefind.training
> > >
> > > At the beginning of that section you can see how the data has to be
> > marked
> > > up.
> > >
> > > Please note you that you need many sentences to train the name finder.
> > >
> > > HTH,
> > > Jörn
> > >
> > >
> > > On Sat, Feb 18, 2017 at 11:28 AM, Madhvi Gupta <mgmahi@gmail.com>
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > I have got reuters data from NIST. Now I want to generate the
> training
> > > data
> > > > from that to create a model for detecting named entities. Can anyone
> > tell
> > > > me how the models can be generated from that.
> > > >
> > > > --
> > > > With Regards
> > > > Madhvi Gupta
> > > > *(Senior Software Engineer)*
> > > >
> > >
> >
> >
> >
> > --
> >
>


Re: How to train a Named entity detection model

2017-02-19 Thread Madhvi Gupta
Hi Joern,

I have got the data from the following link, which consists of a corpus of
news articles.
http://trec.nist.gov/data/reuters/reuters.html

Following the steps given in the link below, I have created training and
test data, but it is not working with the NameFinder of the OpenNLP API.
http://www.clips.uantwerpen.be/conll2003/ner/000README

So can you please help me with how to create training data out of that corpus
and use it to create named entity detection models?

With Regards
Madhvi Gupta
*(Senior Software Engineer)*

On Mon, Feb 20, 2017 at 1:00 AM, Joern Kottmann <kottm...@gmail.com> wrote:

> Hello,
>
> to train the name finder you need training data that contains the entities
> you would like to detect.
> Is that the case with the data you have?
>
> Take a look at our documentation:
> https://opennlp.apache.org/documentation/1.7.2/manual/
> opennlp.html#tools.namefind.training
>
> At the beginning of that section you can see how the data has to be marked
> up.
>
> Please note that you need many sentences to train the name finder.
>
> HTH,
> Jörn
>
>
> On Sat, Feb 18, 2017 at 11:28 AM, Madhvi Gupta <mgmahi@gmail.com>
> wrote:
>
> > Hi All,
> >
> > I have got reuters data from NIST. Now I want to generate the training
> data
> > from that to create a model for detecting named entities. Can anyone tell
> > me how the models can be generated from that.
> >
> > --
> > With Regards
> > Madhvi Gupta
> > *(Senior Software Engineer)*
> >
>



--


How to train a Named entity detection model

2017-02-18 Thread Madhvi Gupta
Hi All,

I have got the Reuters data from NIST. Now I want to generate training data
from it to create a model for detecting named entities. Can anyone tell
me how the models can be generated from it?

-- 
With Regards
Madhvi Gupta
*(Senior Software Engineer)*


Re: Upgrade to Nutch 1.12

2016-08-19 Thread Arora, Madhvi
Thank you so much Lewis. 




On 8/19/16, 4:53 AM, "lewis john mcgibbney" <lewi...@apache.org> wrote:

>Evening Madhvi,
>I will set this up and debug a clean. I'll report over on
>https://issues.apache.org/jira/browse/NUTCH-2269
>
>Thank you for reporting.
>Lewis
>
>On Thu, Aug 18, 2016 at 7:08 AM, <user-digest-h...@nutch.apache.org> wrote:
>
>>
>> From: "Arora, Madhvi" <mar...@automationdirect.com>
>> To: "user@nutch.apache.org" <user@nutch.apache.org>
>> Cc:
>> Date: Wed, 17 Aug 2016 13:30:09 +
>> Subject: Upgrade to Nutch 1.12
>> Hi,
>>
>>
>> I wanted to find out how to correct the issue below and will appreciate
>> any help.
>>
>>
>>  I am trying to upgrade to Nutch 1.12. I am using solr 5.3.1. The reason I
>> am upgrading are:
>> 1: https crawling
>> 2: Boilerplate canola extraction through tika
>>
>> The only problem so far I am having is an IOException. Please see below. I
>> searched and there is an existing jira issue
>> NUTCH-2269 <https://issues.apache.org/jira/browse/NUTCH-2269>
>>
>> [NUTCH-2269] Clean not working after crawl - ASF JIRA<
>> https://issues.apache.org/jira/browse/NUTCH-2269>
>> issues.apache.org
>> It seems like the database on Lucene can only be called crawldb. However a
>> couple of bundled version we can find online use linkdb for Lucene as
>> default
>>
>>
>>
>>
>> I get the same error if I try to clean via the old command:
>> bin/nutch solrclean crawl-adc/crawldb http://localhost:8983/solr/nutch
>>
>> But cleaning through linkdb worked as said in the jira issue i.e.
>> bin/nutch solrclean crawl-adc/linkdb http://localhost:8983/solr/nutch
>>
>>
>> Just want to know if there is a fix or an alternate way of cleaning and if
>> cleaning via linkdb might be okay or what are the repercussions of cleaning
>> via linkdb.
>>
>>
>> Exception from logs:
>> java.lang.Exception: java.lang.IllegalStateException: Connection pool
>> shut down
>> at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(
>> LocalJobRunner.java:462)
>> at org.apache.hadoop.mapred.LocalJobRunner$Job.run(
>> LocalJobRunner.java:529


Upgrade to Nutch 1.12

2016-08-17 Thread Arora, Madhvi
Hi,


I wanted to find out how to correct the issue below and will appreciate any 
help.


I am trying to upgrade to Nutch 1.12. I am using Solr 5.3.1. The reasons I am
upgrading are:
1: https crawling
2: Boilerplate canola extraction through tika

The only problem so far I am having is an IOException. Please see below. I 
searched and there is an existing jira issue
NUTCH-2269 

[NUTCH-2269] Clean not working after crawl - ASF 
JIRA
issues.apache.org
It seems like the database on Lucene can only be called crawldb. However a 
couple of bundled version we can find online use linkdb for Lucene as default




I get the same error if I try to clean via the old command:
bin/nutch solrclean crawl-adc/crawldb http://localhost:8983/solr/nutch

But cleaning through linkdb worked as said in the jira issue i.e.
bin/nutch solrclean crawl-adc/linkdb http://localhost:8983/solr/nutch


Just want to know if there is a fix or an alternate way of cleaning and if 
cleaning via linkdb might be okay or what are the repercussions of cleaning via 
linkdb.


Exception from logs:
java.lang.Exception: java.lang.IllegalStateException: Connection pool shut down
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.IllegalStateException: Connection pool shut down
at org.apache.http.util.Asserts.check(Asserts.java:34)
at 
org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:169)
at 
org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:202)
at 
org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:184)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.nutch.indexwriter.solr.SolrIndexWriter.commit(SolrIndexWriter.java:190)
at 
org.apache.nutch.indexwriter.solr.SolrIndexWriter.close(SolrIndexWriter.java:178)
at org.apache.nutch.indexer.IndexWriters.close(IndexWriters.java:115)
at 
org.apache.nutch.indexer.CleaningJob$DeleterReducer.close(CleaningJob.java:120)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:237)
at 
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-08-16 15:27:47,794 ERROR indexer.CleaningJob - CleaningJob: 
java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)



Re: Protocol change to https

2016-08-16 Thread Arora, Madhvi

As per suggestion below, I am trying to upgrade to Nutch 1.12. I am using solr 
5.3.1. Crawling went very well with respect to:

1: https crawling
2: Boilerplate canola extraction through tika

The only problem so far I am having is an IOException. Please see below. I 
searched and there is an existing jira issue 
NUTCH-2269 <https://issues.apache.org/jira/browse/NUTCH-2269>

I get the same error if I try to clean via the old command:
bin/nutch solrclean crawl-adc/crawldb http://localhost:8983/solr/nutch

But cleaning through linkdb worked as said in the jira issue i.e. 
bin/nutch solrclean crawl-adc/linkdb http://localhost:8983/solr/nutch


Just want to know if there is a fix or an alternate way of cleaning and if 
cleaning via linkdb might be okay or what are the repercussions of cleaning via 
linkdb.


Exception from logs:
java.lang.Exception: java.lang.IllegalStateException: Connection pool shut down
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.IllegalStateException: Connection pool shut down
at org.apache.http.util.Asserts.check(Asserts.java:34)
at 
org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:169)
at 
org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:202)
at 
org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:184)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.nutch.indexwriter.solr.SolrIndexWriter.commit(SolrIndexWriter.java:190)
at 
org.apache.nutch.indexwriter.solr.SolrIndexWriter.close(SolrIndexWriter.java:178)
at org.apache.nutch.indexer.IndexWriters.close(IndexWriters.java:115)
at 
org.apache.nutch.indexer.CleaningJob$DeleterReducer.close(CleaningJob.java:120)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:237)
at 
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-08-16 15:27:47,794 ERROR indexer.CleaningJob - CleaningJob: 
java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)

By the way, I am replying on the same thread; maybe I should have started a new
one, but this was kind of related to what I need.




On 8/5/16, 2:18 PM, "Arora, Madhvi" <mar...@automationdirect.com> wrote:

>Thank you very much!
>
>
>
>
>On 8/5/16, 2:13 PM, "Markus Jelsma" <markus.jel...@openindex.io> wrote:
>
>>I am not sure which version is was added, you'd have to check CHANGES.txt, 
>>but upgrading is usually a good idea and very simple.
>>Markus
>>
>> 
>> 
>>-Original message-
>>> From:Arora, Madhvi <mar...@automationdirect.com>
>>> Sent: Friday 5th August 2016 19:53
>>> To: user@nutch.apache.org
>>> Subject: Re: Protocol change to https
>>> 
>>> Markus so to crawl https and http urls successfully we just need to switch 
>>> to a newer version of Nutch I.e. Higher than Nutch 1.10? 
>>> 
>>> 
>>> 
>>> On 8/5/16, 12:47 PM, "Markus Jelsma" <markus.jel...@openindex.io> wrote:
>>> 
>>> >Hello - see inline.
>>> >Markus 
>>> 

Re: [AfriTelco-WG] AfriTelco-WG Digest, Vol 1, Issue 1

2016-08-15 Thread Madhvi
Dear Joel

Please let us know off-list as to who these Telcos are.

Regards

Madhvi


On 15/08/2016 9:38 AM, Joel Gogwim wrote:
>
> It appears that some of the African telcos acquired IP resources not
> from AfriNIC and assigned them to their African customers. This
> implies that the customers' traffic will count as non-African
> traffic. There should be a policy to encourage such telcos to acquire
> resources from AfriNIC.
>
> Cheers,
> Joel Gogwim
> University of Jos, Nigeria
>
>

-- 
Mrs Madhvi Gokool
Registration Services Manager
AFRINIC Ltd.
t: +230 403 51 00 | f: +230 466 6758 | tt: @afrinic |
w: www.afrinic.net | facebook.com/afrinic | flickr.com/afrinic |
youtube.com/afrinicmedia
___
Join us for the AFRINIC-25 meeting in Mauritius, 25 to 30 November 2016 


___
AfriTelco-WG mailing list
AfriTelco-WG@afrinic.net
https://lists.afrinic.net/mailman/listinfo/afritelco-wg


Re: Protocol change to https

2016-08-05 Thread Arora, Madhvi
Thank you very much!




On 8/5/16, 2:13 PM, "Markus Jelsma" <markus.jel...@openindex.io> wrote:

>I am not sure which version is was added, you'd have to check CHANGES.txt, but 
>upgrading is usually a good idea and very simple.
>Markus
>
> 
> 
>-Original message-
>> From:Arora, Madhvi <mar...@automationdirect.com>
>> Sent: Friday 5th August 2016 19:53
>> To: user@nutch.apache.org
>> Subject: Re: Protocol change to https
>> 
>> Markus so to crawl https and http urls successfully we just need to switch 
>> to a newer version of Nutch I.e. Higher than Nutch 1.10? 
>> 
>> 
>> 
>> On 8/5/16, 12:47 PM, "Markus Jelsma" <markus.jel...@openindex.io> wrote:
>> 
>> >Hello - see inline.
>> >Markus 
>> > 
>> >-Original message-
>> >> From:Arora, Madhvi <mar...@automationdirect.com>
>> >> Sent: Friday 5th August 2016 18:03
>> >> To: user@nutch.apache.org
>> >> Subject: Protocol change to https
>> >> 
>> >> Hi,
>> >> 
>> >> We are using Nutch 1.10 and Solr 5. We have around 10 different web sites 
>> >> that are crawled regularly. We are changing  protocol of a few websites 
>> >> from http to https. So we will have a mix bag of http and https protocols.
>> >> I checked in nutch user-mail archive and get that we need to change 
>> >> protocol-http to protocol-httpclient.
>> >> 1: I wanted to find out the best way to handle this
>> >
>> >You can still use protocol-http, in some recent version we added TLS 
>> >support to it.
>> >
>> >> 2: What are the issues with using protocol-httpclient i.e. there were 
>> >> previous references to issues with use of protocol-httpclient.
>> >
>> >It does not allow unencoded URL's, but in recent Nutch' we improved basic 
>> >normalizer to fix it for you.
>> >
>> >> 3: Steps that need to be taken to update the SOLR index. I think that I 
>> >> will need to delete the old http urls from solr index, re-crawl and index 
>> >>  the urls that need to be switched to https.
>> >
>> >Yes, just delete and recrawl and reindex everything. And consider upgrading 
>> >to 1.12.
>> >
>> >> 
>> >> I will be grateful for any guidance or suggestions.
>> >> 
>> >> Thanks,
>> >> Madhvi
>> >> 
>> >> 
>> 


Re: Protocol change to https

2016-08-05 Thread Arora, Madhvi
Markus, so to crawl https and http URLs successfully we just need to switch to a
newer version of Nutch, i.e. higher than Nutch 1.10?



On 8/5/16, 12:47 PM, "Markus Jelsma" <markus.jel...@openindex.io> wrote:

>Hello - see inline.
>Markus 
> 
>-Original message-
>> From:Arora, Madhvi <mar...@automationdirect.com>
>> Sent: Friday 5th August 2016 18:03
>> To: user@nutch.apache.org
>> Subject: Protocol change to https
>> 
>> Hi,
>> 
>> We are using Nutch 1.10 and Solr 5. We have around 10 different web sites 
>> that are crawled regularly. We are changing  protocol of a few websites from 
>> http to https. So we will have a mix bag of http and https protocols.
>> I checked in nutch user-mail archive and get that we need to change 
>> protocol-http to protocol-httpclient.
>> 1: I wanted to find out the best way to handle this
>
>You can still use protocol-http, in some recent version we added TLS support 
>to it.
>
>> 2: What are the issues with using protocol-httpclient i.e. there were 
>> previous references to issues with use of protocol-httpclient.
>
>It does not allow unencoded URL's, but in recent Nutch' we improved basic 
>normalizer to fix it for you.
>
>> 3: Steps that need to be taken to update the SOLR index. I think that I will 
>> need to delete the old http urls from solr index, re-crawl and index  the 
>> urls that need to be switched to https.
>
>Yes, just delete and recrawl and reindex everything. And consider upgrading to 
>1.12.
>
>> 
>> I will be grateful for any guidance or suggestions.
>> 
>> Thanks,
>> Madhvi
>> 
>> 


Protocol change to https

2016-08-05 Thread Arora, Madhvi
Hi,

We are using Nutch 1.10 and Solr 5. We have around 10 different websites that
are crawled regularly. We are changing the protocol of a few websites from http to
https, so we will have a mixed bag of http and https protocols.
I checked the nutch user-mail archive and gather that we need to change
protocol-http to protocol-httpclient.
1: I wanted to find out the best way to handle this.
2: What are the issues with using protocol-httpclient? There were previous
references to issues with the use of protocol-httpclient.
3: What steps need to be taken to update the SOLR index? I think that I will
need to delete the old http URLs from the Solr index, then re-crawl and index the
URLs that need to be switched to https.

I will be grateful for any guidance or suggestions.

Thanks,
Madhvi



[jira] [Commented] (SPARK-10828) Can we use the accumulo data RDD created from JAVA in spark, in sparkR?Is there any other way to proceed with it to create RRDD from a source RDD other than text RDD?O

2015-09-25 Thread madhvi gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-10828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907725#comment-14907725
 ] 

madhvi gupta commented on SPARK-10828:
--

Hey, I have posted this question on the Spark mailing list also. There I was told
to post this as an issue here to have a better discussion.
See https://www.mail-archive.com/user@spark.apache.org/msg37450.html


> Can we use the accumulo data RDD created from JAVA in spark, in sparkR?Is 
> there any other way to proceed with it to create RRDD from a source RDD other 
> than text RDD?Or to use any other format of data stored in HDFS in sparkR?
> --
>
> Key: SPARK-10828
> URL: https://issues.apache.org/jira/browse/SPARK-10828
> Project: Spark
>  Issue Type: Question
>  Components: R
>Affects Versions: 1.5.0
> Environment: ubuntu 12.04,8GB RAM,accumulo 1.6.3,hadoop 2.6
>Reporter: madhvi gupta
>







[jira] [Created] (SPARK-10828) Can we use the accumulo data RDD created from JAVA in spark, in sparkR?Is there any other way to proceed with it to create RRDD from a source RDD other than text RDD?Or

2015-09-24 Thread madhvi gupta (JIRA)
madhvi gupta created SPARK-10828:


 Summary: Can we use the accumulo data RDD created from JAVA in 
spark, in sparkR?Is there any other way to proceed with it to create RRDD from 
a source RDD other than text RDD?Or to use any other format of data stored in 
HDFS in sparkR?
 Key: SPARK-10828
 URL: https://issues.apache.org/jira/browse/SPARK-10828
 Project: Spark
  Issue Type: Question
  Components: R
Affects Versions: 1.5.0
 Environment: ubuntu 12.04,8GB RAM,accumulo 1.6.3,hadoop 2.6
Reporter: madhvi gupta









Re: Abnormal behaviour of custom iterator in getting entries

2015-06-23 Thread madhvi

Thanks Josh. It really worked for me.


On Wednesday 17 June 2015 08:43 PM, Josh Elser wrote:

Madhvi,

Understood. A few more questions..

How are you passing these IDs to the batch scanner? Are you providing 
individual Ranges for each ID (e.g. `new Range(new Key(row1, , 
id1), true, new Key(row1, , id1\x00), false))`)? Or are you 
providing an entire row (or set of rows) and using the 
fetchColumns(Text,Text) method (or similar) on the BatchScanner?


Are you trying to sum across all rows that you queried? Or is your sum 
per-row? If the former, that is going to cause you problems. The quick 
explanation is that you can't reliably know the tablet boundaries so 
you should try to perform an initial sum, per row. If you want, you 
can put a second iterator above the first and do a summation across 
all rows to reduce the amount of data sent to a client. However, if 
you use a BatchScanner, you will still have to perform a final 
summation at the client.


Check out 
https://blogs.apache.org/accumulo/entry/thinking_about_reads_over_accumulo 
for more details on that..


madhvi wrote:

Hi Josh,

Sorry, my company policy doesn't allow me to share the full source. What we
are trying to do is summing over a unique field stored in the column
qualifier for the IDs passed to the batch scanner. Can you suggest how it can be
done in Accumulo?

Thanks
Madhvi
On Wednesday 17 June 2015 10:32 AM, Josh Elser wrote:

You put random values in the family and qualifier? Do I misunderstand
you?

Also, if you can put up the full source for the iterator, that will be
much easier if you need help debugging it. It's hard for us to guess
at why your code might not be working as you expect.

madhvi wrote:

Hi Josh,

I have changed the HashMap to a TreeMap, which sorts lexicographically, and I
have inserted random values in the column family and qualifier, with the value
of the TreeMap in the Value.
I used both a scanner and a batch scanner but am getting results only with the scanner.

Thanks
Madhvi

On Tuesday 16 June 2015 08:42 PM, Josh Elser wrote:

Additionally, you're placing the Value into the ColumnQualifier and
dropping the ColumnFamily completely. Granted, that may not be a
problem for the specific data in your table, but it's not going to
work for any data.

Christopher wrote:

You're iterating over a HashMap. That's not sorted.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii


On Tue, Jun 16, 2015 at 1:58 AM, madhvimadhvi.gu...@orkash.com
wrote:

Hi Josh,
Thanks for replying. I will enable remote debugger on my Accumulo
server.

However, I am slightly confused by your statement that I am not returning
my data in sorted order. Can you point out the part of my iterator code which
seems inappropriate, and suggest any possible solution for that?

Thanks
Madhvi


On Tuesday 16 June 2015 11:07 AM, Josh Elser wrote:

//matched the condition and put values to holder map.










Re: Abnormal behaviour of custom iterator in getting entries

2015-06-17 Thread madhvi

Hi,

Thanks for the blog you shared. I found it quite useful for my requirement.
How are you passing these IDs to the batch scanner?
I am passing row ids, received as a previous query result from another
table, as 'new Range(entry.getKey().getRow())' in a list of Ranges, and
passing that list to the batch scanner.


Are you trying to sum across all rows that you queried? 
Yes, we need to sum a particular column qualifier across the row ids
passed to the batch scanner. How can the summation be done across the rows when,
as you said, a second iterator is put above the first?


Thanks
Madhvi
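
A minimal sketch of the pattern described above: one Range per row id from the previous query, handed to a BatchScanner. The table name ("datatable"), the column family/qualifier ("fam"/"qual"), the authorizations, and the thread count are placeholders, not values from the original post.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.accumulo.core.client.BatchScanner;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.io.Text;

public class RangesFromPreviousQuery {
    public static BatchScanner scannerForIds(Connector connector, Scanner previousResult)
            throws Exception {
        // one Range per row id returned by the previous query
        List<Range> ranges = new ArrayList<Range>();
        for (Map.Entry<Key, Value> entry : previousResult) {
            ranges.add(new Range(entry.getKey().getRow()));
        }
        BatchScanner bs = connector.createBatchScanner("datatable", Authorizations.EMPTY, 4);
        bs.setRanges(ranges);
        // only fetch the family/qualifier whose values are being summed (placeholder names)
        bs.fetchColumn(new Text("fam"), new Text("qual"));
        return bs;
    }
}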
On Wednesday 17 June 2015 08:43 PM, Josh Elser wrote:

Madhvi,

Understood. A few more questions..

How are you passing these IDs to the batch scanner? Are you providing 
individual Ranges for each ID (e.g. `new Range(new Key(row1, , 
id1), true, new Key(row1, , id1\x00), false))`)? Or are you 
providing an entire row (or set of rows) and using the 
fetchColumns(Text,Text) method (or similar) on the BatchScanner?


Are you trying to sum across all rows that you queried? Or is your sum 
per-row? If the former, that is going to cause you problems. The quick 
explanation is that you can't reliably know the tablet boundaries so 
you should try to perform an initial sum, per row. If you want, you 
can put a second iterator above the first and do a summation across 
all rows to reduce the amount of data sent to a client. However, if 
you use a BatchScanner, you will still have to perform a final 
summation at the client.


Check out 
https://blogs.apache.org/accumulo/entry/thinking_about_reads_over_accumulo 
for more details on that..


madhvi wrote:

Hi Josh,

Sorry, my company policy doesn't allow me to share the full source. What we
are trying to do is sum over a unique field stored in the column
qualifier for the IDs passed to the batch scanner. Can you suggest how it can be
done in Accumulo?

Thanks
Madhvi
On Wednesday 17 June 2015 10:32 AM, Josh Elser wrote:

You put random values in the family and qualifier? Do I misunderstand
you?

Also, if you can put up the full source for the iterator, that will be
much easier if you need help debugging it. It's hard for us to guess
at why your code might not be working as you expect.

madhvi wrote:

Hi Josh,

I have changed the HashMap to a TreeMap, which sorts lexicographically, and I
have inserted random values in the column family and qualifier; the value of
the TreeMap goes in the Value.
I used both a scanner and a batch scanner but am getting results only with the scanner.

Thanks
Madhvi

On Tuesday 16 June 2015 08:42 PM, Josh Elser wrote:

Additionally, you're placing the Value into the ColumnQualifier and
dropping the ColumnFamily completely. Granted, that may not be a
problem for the specific data in your table, but it's not going to
work for any data.

Christopher wrote:

You're iterating over a HashMap. That's not sorted.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii


On Tue, Jun 16, 2015 at 1:58 AM, madhvimadhvi.gu...@orkash.com
wrote:

Hi Josh,
Thanks for replying. I will enable remote debugger on my Accumulo
server.

However, I am slightly confused by your statement that I am not
returning my data in sorted order. Can you point out the part of my iterator
code which seems inappropriate, and suggest a possible solution for that?

Thanks
Madhvi


On Tuesday 16 June 2015 11:07 AM, Josh Elser wrote:

//matched the condition and put values to holder map.










Re: Abnormal behaviour of custom iterator in getting entries

2015-06-17 Thread madhvi

Hi Josh,

Sorry, my company policy doesn't allow me to share the full source. What we
are trying to do is sum over a unique field stored in the column
qualifier for the IDs passed to the batch scanner. Can you suggest how it can be
done in Accumulo?


Thanks
Madhvi
On Wednesday 17 June 2015 10:32 AM, Josh Elser wrote:
You put random values in the family and qualifier? Do I misunderstand 
you?


Also, if you can put up the full source for the iterator, that will be 
much easier if you need help debugging it. It's hard for us to guess 
at why your code might not be working as you expect.


madhvi wrote:

Hi Josh,

I have changed the HashMap to a TreeMap, which sorts lexicographically, and I
have inserted random values in the column family and qualifier; the value of
the TreeMap goes in the Value.
I used both a scanner and a batch scanner but am getting results only with the scanner.

Thanks
Madhvi

On Tuesday 16 June 2015 08:42 PM, Josh Elser wrote:

Additionally, you're placing the Value into the ColumnQualifier and
dropping the ColumnFamily completely. Granted, that may not be a
problem for the specific data in your table, but it's not going to
work for any data.

Christopher wrote:

You're iterating over a HashMap. That's not sorted.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii


On Tue, Jun 16, 2015 at 1:58 AM, madhvimadhvi.gu...@orkash.com 
wrote:

Hi Josh,
Thanks for replying. I will enable remote debugger on my Accumulo
server.

However, I am slightly confused by your statement that I am not
returning my data in sorted order. Can you point out the part of my iterator
code which seems inappropriate, and suggest a possible solution for that?

Thanks
Madhvi


On Tuesday 16 June 2015 11:07 AM, Josh Elser wrote:

//matched the condition and put values to holder map.








Re: Abnormal behaviour of custom iterator in getting entries

2015-06-15 Thread madhvi

Thanks Josh.

Outline of my code is:

public class TestIterator extends WrappingIterator {

    HashMap<String, Integer> holder = new HashMap<String, Integer>();
    private Iterator<Map.Entry<String, Integer>> entries = null;
    private Map.Entry<String, Integer> entry = null;
    private Key emitKey;
    private Value emitValue;

    @Override
    public void seek(Range range, Collection<ByteSequence> columnFamilies,
            boolean inclusive) throws IOException {
        super.seek(range, columnFamilies, inclusive);
        myFunction();
    }

    private void myFunction() throws IOException {
        while (super.hasTop()) {
            // matched the condition and put values into the holder map.
        }
        entries = holder.entrySet().iterator(); // iterate the holder map.
    }

    @Override
    public Key getTopKey() {
        return emitKey;
    }

    @Override
    public Value getTopValue() {
        return emitValue;
    }

    @Override
    public boolean hasTop() {
        return entries.hasNext();
    }

    @Override
    public void next() throws IOException {
        try {
            entry = entries.next();
            // put the keys of the map into the row id and the values of the map
            // into the column qualifier through emitKey
            emitKey = new Key(new Text(entry.getKey()), new Text(),
                    new Text(String.valueOf(entry.getValue())));
            // return "1" in emitValue.
            emitValue = new Value("1".getBytes());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

This code returns results while using a scanner, but not in the case of a
batch scanner.

And how do I enable a remote debugger in Accumulo?

Thanks
Madhvi
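
One way to picture the re-seek handling Josh describes in the reply quoted below, as a sketch of the seek() method from the outline above. holder and myFunction() are the field/method from that outline; this only illustrates the principle of rebuilding aggregation state on every seek, it is not a complete fix for the sorting issue.

@Override
public void seek(Range range, Collection<ByteSequence> columnFamilies,
        boolean inclusive) throws IOException {
    // The tablet server may construct a fresh copy of this iterator and seek it
    // with a narrower range, e.g. (lastKeyAlreadyReturnedToClient, end], when a
    // batch fills up. So never assume the full original range: clear any state
    // left over from a previous seek and re-aggregate only what the source
    // exposes for this range.
    super.seek(range, columnFamilies, inclusive);
    holder.clear();
    myFunction();
}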

On Monday 15 June 2015 09:21 PM, Josh Elser wrote:
It's hard to remotely debug an iterator, especially when we don't know 
what it's doing. If you can post the code, that would help 
tremendously. Instead of dumping values to a text file, you may fare 
better by attaching a remote debugger to the TabletServer and setting 
a breakpoint on your SKVI.


The only thing I can say is that a Scanner and BatchScanner should 
return the same data, but the invocations in the server to fetch that 
data are performed differently. It's likely that due to the 
differences in the implementations, you uncovered a bug in your iterator.


One common pitfall is incorrectly handling something we refer to as a 
re-seek. Hypothetically, take a query scanning over [0, 9], and we 
have one key per number in the range (10 keys).


As the name implies, the BatchScanner fetches batches from a server, 
and suppose that after 3 keys, the server-side buffer fills up. Thus, 
the client will get keys [0,2]. In the server, the next time you fetch 
a batch, a new instance of the iterator will be constructed (via 
deepCopy()). Seek() will then be called, but with a new range that 
represents the previous data that was already returned. Thus, your 
iterator would be seeked with (2,9] instead of [0,9] again.


I can't say whether or not you're actually hitting this case, but it's 
a common pitfall that affects devs.


madhvi wrote:

@josh
If seek had been called after hasTop and getTopKey, then it should also appear
in the call hierarchy, because I have written all of the function call hierarchy
to a file. So is the problem that I have called myFunction() in seek?
After seek, getTopKey and getTopValue and then hasTop and next should be
called, but what is happening is that sometimes getTopValue is called and
sometimes not. This happens when I am reading entries through the batch scanner;
the getTopValue function is called while scanning through the scanner. Applying
the same iterator with a scanner and a batch scanner, the scanner returns
entries, but no entries are returned while using the
batch scanner.


So can you please explain.




Re: Change column family

2015-05-27 Thread madhvi

Hi All,

If anyone has worked with the transforming iterator, can you tell me whether the
iterator makes the transformed changes in the Accumulo table as well, or only
returns the transformed result at scan time? Can you provide details on how to
implement its abstract methods, their use, and the workflow of the iterator?


Thanks
Madhvi
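
A minimal sketch of how the abstract methods might be implemented, assuming Accumulo 1.6's TransformingIterator (whose abstract methods are getKeyPrefix() and transformRange(); see the javadoc links quoted below) and a placeholder target family name. As with any iterator, the transformation applies to data as it is read at scan time; the table itself is only rewritten if the iterator is applied during compaction.

import java.io.IOException;

import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.PartialKey;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
import org.apache.accumulo.core.iterators.user.TransformingIterator;
import org.apache.hadoop.io.Text;

public class ChangeFamilyIterator extends TransformingIterator {

    private static final Text NEW_FAMILY = new Text("newfam"); // placeholder family name

    @Override
    protected PartialKey getKeyPrefix() {
        // only the part of the key after the row is transformed
        return PartialKey.ROW;
    }

    @Override
    protected void transformRange(SortedKeyValueIterator<Key, Value> input, KVBuffer output)
            throws IOException {
        while (input.hasTop()) {
            // replaceColumnFamily(...) returns a copy of the key with the new family
            output.append(replaceColumnFamily(input.getTopKey(), NEW_FAMILY),
                    input.getTopValue());
            input.next();
        }
    }
}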
On Wednesday 27 May 2015 05:38 PM, Andrew Wells wrote:

to implement that iterator.

looks like you will only need to override replaceColumnFamily

and this looks to return the new ColumnFamily via the argument. So 
manipulate the Text object provided.


On Wed, May 27, 2015 at 8:06 AM, Andrew Wells awe...@clearedgeit.com 
mailto:awe...@clearedgeit.com wrote:


Looks like you want to override these methods:

protected Key replaceColumnFamily(Key originalKey, org.apache.hadoop.io.Text newColFam)
    http://accumulo.apache.org/1.6/apidocs/org/apache/accumulo/core/iterators/user/TransformingIterator.html#replaceColumnFamily%28org.apache.accumulo.core.data.Key,%20org.apache.hadoop.io.Text%29
    Make a new key with all parts (including delete flag) coming from originalKey
    but use newColFam as the column family.

protected Key replaceColumnQualifier(Key originalKey, org.apache.hadoop.io.Text newColQual)
    http://accumulo.apache.org/1.6/apidocs/org/apache/accumulo/core/iterators/user/TransformingIterator.html#replaceColumnQualifier%28org.apache.accumulo.core.data.Key,%20org.apache.hadoop.io.Text%29
    Make a new key with all parts (including delete flag) coming from originalKey
    but use newColQual as the column qualifier.

protected Key replaceColumnVisibility(Key originalKey, org.apache.hadoop.io.Text newColVis)
    http://accumulo.apache.org/1.6/apidocs/org/apache/accumulo/core/iterators/user/TransformingIterator.html#replaceColumnVisibility%28org.apache.accumulo.core.data.Key,%20org.apache.hadoop.io.Text%29
    Make a new key with all parts (including delete flag) coming from originalKey
    but use newColVis as the column visibility.

protected Key replaceKeyParts(Key originalKey, org.apache.hadoop.io.Text newColQual, org.apache.hadoop.io.Text newColVis)
    http://accumulo.apache.org/1.6/apidocs/org/apache/accumulo/core/iterators/user/TransformingIterator.html#replaceKeyParts%28org.apache.accumulo.core.data.Key,%20org.apache.hadoop.io.Text,%20org.apache.hadoop.io.Text%29
    Make a new key with a column qualifier, and column visibility.

protected Key replaceKeyParts(Key originalKey, org.apache.hadoop.io.Text newColFam, org.apache.hadoop.io.Text newColQual, org.apache.hadoop.io.Text newColVis)
    http://accumulo.apache.org/1.6/apidocs/org/apache/accumulo/core/iterators/user/TransformingIterator.html#replaceKeyParts%28org.apache.accumulo.core.data.Key,%20org.apache.hadoop.io.Text,%20org.apache.hadoop.io.Text,%20org.apache.hadoop.io.Text%29
    Make a new key with a column family, column qualifier, and column visibility.





On Wed, May 27, 2015 at 7:40 AM, shweta.agrawal
shweta.agra...@orkash.com mailto:shweta.agra...@orkash.com wrote:

Thanks for all the suggestion.

I read about TransformingIterator and started implementing
it,  I extended this class and tried to override its abstract
method. But I am not able to get where and what to write to
change column family?

So please provide your suggestions.

Thanks
Shweta



On Tuesday 26 May 2015 08:33 PM, Adam Fuchs wrote:

This can also be done with a row-doesn't-fit-into-memory
constraint. You won't need to hold the second column
in-memory if your iterator tree deep copies, filters,
transforms and merges. Exhibit A:

[HeapIterator-derivative]
 |_
 | \
[transform-graph1-to-graph2]  \
 |   \
[column-family-graph1][all-but-column-family-graph1]

With this design, you can subclass

Re: How to install spark in spark on yarn mode

2015-04-30 Thread madhvi

Hi,

You have to specify the worker nodes of the Spark cluster at the time of
cluster configuration.


Thanks
Madhvi
On Thursday 30 April 2015 01:30 PM, xiaohe lan wrote:

Hi Madhvi,

If I only install Spark on one node and use spark-submit to run an
application, which are the worker nodes? And where are the executors?


Thanks,
Xiaohe

On Thu, Apr 30, 2015 at 12:52 PM, madhvi madhvi.gu...@orkash.com 
mailto:madhvi.gu...@orkash.com wrote:


Hi,
Follow the instructions to install on the following link:
http://mbonaci.github.io/mbo-spark/
You don't need to install Spark on every node. Just install it on
one node, or you can also install it on a remote system and build a
Spark cluster.
Thanks
Madhvi

On Thursday 30 April 2015 09:31 AM, xiaohe lan wrote:

Hi experts,

I see spark on yarn has yarn-client and yarn-cluster mode. I
also have a 5 nodes hadoop cluster (hadoop 2.4). How to
install spark if I want to try the spark on yarn mode.

Do I need to install spark on the each node of hadoop cluster ?

Thanks,
Xiaohe



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
mailto:user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
mailto:user-h...@spark.apache.org






Re: How to install spark in spark on yarn mode

2015-04-29 Thread madhvi

Hi,
Follow the instructions to install on the following link:
http://mbonaci.github.io/mbo-spark/
You don't need to install Spark on every node. Just install it on one node,
or you can also install it on a remote system and build a Spark cluster.

Thanks
Madhvi
On Thursday 30 April 2015 09:31 AM, xiaohe lan wrote:

Hi experts,

I see Spark on YARN has yarn-client and yarn-cluster modes. I also have
a 5-node Hadoop cluster (Hadoop 2.4). How do I install Spark if I want
to try the Spark on YARN mode?


Do I need to install Spark on each node of the Hadoop cluster?

Thanks,
Xiaohe



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Serialization error

2015-04-28 Thread madhvi

Thank you, Deepak. It worked.
Madhvi
On Tuesday 28 April 2015 01:39 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote:


val conf = new SparkConf()
  .setAppName("detail")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.mb", arguments.get("buffersize").get)
  .set("spark.kryoserializer.buffer.max.mb", arguments.get("maxbuffersize").get)
  .set("spark.driver.maxResultSize", arguments.get("maxResultSize").get)
  .registerKryoClasses(Array(classOf[org.apache.accumulo.core.data.Key]))


Can you try this ?


On Tue, Apr 28, 2015 at 11:11 AM, madhvi madhvi.gu...@orkash.com 
mailto:madhvi.gu...@orkash.com wrote:


Hi,

While connecting to Accumulo through Spark by creating a Spark RDD, I am
getting the following error:
 object not serializable (class: org.apache.accumulo.core.data.Key)

This is due to the 'Key' class of Accumulo, which does not
implement the Serializable interface. How can this be solved so that Accumulo
can be used with Spark?

Thanks
Madhvi

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
mailto:user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
mailto:user-h...@spark.apache.org




--
Deepak





Re: Serialization error

2015-04-28 Thread madhvi

On Tuesday 28 April 2015 01:39 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote:


val conf = new SparkConf()
  .setAppName("detail")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.mb", arguments.get("buffersize").get)
  .set("spark.kryoserializer.buffer.max.mb", arguments.get("maxbuffersize").get)
  .set("spark.driver.maxResultSize", arguments.get("maxResultSize").get)
  .registerKryoClasses(Array(classOf[org.apache.accumulo.core.data.Key]))


Can you try this ?


On Tue, Apr 28, 2015 at 11:11 AM, madhvi madhvi.gu...@orkash.com 
mailto:madhvi.gu...@orkash.com wrote:


Hi,

While connecting to Accumulo through Spark by creating a Spark RDD, I am
getting the following error:
 object not serializable (class: org.apache.accumulo.core.data.Key)

This is due to the 'Key' class of Accumulo, which does not
implement the Serializable interface. How can this be solved so that Accumulo
can be used with Spark?

Thanks
Madhvi

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
mailto:user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
mailto:user-h...@spark.apache.org




--
Deepak


Hi Deepak,

The snippet you provided is Scala, but I am working in Java. I am trying
the same thing in Java, but can you please specify in detail what the
parameters you mentioned there, such as 'arguments', refer to?


Thanks
Madhvi
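
A rough Java equivalent of the Scala snippet Deepak posted (quoted above). The buffer and result-size values here are placeholders to be tuned; in the Scala version they came from a command-line arguments map.

import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class KryoConfExample {
    public static JavaSparkContext createContext() {
        SparkConf conf = new SparkConf()
                .setAppName("detail")
                .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
                .set("spark.kryoserializer.buffer.mb", "64")        // placeholder value
                .set("spark.kryoserializer.buffer.max.mb", "512")   // placeholder value
                .set("spark.driver.maxResultSize", "1g");           // placeholder value
        // register the non-Serializable Accumulo classes with Kryo
        conf.registerKryoClasses(new Class<?>[] { Key.class, Value.class });
        return new JavaSparkContext(conf);
    }
}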


Serialization error

2015-04-27 Thread madhvi

Hi,

While connecting to Accumulo through Spark by creating a Spark RDD, I am
getting the following error:

 object not serializable (class: org.apache.accumulo.core.data.Key)

This is due to the 'Key' class of Accumulo, which does not implement
the Serializable interface. How can this be solved so that Accumulo can be used
with Spark?


Thanks
Madhvi

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Serialization error

2015-04-27 Thread madhvi

Hi,

While connecting to Accumulo through Spark by creating a Spark RDD, I am
getting the following error:

 object not serializable (class: org.apache.accumulo.core.data.Key)

This is due to the 'Key' class of Accumulo, which does not implement
the Serializable interface. How can this be solved so that Accumulo can be used
with Spark?


Thanks
Madhvi


Re: Error in creating spark RDD

2015-04-23 Thread madhvi

On Thursday 23 April 2015 12:22 PM, Akhil Das wrote:
Here's a complete scala example 
https://github.com/bbux-proteus/spark-accumulo-examples/blob/1dace96a115f29c44325903195c8135edf828c86/src/main/scala/org/bbux/spark/AccumuloMetadataCount.scala


Thanks
Best Regards

On Thu, Apr 23, 2015 at 12:19 PM, Akhil Das 
ak...@sigmoidanalytics.com mailto:ak...@sigmoidanalytics.com wrote:


Change your import from mapred to mapreduce. like :

import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;

Thanks
Best Regards

On Wed, Apr 22, 2015 at 2:42 PM, madhvi madhvi.gu...@orkash.com
mailto:madhvi.gu...@orkash.com wrote:

Hi,

I am creating a spark RDD through accumulo writing like:

JavaPairRDD<Key, Value> accumuloRDD =
    sc.newAPIHadoopRDD(accumuloJob.getConfiguration(), AccumuloInputFormat.class, Key.class,
    Value.class);

But I am getting the following error and it is not getting
compiled:

Bound mismatch: The generic method
newAPIHadoopRDD(Configuration, Class<F>, Class<K>, Class<V>)
of type JavaSparkContext is not applicable for the arguments
(Configuration, Class<AccumuloInputFormat>, Class<Key>,
Class<Value>). The inferred type AccumuloInputFormat is not a
valid substitute for the bounded parameter F extends
InputFormat<K,V>

I am using the following import statements:

import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;

I am not getting what is the problem in this.

Thanks
Madhvi


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
mailto:user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
mailto:user-h...@spark.apache.org




Hi,

Thanks.I got that solved:)

madhvi
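
For reference, a minimal sketch of the corrected call once the import points at the mapreduce package, as Akhil suggested; sc and accumuloJob stand for the JavaSparkContext and the configured Hadoop Job from the original post.

import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat; // mapreduce, not mapred
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.mapreduce.Job;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class AccumuloRDDExample {
    public static JavaPairRDD<Key, Value> createRDD(JavaSparkContext sc, Job accumuloJob) {
        // newAPIHadoopRDD requires an InputFormat from the new (mapreduce) API
        return sc.newAPIHadoopRDD(accumuloJob.getConfiguration(),
                AccumuloInputFormat.class, Key.class, Value.class);
    }
}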


Error in creating spark RDD

2015-04-22 Thread madhvi

Hi,

I am creating a spark RDD through accumulo writing like:

JavaPairRDD<Key, Value> accumuloRDD =
    sc.newAPIHadoopRDD(accumuloJob.getConfiguration(), AccumuloInputFormat.class, Key.class,
    Value.class);

But I am getting the following error and it is not getting compiled:

Bound mismatch: The generic method newAPIHadoopRDD(Configuration,
Class<F>, Class<K>, Class<V>) of type JavaSparkContext is not applicable
for the arguments (Configuration, Class<AccumuloInputFormat>,
Class<Key>, Class<Value>). The inferred type AccumuloInputFormat is not
a valid substitute for the bounded parameter F extends InputFormat<K,V>


I am using the following import statements:

import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;

I am not getting what is the problem in this.

Thanks
Madhvi


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Running spark over HDFS

2015-04-21 Thread madhvi

On Tuesday 21 April 2015 12:12 PM, Akhil Das wrote:

Your spark master should be spark://swetha:7077 :)

Thanks
Best Regards

On Mon, Apr 20, 2015 at 2:44 PM, madhvi madhvi.gu...@orkash.com 
mailto:madhvi.gu...@orkash.com wrote:


PFA screenshot of my cluster UI

Thanks
On Monday 20 April 2015 02:27 PM, Akhil Das wrote:

Are you seeing your task being submitted to the UI? Under
completed or running tasks? How much resources are you allocating
for your job? Can you share a screenshot of your cluster UI and
the code snippet that you are trying to run?

Thanks
Best Regards

On Mon, Apr 20, 2015 at 12:37 PM, madhvi madhvi.gu...@orkash.com
mailto:madhvi.gu...@orkash.com wrote:

Hi,

I did what you told me, but now it is giving the following error:
ERROR TaskSchedulerImpl: Exiting due to error from cluster
scheduler: All masters are unresponsive! Giving up.

On UI it is showing that master is working

Thanks
Madhvi

On Monday 20 April 2015 12:28 PM, Akhil Das wrote:

In your eclipse, while you create your SparkContext, set the
master uri as shown in the web UI's top left corner like:
spark://someIPorHost:7077 and it should be fine.

Thanks
Best Regards

On Mon, Apr 20, 2015 at 12:22 PM, madhvi
madhvi.gu...@orkash.com mailto:madhvi.gu...@orkash.com
wrote:

Hi All,

I am new to spark and have installed spark cluster over
my system having hadoop cluster.I want to process data
stored in HDFS through spark.

When I am running code in eclipse it is giving the
following warning repeatedly:
scheduler.TaskSchedulerImpl: Initial job has not
accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources.

I have made changes to spark-env.sh file as below
export SPARK_WORKER_INSTANCES=1
export HADOOP_CONF_DIR=/root/Documents/hadoop/etc/hadoop
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_EXECUTOR_MEMORY=1g

I am running the spark standalone cluster.In cluster UI
it is showing all workers with allocated resources but
still its not working.
what other configurations are needed to be changed?

Thanks
Madhvi Gupta


-
To unsubscribe, e-mail:
user-unsubscr...@spark.apache.org
mailto:user-unsubscr...@spark.apache.org
For additional commands, e-mail:
user-h...@spark.apache.org
mailto:user-h...@spark.apache.org









-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
mailto:user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
mailto:user-h...@spark.apache.org



Thanks Akhil,

It worked fine after replacing IP with the hostname and running the code 
by making jar of it by spark submit


Madhvi
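
For anyone hitting the same "All masters are unresponsive" error, a minimal sketch of the working setup described above. The master URL must match what the master's web UI shows, character for character; spark://swetha:7077 is the example from Akhil's reply earlier in this thread, and the HDFS URI below reuses that hostname purely as a placeholder.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HdfsWordCountSetup {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf()
                .setAppName("JavaWordCount")
                // use the exact spark:// URL shown at the top-left of the master web UI
                // (a hostname here, rather than a raw IP address)
                .setMaster("spark://swetha:7077");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        // read the input directly from HDFS by URI (placeholder namenode host/port)
        JavaRDD<String> lines = ctx.textFile("hdfs://swetha:9000/spark.txt", 1);
        System.out.println("line count: " + lines.count());
        ctx.stop();
    }
}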


Spark and accumulo

2015-04-20 Thread madhvi

Hi all,

Is there anything to integrate spark with accumulo or make spark to 
process over accumulo data?


Thanks
Madhvi Gupta

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Running spark over HDFS

2015-04-20 Thread madhvi

On Monday 20 April 2015 03:18 PM, Archit Thakur wrote:
There are a lot of similar problems shared and resolved by users on this
same portal. I have been part of those discussions before. Search for
those, please try them, and let us know if you still face problems.


Thanks and Regards,
Archit Thakur.

On Mon, Apr 20, 2015 at 3:05 PM, madhvi madhvi.gu...@orkash.com 
mailto:madhvi.gu...@orkash.com wrote:


On Monday 20 April 2015 02:52 PM, SURAJ SHETH wrote:

Hi Madhvi,
I think the memory requested by your job, i.e. 2.0 GB is higher
than what is available.
Please request for 256 MB explicitly while creating Spark Context
and try again.

Thanks and Regards,
Suraj Sheth



Tried the same but still no luck:|

Madhvi



Hi,

It's still not working. I don't see where I am mistaken or what I am doing
wrong. Following are the configurations in my spark-env.sh file:

export HADOOP_CONF_DIR=/root/Documents/hadoop/etc/hadoop
export SPARK_EXECUTOR_MEMORY=256m
export SPARK_DRIVER_MEMORY=256m
I am running the command on the shell:
./bin/spark-submit --class Spark.testSpark.JavaWordCount --master 
yarn-client --num-executors 2 --driver-memory 256m 
--executor-memory 256m --executor-cores 1 lib/Untitled.jar


Madhvi



Re: Running spark over HDFS

2015-04-20 Thread madhvi

On Monday 20 April 2015 02:52 PM, SURAJ SHETH wrote:

Hi Madhvi,
I think the memory requested by your job, i.e. 2.0 GB is higher than 
what is available.
Please request for 256 MB explicitly while creating Spark Context and 
try again.


Thanks and Regards,
Suraj Sheth



Tried the same but still no luck:|

Madhvi


Re: Running spark over HDFS

2015-04-20 Thread madhvi

Hi,

I did what you told me, but now it is giving the following error:
ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: 
All masters are unresponsive! Giving up.


On UI it is showing that master is working

Thanks
Madhvi
On Monday 20 April 2015 12:28 PM, Akhil Das wrote:
In your eclipse, while you create your SparkContext, set the master 
uri as shown in the web UI's top left corner like: 
spark://someIPorHost:7077 and it should be fine.


Thanks
Best Regards

On Mon, Apr 20, 2015 at 12:22 PM, madhvi madhvi.gu...@orkash.com 
mailto:madhvi.gu...@orkash.com wrote:


Hi All,

I am new to spark and have installed spark cluster over my system
having hadoop cluster.I want to process data stored in HDFS
through spark.

When I am running code in eclipse it is giving the following
warning repeatedly:
scheduler.TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are
registered and have sufficient resources.

I have made changes to spark-env.sh file as below
export SPARK_WORKER_INSTANCES=1
export HADOOP_CONF_DIR=/root/Documents/hadoop/etc/hadoop
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_EXECUTOR_MEMORY=1g

I am running the spark standalone cluster.In cluster UI it is
showing all workers with allocated resources but still its not
working.
what other configurations are needed to be changed?

Thanks
Madhvi Gupta

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
mailto:user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
mailto:user-h...@spark.apache.org






Re: Running spark over HDFS

2015-04-20 Thread madhvi
No, I am not seeing the task I am running on the UI. Also, I have
set instances=1, but the UI is showing 2 workers. I am running the Java
word count code exactly, but I have the text file in HDFS. Following is
the part of my code I am writing to make the connection:


SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
sparkConf.setMaster("spark://192.168.0.119:7077");
JavaSparkContext ctx = new JavaSparkContext(sparkConf);
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://192.168.0.119:9000");
FileSystem dfs = FileSystem.get(conf);
JavaRDD<String> lines =
    ctx.textFile(dfs.getWorkingDirectory() + "/spark.txt", 1);


Thanks
On Monday 20 April 2015 02:27 PM, Akhil Das wrote:
Are you seeing your task being submitted to the UI? Under completed or 
running tasks? How much resources are you allocating for your job? Can 
you share a screenshot of your cluster UI and the code snippet that 
you are trying to run?


Thanks
Best Regards

On Mon, Apr 20, 2015 at 12:37 PM, madhvi madhvi.gu...@orkash.com 
mailto:madhvi.gu...@orkash.com wrote:


Hi,

I did what you told me, but now it is giving the following error:
ERROR TaskSchedulerImpl: Exiting due to error from cluster
scheduler: All masters are unresponsive! Giving up.

On UI it is showing that master is working

Thanks
Madhvi

On Monday 20 April 2015 12:28 PM, Akhil Das wrote:

In your eclipse, while you create your SparkContext, set the
master uri as shown in the web UI's top left corner like:
spark://someIPorHost:7077 and it should be fine.

Thanks
Best Regards

On Mon, Apr 20, 2015 at 12:22 PM, madhvi madhvi.gu...@orkash.com
mailto:madhvi.gu...@orkash.com wrote:

Hi All,

I am new to spark and have installed spark cluster over my
system having hadoop cluster.I want to process data stored in
HDFS through spark.

When I am running code in eclipse it is giving the following
warning repeatedly:
scheduler.TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are
registered and have sufficient resources.

I have made changes to spark-env.sh file as below
export SPARK_WORKER_INSTANCES=1
export HADOOP_CONF_DIR=/root/Documents/hadoop/etc/hadoop
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_EXECUTOR_MEMORY=1g

I am running the spark standalone cluster.In cluster UI it is
showing all workers with allocated resources but still its
not working.
what other configurations are needed to be changed?

Thanks
Madhvi Gupta

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
mailto:user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
mailto:user-h...@spark.apache.org









Running spark over HDFS

2015-04-20 Thread madhvi

Hi All,

I am new to Spark and have installed a Spark cluster on my system, which has
a Hadoop cluster. I want to process data stored in HDFS through Spark.


When I am running code in eclipse it is giving the following warning 
repeatedly:
scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; 
check your cluster UI to ensure that workers are registered and have 
sufficient resources.


I have made changes to spark-env.sh file as below
export SPARK_WORKER_INSTANCES=1
export HADOOP_CONF_DIR=/root/Documents/hadoop/etc/hadoop
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_EXECUTOR_MEMORY=1g

I am running the Spark standalone cluster. In the cluster UI it is showing
all workers with allocated resources, but still it's not working.

what other configurations are needed to be changed?

Thanks
Madhvi Gupta

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



integrating Accumulo with solr

2015-04-09 Thread madhvi

Hi,

I have created Lucene indexes, in HDFS, of data stored in Accumulo.
Lucene queries are working fine over them, but I want those indexes to be
searched via Accumulo, meaning the Lucene queries should run through Accumulo. Do you
have any idea about that, if it is related to what you are trying to do?

Madhvi




Lucene and accumulo

2015-04-09 Thread madhvi

Hi,

Can we use the Lucene API to store and search data in Accumulo? Can we have a
system which combines the capabilities of both Accumulo and
Lucene, so that data which is actually stored in Accumulo as key-value pairs
can be searched by the Lucene API? I have created Lucene indexes of data
stored in Accumulo, in HDFS.
Lucene queries are working fine over them, but I want those
indexes to be searched via Accumulo, meaning the Lucene queries should run
through Accumulo.


Madhvi Gupta

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



[jira] [Created] (LUCENE-6387) Storing lucene indexes in HDFS

2015-04-01 Thread madhvi gupta (JIRA)
madhvi gupta created LUCENE-6387:


 Summary: Storing lucene indexes in HDFS
 Key: LUCENE-6387
 URL: https://issues.apache.org/jira/browse/LUCENE-6387
 Project: Lucene - Core
  Issue Type: Test
  Components: core/search, core/store
Affects Versions: 4.10.2
 Environment: Lucene 4.10.2,Accumulo 1.6.1,Hadoop 2.5.0 
Reporter: madhvi gupta


I have created Lucene indexes, in HDFS, of data stored in Accumulo, but while
querying over those indexes I am getting a CorruptIndexException. Can anyone help
me out with this, or tell me why the Accumulo data is not getting indexed? Is
there anything I might be missing? When I indexed data from the local file
system, it was working fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[R] How to Install R 3.1.0 on ubuntu 12.0.4

2014-10-14 Thread madhvi

Hi,
Can anyone tell me the steps to install R 3.1.0 and rstudio on ubuntu 
12.0.4.


Thanks
Madhvi

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to Install R 3.1.0 on ubuntu 12.0.4

2014-10-14 Thread madhvi

Hi,
How do I install RStudio after downloading the Debian package?

Madhvi
On Tuesday 14 October 2014 12:09 PM, Pascal Oettli wrote:

Please reply to the list, not only to me.

RStudio is for Ubuntu 10.04+ (please note the +).

About R 3.1.0, you probably will have to compile from the source.

Regards,
Pascal


On Tue, Oct 14, 2014 at 3:31 PM, madhvi madhvi.gu...@orkash.com wrote:

Hi,
I have followed these links but it is giving R version 3.1.1 and R studio
for ubuntu 10.04

Madhvi

On Tuesday 14 October 2014 11:58 AM, Pascal Oettli wrote:

Hi,

http://cran.r-project.org/bin/linux/ubuntu/
http://www.rstudio.com/products/rstudio/download/

Enjoy,
Pascal

On Tue, Oct 14, 2014 at 3:17 PM, madhvi madhvi.gu...@orkash.com wrote:

Hi,
Can anyone tell me the steps to install R 3.1.0 and rstudio on ubuntu
12.0.4.

Thanks
Madhvi

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.








__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to Install R 3.1.0 on ubuntu 12.0.4

2014-10-14 Thread madhvi

Hi,
Thanks for your help.I got it installed.


Madhvi
On Tuesday 14 October 2014 12:50 PM, Pascal Oettli wrote:

The support for RStudio is located here: https://support.rstudio.com

Regards,
Pascal

On Tue, Oct 14, 2014 at 4:08 PM, madhvi madhvi.gu...@orkash.com wrote:

Hi,
How to install RStudio after downloading debian  package

Madhvi

On Tuesday 14 October 2014 12:09 PM, Pascal Oettli wrote:

Please reply to the list, not only to me.

RStudio is for Ubuntu 10.04+ (please note the +).

About R 3.1.0, you probably will have to compile from the source.

Regards,
Pascal


On Tue, Oct 14, 2014 at 3:31 PM, madhvi madhvi.gu...@orkash.com wrote:

Hi,
I have followed these links but it is giving R version 3.1.1 and R studio
for ubuntu 10.04

Madhvi

On Tuesday 14 October 2014 11:58 AM, Pascal Oettli wrote:

Hi,

http://cran.r-project.org/bin/linux/ubuntu/
http://www.rstudio.com/products/rstudio/download/

Enjoy,
Pascal

On Tue, Oct 14, 2014 at 3:17 PM, madhvi madhvi.gu...@orkash.com wrote:

Hi,
Can anyone tell me the steps to install R 3.1.0 and rstudio on ubuntu
12.0.4.

Thanks
Madhvi

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.










__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [webpy] Design a form

2012-03-21 Thread Madhvi gupta
I checked; it is at the start of the file. And I have not added
anything to formtest.html.

On Wed, Mar 21, 2012 at 1:33 PM, Anand Chitipothu anandol...@gmail.comwrote:

 On 21 March 2012 12:49 PM, Madhvi gupta madhvi1...@iiitd.ac.in wrote:
  Sorry for bothering you, but I am not getting what is wrong.
  I made two files, code.py and formtest.html, according to
  http://webpy.org/form. I kept formtest.html in the templates folder. Then I
  ran code.py using the command python code.py and then opened the local
  host. It gave me a number of errors (PFA). I am not getting what is wrong
  here; please help me.

 Looks like you have $def with (form) in the middle of the template.
 That is supposed to be start of the template.

 Have you added anything more to the formtest.html?

 Anand

 --
 You received this message because you are subscribed to the Google Groups
 web.py group.
 To post to this group, send email to webpy@googlegroups.com.
 To unsubscribe from this group, send email to
 webpy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/webpy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
web.py group.
To post to this group, send email to webpy@googlegroups.com.
To unsubscribe from this group, send email to 
webpy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/webpy?hl=en.



Re: [webpy] Design a form

2012-03-21 Thread Madhvi gupta
Hello,
   I want to make a web app using web.py which takes input from the user,
that is, a username and password, and in turn communicates with the Google server.
I am not sure how I should proceed. I thought of making a form which asks
for the username and password and then passes them to that Python function. Please
tell me if I am on the right path.

Regards,
Madhvi

On Wed, Mar 21, 2012 at 3:02 PM, Madhvi gupta madhvi1...@iiitd.ac.inwrote:

 Thanks a lot! That was a TextEdit problem; it was introducing some formatting
 of its own.

 Regards,
 Madhvi


 On Wed, Mar 21, 2012 at 2:53 PM, Anand Chitipothu anandol...@gmail.comwrote:

  On 21 March 2012 2:04 PM, Madhvi gupta madhvi1...@iiitd.ac.in wrote:
  I checked it is in the starting of the file only. And I have not added
  anything to formtest.html.

 From the stack trace it looks like you have the following line in your
 template.

 <p class="p1">$def with (form)</p>

 --
 You received this message because you are subscribed to the Google Groups
 web.py group.
 To post to this group, send email to webpy@googlegroups.com.
 To unsubscribe from this group, send email to
 webpy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/webpy?hl=en.




-- 
You received this message because you are subscribed to the Google Groups 
web.py group.
To post to this group, send email to webpy@googlegroups.com.
To unsubscribe from this group, send email to 
webpy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/webpy?hl=en.



[webpy] Design a form

2012-03-20 Thread Madhvi gupta
I am trying to use web.py to design a web app. As a first step I
tried to use the example given at http://webpy.org/form#example, but I am
not able to do so. I am confused about where to keep the sample
formtest.html. I am working on Mac OS X and Python 2.6. Please help me,
as I am new to this.

Regards,
Madhvi

-- 
You received this message because you are subscribed to the Google Groups 
web.py group.
To post to this group, send email to webpy@googlegroups.com.
To unsubscribe from this group, send email to 
webpy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/webpy?hl=en.



[grosir_komputer] Stunning Trisha Photoshoot

2010-08-23 Thread Madhvi
 Stunning Trisha Photoshoot 
 CLICK HERE TO VIEW ALL IMAGE

http://masti2mail.com/index.php?option=com_contentview=articleid=974:stunning-trisha-photoshootcatid=931:tollywoodItemid=108
[1]
  [2]

Links:
--
[1]
http://masti2mail.com/index.php?option=com_contentview=articleid=974:stunning-trisha-photoshootcatid=931:tollywoodItemid=108
[2] http://groups.yahoo.com/group/masti2mail/join





[grosir_komputer] Katy Perry flaunts her California Gurls cleaavage

2010-08-09 Thread Madhvi
 Katy Perry flaunts her California Gurls cleaavage
 CLICK HERE TO VIEW ALL IMAGE

http://masti2mail.com/index.php?option=com_contentview=articleid=3783:katy-perry-flaunts-her-california-gurls-cleaavagecatid=908:hollywoodItemid=102
[1]
  [2]

Links:
--
[1]
http://masti2mail.com/index.php?option=com_contentview=articleid=3783:katy-perry-flaunts-her-california-gurls-cleaavagecatid=908:hollywoodItemid=102
[2] http://groups.yahoo.com/group/masti2mail/join






[grosir_komputer] Actress Kausha Hot Sexxy Latest Unseen Photos

2010-07-06 Thread Madhvi
 Actress Kausha Hot Sexxy Latest Unseen Photos 
 CLICK HERE TO VIEW ALL IMAGE

http://masti2mail.com/index.php?option=com_contentview=articleid=3548:actress-kausha-hot-sexxy-latest-unseen-photoscatid=931:tollywoodItemid=108
  [1]

Links:
--
[1] http://groups.yahoo.com/group/masti2mail/join






[grosir_komputer] Lingerie babe Irina Sheik Expose in Bikini

2010-06-15 Thread Madhvi
 Lingerie babe Irina Sheik Expose in Bikini
 CLICK HERE TO VIEW ALL IMAGE

http://masti2mail.com/index.php?option=com_contentview=articleid=1660:lingerie-babe-irina-sheikcatid=908:hollywoodItemid=102
  [1]

Links:
--
[1] http://groups.yahoo.com/group/masti2mail/join






[grosir_komputer] Latest Stills of Hot n Sexy illeana

2010-05-21 Thread Madhvi
 [1]
 Latest  Stills of illeana 
 CLICK HERE TO VIEW  ALL IMAGE

http://patelmantra.com/index.php?option=com_contentview=articleid=1770:latest-stills-of-illeanacatid=37:tollywoodItemid=73
  [2]

Links:
--
[1] http://groups.yahoo.com/group/masti2mail/join
[2] http://groups.yahoo.com/group/masti2mail/join






rlm_counter

2005-11-29 Thread Madhvi Gokool

Hello
Freeradius version is 1.0.4
I am using plain text users file.  I have implemented counters  for each 
user - the counter should reset at the end of each month. I tested the 
counter a while ago for a particular user and it worked.
I have just implemented counter usage for the rest of the users and got the 
following error : -
Auth: Invalid user (rlm_counter: Maximum monthly usage time reached): 
[kalc/CHAP-Password] (from client as port 257 cli tel number)

this user's config in the users file was as follows :-

kalc   Auth-Type = Local, Password = kalc, Calling-Station-Id = ,  , 
Max-Monthly-Session := 36000

   Service-Type = Framed-User,
   Framed-Protocol = PPP

Any idea as to what could have caused this.
How can I log when counters roll over?
How can I verify the counter usage - how many seconds used and left for each 
user??


Thanx in advance for your prompt reply.

Regds
Madhvi



- 
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html


Fw: Billing and provisioning

2005-10-25 Thread Madhvi Gokool


Hi
As an addendum to below:-
Say the user has 500 s of connection time left and he replenishes his account
(36000 s): is there another way, apart from doing it manually, that we can alter
Max-Monthly-Session? It should become 36500 s. We are assuming that the
counter was not reset.


M

- Original Message - 
From: Madhvi Gokool [EMAIL PROTECTED]

To: FreeRadius users mailing list freeradius-users@lists.freeradius.org
Sent: Tuesday, October 25, 2005 9:30 AM
Subject: Billing and provisioning



Hello
Here is the scenario we want to implement:-

1. User pays for 10 hours of internet access. We set Max-Monthly-Session
to 36000.
We want to verify the number of seconds used and left for each user on a 
daily basis.  The results should be mailed to the ISP admin and a mail 
sent to each user concerning his usage.  When his balance is say 500s, he 
is requested to recharge his account.

We are using Freeradius as AAA server.  We are not using any databases.
Can anybody recommend a script or open source package that can do the 
above? We need to urgently implement the above.


Regards
M

- List info/subscribe/unsubscribe? See 
http://www.freeradius.org/list/users.html 


- 
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html


Billing and provisioning

2005-10-24 Thread Madhvi Gokool

Hello
Here is the scenario we want to implement:-

1. User pays for 10 hours of internet access. We set Max-Monthly-Session
to 36000.
We want to verify the number of seconds used and left for each user on a 
daily basis.  The results should be mailed to the ISP admin and a mail sent 
to each user concerning his usage.  When his balance is say 500s, he is 
requested to recharge his account.

We are using Freeradius as AAA server.  We are not using any databases.
Can anybody recommend a script or open source package that can do the above? 
We need to urgently implement the above.


Regards
M

- 
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html


Re: radwho

2005-10-17 Thread Madhvi Gokool

Hi

I added the RADIUS server as client in the 
/usr/local/etc/raddb/radiusd.conf. Used radzap and the entries were removed.


Thanx for your help.
M

- Original Message - 
From: Alan DeKok [EMAIL PROTECTED]

To: FreeRadius users mailing list freeradius-users@lists.freeradius.org
Sent: Wednesday, October 12, 2005 6:33 PM
Subject: Re: radwho



Madhvi Gokool [EMAIL PROTECTED] wrote:

When first testing the freeradius server, radwho still showed users as
connected when infact they had disconnected.


 If the server doesn't receive an accounting stop message, it doesn't
know they've disconnected.

I managed to have the server up and running properly but the above 
entries
are causing problems when I restrict Simultaneous use for user steve. 
What

should I do to remove those entries.?


 Use checkrad, so Simultaneous-Use will double-check those entries.

 Or, radzap.

 Alan DeKok.

-
List info/subscribe/unsubscribe? See 
http://www.freeradius.org/list/users.html 


- 
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html


dialup admin

2005-10-17 Thread Madhvi Gokool

Hi
The documentation of dialup admin says it works with a database.
My current users file is plain text (I manually add users and their
attributes). Can dialup admin be tweaked to work with this users file? Or
is there a script that we can use to facilitate the administration of the
users file?



Regards
Madhvi

- 
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html


freeradius - Called-Station-Id, reporting

2005-10-11 Thread Madhvi Gokool





Hello
We are currently using FreeRADIUS to authenticate dialup users. We are
investigating several ways of improving the service offered to the dialup
users and have encountered several issues:-


1. The Called-Station-Id is a 4-digit number in the detail log file.
While testing authentication based on the Called Telephone number, the 
Called-Station-Id had to be specified as 1300 instead of 2131300 in 
the users file .
If we have two PRI lines , say 3120101 and 2130101, the Called-Station-Id 
will be seen as 0101 in both cases.  So we'll not be able to 
differentiate between the two numbers.  Is there a way to configure 
freeradius or Cisco5350 RAS to use a 7-digit number as Called-Station-Id. 
The Calling -Station-Id however is a 7-digit number.
2.  I have also tested the Login-Time, Max-Daily-Session and 
Max-Monthly-Session attributes.  If the user is dialing outside the 
timeframe set in the users file or has exceeded his daily/monthly quota, 
he will not get connected. Excerpt of radius log file below : -
Thu Oct  6 15:26:18 2005 : Auth: Invalid user (rlm_counter: Maximum daily 
usage
time reached): [steve/CHAP-Password] (from client as port 255 cli 
2117039)

As administrator we'll know why the user got disconnected .
For the user side now, how can we inform the user that
i) he has exceeded his quota and that he should replenish his account.  or
ii) he's dialing outside the timeframe he paid for.
iii) he's got xxx seconds of connection time left
Are there any scripts that will interact with freeradius and send these 
users an email or sms?


3. Does dialup admin work with plain text users file ?
Thanx in advance for your help.
Regds
Madhvi



- 
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html


Fw: FreeRadius 1.0.4

2005-08-25 Thread Madhvi Gokool





Hello
We have planned to replace our cistron radius servers with Freeradius.
We have the following setup :-
1. Users dial in to access their mail and internet or work on an 
application server

2. Users dial in to access a specific server and nothing else.
After they are authenticated , users get a static IP address .
We populate the users file manually and do not create Unix users. Can we
use huntgroups to group, say, mail users and internet users, if they are not
Unix users?
What attribute(s) should I use  to allow the users in Scenario 2 access to 
their server?


The NAS will either be a 3Com TCM or a Cisco access server

On the access server, we can implement access-lists to allow/deny access 
based on the assigned Ip addresses, but we'd prefer using RADIUS 
attributes to do so.


Thank you in advance for your help.
Madhvi 


___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Fw: FreeRadius 1.0.4

2005-08-25 Thread Madhvi Gokool


- Original Message - 
From: Madhvi Gokool [EMAIL PROTECTED]

To: freeradius-users@lists.cistron.nl
Sent: Thursday, August 25, 2005 10:37 AM
Subject: FreeRadius 1.0.4



Hello
We have planned to replace our cistron radius servers with Freeradius.
We have the following setup :-
1. Users dial in to access their mail and internet or work on an 
application server

2. Users dial in to access a specific server and nothing else.
After they are authenticated , users get a static IP address .
We populate the users file manually and do not create Unix users. Can we
use huntgroups to group, say, mail users and internet users, if they are not
Unix users?
What attribute(s) should I use  to allow the users in Scenario 2 access to 
their server?


The NAS will either be a 3Com TCM or a Cisco access server

On the access server, we can implement access-lists to allow/deny access 
based on the assigned Ip addresses, but we'd prefer using RADIUS 
attributes to do so.


Thank you in advance for your help.
Madhvi 


- 
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html


Backup over 2 tapes

2005-02-22 Thread Madhvi Gokool
Hello

My weekly backup spans two tapes and I change tapes manually.
When I run amdump WeeklySet1, here is an excerpt of the log
DISK planner sa01 /terrasky
DISK planner sa01 /dev/md0
DISK planner sa01 /dev/md2
DISK planner sa01 /usr/local/etc/amanda
SUCCESS dumper sa01 /usr/local/etc/amanda 20050222 0 [sec 0.316 kb 115 kps
363.3 orig-kb 820]
SUCCESS taper sa01 /usr/local/etc/amanda 20050222 0 [sec 1.646 kb 160 kps
97.2 {wr: writers 5 rdwait 0.000 wrwait 0.036 filemark 1.609}]
SUCCESS dumper sa01 /dev/md2 20050222 0 [sec 36.745 kb 152650 kps 4154.2
orig-kb 152650]
SUCCESS dumper sa01 /dev/md0 20050222 0 [sec 56.615 kb 308100 kps 5441.9
orig-kb 308100]
SUCCESS taper sa01 /dev/md2 20050222 0 [sec 68.445 kb 152704 kps 2231.0 {wr:
writers 4772 rdwait 7.577 wrwait 59.433 filemark 1.305}]
SUCCESS taper sa01 /dev/md0 20050222 0 [sec 283.949 kb 308160 kps 1085.3
{wr: writers 9630 rdwait 0.320 wrwait 281.870 filemark 1.427}]
SUCCESS dumper sa01 /terrasky 20050222 0 [sec 1508.775 kb 2412431 kps 1598.9
orig-kb 7055670]
FAIL taper sa01 /terrasky 20050222 0 [out of tape]

When I insert the second tape and run amflush,
/terrasky is not being backed up (in fact, it is not even present on the
holding disk):
[EMAIL PROTECTED] WeeklySet1]$ ls -ali /terrasky/amandaholdingdisk/20050222
total 7236212
1294514 drwx------  2 amanda disk       4096 Feb 23 01:23 .
  65537 drwxr-xr-x  4 amanda root       4096 Feb 23 10:16 ..
1294528 -rw-------  1 amanda disk 1073741824 Feb 22 11:33 sa03._terrasky_tslibrary__gh.0
1294529 -rw-------  1 amanda disk 1073741824 Feb 22 11:39 sa03._terrasky_tslibrary__gh.0.1
1294530 -rw-------  1 amanda disk 1073741824 Feb 22 11:48 sa03._terrasky_tslibrary__gh.0.2
1294531 -rw-------  1 amanda disk  566267831 Feb 22 11:51 sa03._terrasky_tslibrary__gh.0.3
1294526 -rw-------  1 amanda disk 1073741824 Feb 22 11:10 sa03._terrasky_tslibrary__qr.0
1294518 -rw-------  1 amanda disk 1073741824 Feb 22 11:16 sa03._terrasky_tslibrary__qr.0.1
1294519 -rw-------  1 amanda disk 1073741824 Feb 22 11:23 sa03._terrasky_tslibrary__qr.0.2
1294527 -rw-------  1 amanda disk  393890584 Feb 22 11:25 sa03._terrasky_tslibrary__qr.0.3

How can I resolve this problem, as I now have an incomplete weekly backup?
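In case it helps, one possible recovery path (names taken from the report above): since the /terrasky image never reached the holding disk, amflush has nothing to write for it, so the simplest option seems to be forcing a fresh full dump on the next run:

amadmin WeeklySet1 force sa01 /terrasky   # mark the DLE for a full dump
amdump WeeklySet1                         # and run the backup again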
Regds
Madhvi




Exclude may not be working

2004-12-15 Thread Madhvi Gokool
Hi
My amanda backup server has the following filesystem structure:

/
`-- terrasky
    |-- tsfileserver
    |-- tslibrary
    `-- amandaholdingdisk

As part of the daily backup, /terrasky is backed up (level 0) with the exclude
file containing tslibrary and amandaholdingdisk. This works perfectly and
I have around 6 GB of data written to tape.

tslibrary is NFS-mounted from another server, sa03.

During my weekly backup, I keep the /terrasky line in my disklist and
use the same dumptype as for the daily backup.
When I run amdump WeeklySet1 and then amstatus, the estimate for /terrasky
is found to be 11 GB, but when /terrasky is dumped to disk the size increases
to 23 GB. Now, I know that /tslibrary from sa03, approx. 12 GB of data,
was dumped to disk just before sa01:/terrasky.
How do I get around this problem, as it seems that amandaholdingdisk is also
being backed up?
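In case it matters, the weekly dumptype has to point at the same exclude file as the daily one, and the patterns must stay relative to the directory being dumped; a sketch, with file names and paths only illustrative:

define dumptype weekly-terrasky-tar {
    user-tar
    comment "/terrasky without the NFS mount and the holding disk"
    exclude list "/usr/local/etc/amanda/WeeklySet1/exclude.terrasky"
}

# contents of /usr/local/etc/amanda/WeeklySet1/exclude.terrasky
./tslibrary
./amandaholdingdisk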

regds
madhvi





Windows-based RFC868 Time

2004-12-07 Thread Madhvi Gokool
Hi

I am trying to replace a Windows server with a FreeBSD one.
Does anyone know the equivalent UNIX package for a Windows-based RFC868 Time
Protocol server.
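For what it's worth, no extra package should be needed on FreeBSD: the stock inetd implements the RFC 868 time protocol internally, so enabling the built-in service (and sending inetd a HUP) is a sketch of the whole job:

# /etc/inetd.conf
time    stream  tcp     nowait  root    internal
time    dgram   udp     wait    root    internal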

Thanx in advance for your response

M

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


INFO planner sa03 /terrasky/tslibrary_qr 20041206 0 [dumps too big, full dump delayed

2004-12-07 Thread Madhvi Gokool
Hi

I am getting the above error message during my monthly backup.  My holding disk
is 55 GB - big enough to accommodate the dump.
A level 1 backup is done instead of level 0.

Regds
M



Filesystems 12 GB

2004-07-02 Thread Madhvi Gokool
I have a 15 GB filesystem that cannot fit on one tape. So, as per the
disklist example in the Amanda package, I break the filesystem into 12 GB
chunks as follows:

sa03  /terrasky/tslibrary/  {
# all directories that start with [a-f]
user-tar
include ./[a-f]*
} 1
sa03  /terrasky/tslibrary/  {
# all directories that start with [g-z]
user-tar
include ./[g-z]*
}
I get the following error message

[EMAIL PROTECTED] MonthlySet1]$ amcheck -s MonthlySet1
/usr/local/etc/amanda/MonthlySet1/disklist, line 22: duplicate disk
record, previous on line 17
/usr/local/etc/amanda/MonthlySet1/disklist, line 26: dumptype
custom(sa03:/dev/hdc1/tslibrary) already defined on line 17
amcheck: could not load disklist /usr/local/etc/amanda/MonthlySet1/disklist

Can anyone help me determine what could be wrong with this config?  If I
change the second sa03 entry to sa03.terra.terrasky.mu, I get no error
messages.
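In case it helps, a sketch of the workaround I have seen suggested (the disk names are made up): give each record its own disk name and pass the real directory in the optional device field, so the two lines no longer count as duplicates:

sa03  tslibrary-af  /terrasky/tslibrary/  {
      # directories that start with [a-f]
      user-tar
      include ./[a-f]*
      } 1
sa03  tslibrary-gz  /terrasky/tslibrary/  {
      # directories that start with [g-z]
      user-tar
      include ./[g-z]*
      } 1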

Thanx
madhvi





files larger than tape

2004-03-03 Thread Madhvi Gokool
hello
I need to back up an 18 GB directory (/terrasky/tslibrary) which contains 12
subdirectories.
Currently, using the disklist example in the documentation for large files,
my disklist looks like this:
#sa03   /terrasky/tslibrary/ {
# All directories that start with [h-z]
#   user-tar
#   include ./[h-z]*
#   } 1

sa03/terrasky/tslibrary/ {
# All directories that start with [a-g]
user-tar
include ./[a-g]*
} 1

Since I do not have a tape changer, I insert the first tape and run amdump.
Then I manually edit the disklist file to back up the other files onto a
second tape. If the size of the data being backed up exceeds the tape
capacity of 12 GB, the disklist is modified again so that less than 12 GB of
backup data fits on one tape.
I have the following queries:
1. Instead of me calculating the backup data size, can't Amanda do it?  If
it exceeds the tape capacity, can't Amanda break the data up, copy folders
up to 12 GB onto the first tape, request the second tape and copy the rest
of the data to the second tape?

Please note that this directory is being fully backed up weekly and monthly -
without any incremental backups being done.  We want to minimise user
intervention during the backup.

regards
madhvi



amrecover error

2004-02-26 Thread Madhvi Gokool
hello
When running the command below on the client server I get the following
error; details are:
terrabkup# amrecover -C fullbkup -s sa01.terra.terrasky.mu -t
sa01.terra.terrasky.mu -d /dev/nst0
AMRECOVER Version 2.4.3. Contacting server on sa01.terra.terrasky.mu ...
amrecover: Unexpected end of file, check amindexd*debug on server
sa01.terra.terrasky.mu

Contents of amindexd.20040226113359.debug on backup server

[EMAIL PROTECTED] amanda]$ more amindexd.20040226113359.debug
amindexd: debug 1 pid 30290 ruid 512 euid 512: start at Thu Feb 26 11:33:59
2004
amindexd: version 2.4.3
amindexd: time 0.002: gethostbyaddr(10.10.20.40): hostname lookup failed
amindexd: time 0.002: pid 30290 finish time Thu Feb 26 11:33:59 2004

On the client server
terrabkup# more amrecover.20040226114140.debug
amrecover: debug 1 pid 48907 ruid 0 euid 0: start at Thu Feb 26 11:41:40
2004
amrecover: stream_client_privileged: connected to 10.10.20.32.10082
amrecover: stream_client_privileged: our side is 0.0.0.0.575

Hostnames are currently being resolved by the DNS server.  The backup was done
when /etc/hosts was still being used.
I'll be grateful for any help obtained to resolve this problem.
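In case it is useful: the amindexd debug shows the server failing a reverse lookup of the client (gethostbyaddr(10.10.20.40)), so giving it a way to resolve that address again - a reverse DNS record, or a hosts entry as before - plus the matching .amandahosts line should be enough. A sketch, with the client's full name assumed:

# /etc/hosts on the backup server (address from the debug file)
10.10.20.40    terrabkup.terra.terrasky.mu    terrabkup

# ~amanda/.amandahosts on the backup server (amrecover runs as root on the client)
terrabkup.terra.terrasky.mu    root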

regds
M



how to fit backup on 1 tape

2004-02-15 Thread Madhvi Gokool
Hello

I have an 18 GB filesystem that needs to be backed up and do not have a tape
changer.  I have tried the following:

runtapes 2            # number of tapes to be used in a single run of amdump
tpchanger chg-manual  # the tape-changer glue script
tapedev /dev/nst0 # the no-rewind tape device to be used
changerfile /usr/adm/amanda/WeeklySet1/changer

When I run amdump WeeklySet1, it asks me to load tapes repeatedly.
3 files were created in the log directory, as follows:
changer-status-access, changer-status-clean, changer-status-slot, with the
following contents:
changer-status-access
::
2
::
changer-status-clean
::
0
::
changer-status-slot
::
1

Since 2 tapes of 12 GB should be enough for the backup (no hardware
compression, and using tar), I am not sure what I'm doing wrong.
Can anyone help me with this config?
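One thing worth checking (label names below are only illustrative): with chg-manual, every tape that amdump is offered must already carry a label matching labelstr and be present in the tapelist, otherwise it is rejected and the changer keeps prompting for another tape:

amlabel WeeklySet1 WeeklySet101     # label the first tape
# swap tapes, then:
amlabel WeeklySet1 WeeklySet102     # label the second tape
amcheck WeeklySet1                  # should now report an acceptable tape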

Thanx in advance
Madhvi





Holding disk

2004-02-11 Thread Madhvi Gokool
Hello
How can I enable level 0 backups to the holding disk (I have enough hard
disk space)?
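For the archives, a sketch of the two amanda.conf pieces that seem relevant (directory, size and percentage are illustrative): a holding-disk definition plus "holdingdisk yes" in the dumptype gets dumps spooled to disk, and lowering "reserve" is what lets level 0 dumps use the holding disk when no tape is available - by default all of it is kept for incrementals in that case:

holdingdisk hd1 {
    directory "/terrasky/amandaholdingdisk"
    use 50000 Mb                # how much of the disk Amanda may use
}
reserve 20                      # % kept for incrementals in degraded (no-tape) runs

define dumptype high-tar-hold {
    high-tar
    holdingdisk yes
}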
Regards
Madhvi



Re: no incremental backups when using gnu tar

2004-02-11 Thread Madhvi Gokool
[EMAIL PROTECTED] amanda]$ amadmin DailySet1 disklist localhost /etc/mrtg
line 7:
host localhost:
interface default
disk /etc/mrtg:
program GNUTAR
priority 2
dumpcycle 0
maxdumps 1
strategy NOINC
compress NONE
auth BSD
kencrypt NO
holdingdisk YES
record YES
index YES
skip-incr NO
skip-full NO

The skip-incr is set to NO -- should this be changed to YES?
Regds
Madhvi
- Original Message - 
From: Paul Bijnens [EMAIL PROTECTED]
To: Madhvi Gokool [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, February 11, 2004 12:22 PM
Subject: Re: no incremental backups when using gnu tar


 Madhvi Gokool wrote:

  define dumptype high-tar {
  root-tar
  comment partitions dumped with tar
  priority high
  dumpcycle 0
  compress none
  index
  }
  excert from disklist
  localhost /etc/mrtg high-tar
  ns01/home/nfsuser high-tar
 
  Excerpt from a log
  SUCCESS dumper ns01 /home/nfsuser 20031230 2 [sec 0.355 kb 530 kps
1489.2
  orig-kb 530]

 What is the output of:

amadmin YourConfig disklist localhost /etc/mrtg

 Does it say dumpcycle 0 too?
 Are there other constraints, like not enough tapecapacity?

 ps.  using localhost as the name of your computer will bite you when
   you move your tapedrive to another computer (e.g. after a crash)
   and you want to restore files from the previous one.  Use the
   real DNS name.


 -- 
 Paul Bijnens, XplanationTel  +32 16 397.511
 Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
 http://www.xplanation.com/  email:  [EMAIL PROTECTED]
 ***
 * I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
 * quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
 * stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
 * PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
 * kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
 * ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
 ***



no incremental backups when using gnu tar

2004-02-10 Thread Madhvi Gokool
Hello
I do not want to run any incremental backups using GNU tar. How would I do
this?
Excerpt from amanda.conf
define dumptype high-tar {
root-tar
comment partitions dumped with tar
priority high
dumpcycle 0
compress none
index
}
Excerpt from disklist
localhost /etc/mrtg high-tar
ns01/home/nfsuser high-tar

Excerpt from a log
SUCCESS dumper ns01 /home/nfsuser 20031230 2 [sec 0.355 kb 530 kps 1489.2
orig-kb 530]
SUCCESS taper ns01 /home/nfsuser 20031230 2 [sec 2.763 kb 576 kps 208.4 {wr:
writers 18 rdwait 0.000 wrwait 0.176 filemark 2.
586}]
SUCCESS dumper localhost /etc/mrtg 20031230 1 [sec 3.650 kb 1060 kps 290.4
orig-kb 1060]
SUCCESS taper localhost /etc/mrtg 20031230 1 [sec 1.569 kb 1120 kps 713.8
{wr: writers 35 rdwait 0.000 wrwait 0.368 filemark
1.200}]
The backup level of these directories is certainly not 0.
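For reference, a sketch of the dumptype options that usually control this - dumpcycle 0 together with strategy noinc (the setting that shows up as NOINC in the amadmin output earlier in this archive) - rather than skip-incr:

define dumptype high-tar-noinc {
    root-tar
    comment "always level 0, never incremental"
    priority high
    dumpcycle 0          # a full dump every run
    strategy noinc       # tell the planner never to schedule incrementals
    compress none
    index
}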
regds
madhvi



amdump results

2003-03-20 Thread Madhvi Gokool
Hello
I have been testing how many full filesystem dumps of the servers on my
network could be stored on a specific tape.
An excerpt of the dump summary is as follows:
                                      DUMPER STATS                TAPER STATS
HOSTNAME     DISK    L   ORIG-KB   OUT-KB  COMP%  MMM:SS   KB/s  MMM:SS   KB/s
------------------------------------------------------------------------------
backup.terra hda1    1    FAILED  --------------------------------------------
backup.terra hda5    0   1338900  1338944    --     6:01 3712.0   10:17 2168.5
backup.terra hda6    0     18160    18208    --     0:03 5387.0    0:11 1707.4
backuprad.te ad0s1a  0     35220    35264    --     0:06 5669.3    0:22 1598.6
backuprad.te ad0s1e  0       250      288    --     0:04   63.3    0:01  205.3
backuprad.te ad0s1f  0      2960     3008    --     0:04  825.1    0:02 1263.6


FAILURE AND STRANGE DUMP SUMMARY:

  backup.ter hda1 lev 1 FAILED [data timeout]


Upon verifying the contents of the tape after the backup was completed,
backup.ter hda1 does not appear in the list of backup images.

Is it possible to retry the failed filesystem backup and write it to the
same tape?
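In case it helps anyone searching the archives: "data timeout" means the dumper waited longer than the configured dtimeout for data from the client, and as far as I know Amanda never appends to a tape written in an earlier run, so the failed filesystem is simply retried on the next run (onto the next tape). Raising the timeout in amanda.conf is the usual first step; the value below is only illustrative:

dtimeout 3600      # seconds the dumper will wait for data from a client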

Regards
madhvi



amrestore

2003-03-19 Thread Madhvi Gokool
Hello
I am testing the different restore options that are available.  I have
started with the ones documented in the Amanda package.
Could anyone explain the results below?

a) From the backup server, I ran the command below.  The backup image gets
stored in the current working directory.
[EMAIL PROTECTED] restoredir]$ amrestore /dev/st0 osama /



  amrestore:   0: skipping start of tape: date 20030318 label DailySet15
  amrestore:   1: skipping osama.hda5.20030318.0
  amrestore:   2: skipping osama.hda1.20030318.0
  amrestore:   3: restoring osama._.20030318.0
  amrestore:   4: skipping start of tape: date 20030318 label DailySet15
  amrestore:   5: skipping osama.hda5.20030318.0
  amrestore:   6: skipping osama.hda1.20030318.0
  amrestore:   7: restoring osama._.20030318.0
  amrestore:   8: skipping start of tape: date 20030318 label DailySet15
  amrestore:   9: skipping osama.hda5.20030318.0
  amrestore:  10: skipping osama.hda1.20030318.0
  amrestore:  11: restoring osama._.20030318.0
  amrestore:  12: skipping start of tape: date 20030318 label DailySet15
  amrestore:  13: skipping osama.hda5.20030318.0
  amrestore:  14: skipping osama.hda1.20030318.0
  amrestore:  15: restoring osama._.20030318.0
  amrestore:  17: skipping osama.hda5.20030318.0
  amrestore:  18: skipping osama.hda1.20030318.0
  amrestore:  19: restoring osama._.20030318.0
The above carried on until I manually stopped the operation.  Please note
that the size of the backup image increased as well.
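For what it's worth, the looping in (a) looks like what happens when the auto-rewind device /dev/st0 rewinds the tape every time amrestore closes it, so the scan keeps starting over at the label. A sketch of the same restore against the no-rewind node instead:

mt -f /dev/nst0 rewind              # position at the start once
amrestore /dev/nst0 osama /         # then read through the images a single time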

b) From the client machine, I ran the following command:
rsh -n -l amanda backup.terrasky.mu /usr/local/sbin/amrestore -p /dev/st0 \
    osama / | restore -iv -b 2 -f -


and I could restore some files chosen at random.



If I modify the command above as follows:

rsh -n -l amanda backup.terrasky.mu /usr/local/sbin/amrestore /dev/nst0 \
    osama /

the backup image is not stored in the current working directory.



Thanks in advance for comments/explanations.



Madhvi




Error while running amcheck/amdump

2003-03-17 Thread Madhvi Gokool
Hello
I have encountered the error below while testing the amanda backup software.

Amanda Backup Client Hosts Check

ERROR: osama: [could not access hda6 (hda6): No such file or directory]
Client check: 1 host checked in 0.026 seconds, 1 problem found

Excerpt of the disklist on my amanda server:
osama  hda1    always-full
osama  hda2    always-full
osama  hda5    always-full
osama  hda6    always-full
osama  hda7    always-full

Permissions on osama:
[EMAIL PROTECTED] root]# ls -ali /dev/hda6
   8596 brw-rw----  1 root disk  3,   6 Aug 31  2002 /dev/hda6
[EMAIL PROTECTED] root]# ls -ali /dev/hda7
   8597 brw-rw----  1 root disk  3,   7 Aug 31  2002 /dev/hda7
No errors are obtained with hda7.  A level 0 backup was successfully done
on tape.

Grateful if someone could help me solve this problem.

Cheers
madhvi




Fw: Error while running amcheck/amdump

2003-03-17 Thread Madhvi Gokool
The client is running Linux 8.0 (ext3 filesystems), as shown below:
LABEL=/        /          ext3     defaults        1 1
LABEL=/boot    /boot      ext3     defaults        1 2
none           /dev/pts   devpts   gid=5,mode=620  0 0
LABEL=/home    /home      ext3     defaults        1 2
none           /proc      proc     defaults        0 0
none           /dev/shm   tmpfs    defaults        0 0
LABEL=/usr     /usr       ext3     defaults        1 2
LABEL=/var     /var       ext3     defaults        1 2
/dev/hda3      swap       swap     defaults        0 0

- Original Message -
From: Madhvi Gokool [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, March 17, 2003 12:27 PM
Subject: Error while running amcheck/amdump


 Hello
 I have encountered the error below while testing the amanda backup
software.

 Amanda Backup Client Hosts Check
 
 ERROR: osama: [could not access hda6 (hda6): No such file or directory]
 Client check: 1 host checked in 0.026 seconds, 1 problem found

 Excerpt of disklist on my amanda server ;-
 osama  hda1always-full
  osama  hda2always-full
  osama  hda5always-full
  osama  hda6always-full
  osama  hda7always-full

 permissions on osama
 [EMAIL PROTECTED] root]# ls -ali /dev/hda6
8596 brw-rw1 root disk   3,   6 Aug 31  2002 /dev/hda6
 [EMAIL PROTECTED] root]#  ls -ali /dev/hda7
8597 brw-rw1 root disk   3,   7 Aug 31  2002 /dev/hda7
 no errors are obtained with hda7 .  A level 0 backup was successfully done
 on tape.

 Grateful if someone could help me solve this problem.

 Cheers
 madhvi





Re: Fw: Error while running amcheck/amdump

2003-03-17 Thread Madhvi Gokool
The /dev/hda6 mount point is /.
I have added the mount point to the disklist file. The dumptype was set to
tar and then to dump; amcheck detected no errors on the client host.
When amdump was run, a level 0 dump was done on /dev/hda6 (as seen in
/etc/dumpdates).
If I put hda6 back in the disklist, amcheck gives me the same error reported
below.
Why can't /dev/hda6 be backed up?  A manual dump of it on the client works.
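For the record, the form that avoided the error here was listing the mount point rather than the bare partition name; a sketch against the earlier disklist excerpt (dumptype name reused from there):

osama  /      always-full     # mount point instead of the bare "hda6"
osama  hda7   always-full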

Thanx in advance
Madhvi
- Original Message -
From: Paul Bijnens [EMAIL PROTECTED]
To: Madhvi Gokool [EMAIL PROTECTED]
Sent: Monday, March 17, 2003 3:50 PM
Subject: Re: Fw: Error while running amcheck/amdump


 Madhvi Gokool wrote:
  The client is running Linux 8.0 (ext3 filesystems), as shown below:
  LABEL=/        /        ext3    defaults    1 1
  LABEL=/boot    /boot    ext3    defaults    1 2
  LABEL=/home    /home    ext3    defaults    1 2
  LABEL=/usr     /usr     ext3    defaults    1 2
  LABEL=/var     /var     ext3    defaults    1 2
  /dev/hda3      swap     swap    defaults    0 0
 ...
 
 ERROR: osama: [could not access hda6 (hda6): No such file or directory]
 Client check: 1 host checked in 0.026 seconds, 1 problem found
 
 Excerpt of disklist on my amanda server ;-
 osama  hda1always-full
  osama  hda2always-full
  osama  hda5always-full
  osama  hda6always-full
  osama  hda7always-full

 And which one of the above LABEL=... is /dev/hda6 ?
 To simplify things, why don't you just put the mountpoint in
 you disklist?
 (This will not solve your problem, but it will be clearer to us,
 and to you.)

 --
 Paul Bijnens, XplanationTel  +32 16 397.511
 Technologielaan 21 bus 2, B-3001 Leuven, BELGIUMFax  +32 16 397.512
 http://www.xplanation.com/  email:  [EMAIL PROTECTED]
 ***
 * I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
 * quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
 * stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
 * PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
 * kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...*
 * ...  Are you sure?  ...   YES   ...   Phew ...   I'm out  *
 ***




Error when installing amanda client on a RedHat 7.1 server

2003-03-07 Thread Madhvi Gokool
hello
I have encountered the error below when trying to install Amanda on a
server.  Any ideas as to how I can resolve this problem?

[EMAIL PROTECTED] amanda-2.4.3]$
./configure --prefix=/usr/local --without-server --with-user=amanda --with-group=disk --with-config=DailySet1
loading cache ./config.cache
checking host system type... i686-pc-linux-gnuoldld
checking target system type... i686-pc-linux-gnuoldld
checking build system type... i686-pc-linux-gnuoldld
checking cached system tuple... ok
checking for a BSD compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets ${MAKE}... yes
checking for non-rewinding tape device... /dev/null
checking for raw ftape device... /dev/null
checking for Kerberos and Amanda kerberos4 bits... no
checking for gcc... no
checking for cc... no
configure: error: no acceptable cc found in $PATH
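For the archives: configure stops simply because it cannot find a C compiler, so the quickest checks on a Red Hat 7.1 box look something like this (the package file name is only illustrative):

which gcc cc            # is a compiler installed and on the PATH at all?
rpm -q gcc              # is the gcc package installed?
rpm -ivh gcc-2.96-*.rpm # if not, install it from the distribution media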

Thanks in advance

madhvi




config.site

2003-02-24 Thread Madhvi Gokool
Hello

I have configured Amanda on a test server.  If I modify a parameter in the
config.site file, do I have to go through the following steps before the
changes are applied:
run ./configure, make, make install?

Is there a quicker way?

Thanx in advance
M



Re: Permission denied

2003-02-21 Thread Madhvi Gokool
I have got it to work - I had to modify the group of the amanda user in
/etc/passwd.
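For the archives, the same change can be made with usermod instead of editing /etc/passwd by hand (run as root; "disk" is the group that owns /dev/nst0 in the listing quoted below):

usermod -g disk amanda      # make "disk" the amanda user's primary group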

Thanx
Madhvi
- Original Message -
From: Madhvi Gokool [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Joshua Baker-LePain [EMAIL PROTECTED]
Sent: Friday, February 21, 2003 11:00 AM
Subject: Permission denied


 Hello
 After changing the group of the user amanda to disk, when I run amcheck, I
 am still getting the
 Amanda Tape Server Host Check
 -
 ERROR: /dev/nst0: Permission denied
(expecting a new tape)
 NOTE: skipping tape-writable test
 Server check took 0.000 seconds

 The permissions on the tape device are as follows:

 [amanda@backup sbin]$ ls -ali /dev/nst0
  177632 crw-rw-r--  1 root disk  9, 128 Mar 24  2001 /dev/nst0

 Are there any other solutions?

 M




amcheck error

2003-02-20 Thread Madhvi Gokool
Hello
amcheck is giving the error messages below:

ERROR: backup.terrasky.mu: [could not access /dev/hda5 (/usr): Permission
denied]
ERROR: backup.terrasky.mu: [could not access /dev/hda5 (/dev/hda5):
Permission denied]
I have verified the permissions on /usr as follows:
   2 drwxr-xr-x   16 root root 4096 Jan 24 16:09 usr
Logged in as user amanda, I have been able to view a file in one of the /usr
sub-directories.
From my point of view, everyone has read and execute access on the /usr
directory.

Please help.

Madhvi




Re: amcheck error

2003-02-20 Thread Madhvi Gokool
Hello
Got confused between the mount point and the device.
The amanda user is in the operator group and did not have permission on the
device.
I have changed the permissions on the device /dev/hda5 as follows:
chmod o+r /dev/hda5
I did not get any errors on the client host when I ran amcheck.
Results are:
[amanda@backup sbin]$  ./amcheck DailySet1
Amanda Tape Server Host Check
-
ERROR: /dev/nst0: Permission denied
   (expecting a new tape)
NOTE: skipping tape-writable test
Server check took 0.000 seconds

Amanda Backup Client Hosts Check

Client check: 1 host checked in 0.026 seconds, 0 problems found

I have inserted a tape containing a backup that can be overwritten.
Do you think that I need to insert a blank tape for the error on the Tape
Server host to be resolved?

Regards
madhvi
- Original Message -
From: Joshua Baker-LePain [EMAIL PROTECTED]
To: Madhvi Gokool [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, February 20, 2003 8:03 PM
Subject: Re: amcheck error


 On Thu, 20 Feb 2003 at 7:11pm, Madhvi Gokool wrote

  ERROR: backup.terrasky.mu: [could not access /dev/hda5 (/usr):
Permission
  denied]
  ERROR: backup.terrasky.mu: [could not access /dev/hda5 (/dev/hda5):
  Permission denied]
  I have verified the permissions on the /usr as follows: -
 2 drwxr-xr-x   16 root root 4096 Jan 24 16:09 usr
  Logged in as user amanda, I have been able to view a file in one of the
/usr
  sub-directories.
  From my point of view, everyone has read and execute access on the /usr
  directory.

 What you're interested in are the permissions on the device, not the mount
 point.  What does 'ls -l /dev/hda5' say, and what group is the amanda user
 in?

 --
 Joshua Baker-LePain
 Department of Biomedical Engineering
 Duke University




Permission denied

2003-02-20 Thread Madhvi Gokool
Hello
After changing the group of the user amanda to disk, when I run amcheck, I
am still getting the
Amanda Tape Server Host Check
-
ERROR: /dev/nst0: Permission denied
   (expecting a new tape)
NOTE: skipping tape-writable test
Server check took 0.000 seconds

The permissions on the tape device are as follows:

[amanda@backup sbin]$ ls -ali /dev/nst0
 177632 crw-rw-r--  1 root disk  9, 128 Mar 24  2001 /dev/nst0

Are there any other solutions?

M




Re: Error after installing/configuring amanda

2003-02-18 Thread Madhvi Gokool
Hello
I added /usr/local/sbin to the PATH variable of the user amanda.  I can
successfully execute the Amanda commands.
I ran amdump without running amlabel.  I want to label the tape now -
running the command gives the following error:
[amanda@backup admin]$ amlabel -f DailySet1 DailySet10
rewinding
amlabel: tape_rewind: tape open: /dev/nst0: Permission denied

If I try to remove the tape from the tape database using the amrmtape
command, I do not know the label that should be given:
amrmtape -v DailySet1 label
How can I start from scratch using the same tape?
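For anyone hitting this later, a sketch of the start-from-scratch sequence - the config path and the DailySet10 label are the ones from this thread, and the Permission denied on /dev/nst0 has to be fixed first (see the group change elsewhere in this archive). If the tape was never labelled it will not be in the tapelist at all, in which case the amrmtape step can simply be skipped:

cat /usr/local/etc/amanda/DailySet1/tapelist   # the labels Amanda already knows about
amrmtape DailySet1 DailySet10                  # forget the old entry for that label
amlabel -f DailySet1 DailySet10                # relabel the tape from scratch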

Regards
madhvi
- Original Message -
From: Gene Heskett [EMAIL PROTECTED]
To: Madhvi Gokool [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Monday, February 17, 2003 2:20 PM
Subject: Re: Error after installing/configuring amanda


 On Monday 17 February 2003 01:57 am, Madhvi Gokool wrote:
 Hello
 I have encountered the following error when trying to run the
  following command or any other amanda program on the tape server
  host [amanda@backup sbin]$ amcheck -l
 bash: amcheck: command not found

 On linux, possibly others, the syntax would have been ./amcheck in
 order to tell the shell to look here rather than in one of the dirs
 in the $PATH environment variable.

 I'd guess that the best way is to add /usr/local/sbin to the $PATH
 that the user amanda gets when you are the user amanda.

 The permissions of the am* commands in /usr/local/sbin are as follows:
 34807 -rwxr-xr-x  1 amanda operator  442325 Feb  4 16:30 amadmin
 34808 -rwxr-xr-x  1 amanda operator  542304 Feb  4 16:30 amcheck
 34814 -rwxr-xr-x  1 amanda operator    1811 Feb  4 16:30 amcheckdb
 34846 -rwsr-x---  1 root   root      542304 Feb 11 15:22 amcheck_old
 34815 -rwxr-xr-x  1 amanda operator    3936 Feb  4 16:30 amcleanup
 34806 -rwxr-xr-x  1 amanda operator  218270 Feb  4 16:30 amdd
 34816 -rwxr-xr-x  1 amanda operator    3744 Feb  4 16:30 amdump
 34809 -rwxr-xr-x  1 amanda operator  381176 Feb  4 16:30 amflush
 34810 -rwxr-xr-x  1 amanda operator  197506 Feb  4 16:30 amgetconf
 34811 -rwxr-xr-x  1 amanda operator  413403 Feb  4 16:30 amlabel
 34805 -rwxr-xr-x  1 amanda operator  218366 Feb  4 16:30 ammt
 34817 -rwxr-xr-x  1 amanda operator    4427 Feb  4 16:30 amoverview
 34824 -rwxr-x---  1 amanda operator  379886 Feb  4 16:30 amrecover
 34813 -rwxr-xr-x  1 amanda operator  402562 Feb  4 16:30 amreport
 34823 -rwxr-xr-x  1 amanda operator  312351 Feb  4 16:30 amrestore
 34818 -rwxr-xr-x  1 amanda operator    6607 Feb  4 16:30 amrmtape
 34822 -rwxr-xr-x  1 amanda operator   29050 Feb  4 16:30 amstatus
 34812 -rwxr-xr-x  1 amanda operator  422231 Feb  4 16:30 amtape
 34819 -rwxr-xr-x  1 amanda operator    6888 Feb  4 16:30 amtoc
 34820 -rwxr-xr-x  1 amanda operator   12053 Feb  4 16:30 amverify
 34821 -rwxr-xr-x  1 amanda operator    1123 Feb  4 16:30 amverifyrun
 
 Where have I gone wrong ???

 Missed the dot-slash in front of the command to anchor it to the
 current directory you are cd'd to.  A common mistake :)

 Thanx in asdvance
 Madhvi

 --
 Cheers, Gene
 AMD K6-III@500mhz 320M
 Athlon1600XP@1400mhz  512M
 99.23% setiathome rank, not too shabby for a WV hillbilly



