Dear Frank
We have taken note of the issue you raised. The AFRINIC team is
investigating the matter and will provide feedback in due course.
Regards
Madhvi
On 24/04/2024 08:54, Frank Habicht wrote:
Hi AfriNIC NOC,
in DNS for whois.afrinic.net IPs 196.192.115.21 and
2001:42d0:2:601
Branch: refs/heads/master
Home: https://github.com/jenkinsci/aws-codepipeline-plugin
Commit: 661198ad6f3f0aa2edb5b56c315b1f7df471e16f
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/661198ad6f3f0aa2edb5b56c315b1f7df471e16f
Author: Madhvi Dua
Date: 2021-01-27 (Wed, 27 Jan 2021)
Branch: refs/tags/aws-codepipeline-0.44
Home: https://github.com/jenkinsci/aws-codepipeline-plugin
--
You received this message because you are subscribed to the Google Groups
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
Branch: refs/heads/master
Home: https://github.com/jenkinsci/aws-codepipeline-plugin
Commit: 72f475a5c946a35e0777147598ec88abf428c058
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/72f475a5c946a35e0777147598ec88abf428c058
Author: Madhvi Dua
Date: 2021-01-27 (Wed, 27 Jan 2021)
ci/aws-codepipeline-plugin/commit/2f8269a6ab1b7b276fe6a6bf7a25bc2f105b41f7
Author: Madhvi Dua
Date: 2021-01-06 (Wed, 06 Jan 2021)
Changed paths:
M src/main/java/com/amazonaws/codepipeline/jenkinsplugin/AWSCodePipelineSCM.java
Log Message:
---
Add HKG/AP-EAST-1 as a new
Branch: refs/heads/master
Home: https://github.com/jenkinsci/aws-codepipeline-plugin
Commit: cd7a68e47b4b2ba0827d6ea58266e634a3b0627c
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/cd7a68e47b4b2ba0827d6ea58266e634a3b0627c
Author: Madhvi Dua
Date: 2021-01-08 (Fri, 08 Jan 2021)
Branch: refs/tags/aws-codepipeline-0.43
Home: https://github.com/jenkinsci/aws-codepipeline-plugin
org type means .
Logically, it shall happen as follows :-
1. The org-types definition will be provided to the database working group
2. The whois database will be updated with the definitions so that they
are then visible through the query whois -v organisation
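The verbose organisation query mentioned in step 2 can be issued from any RIPE-style whois client; a sketch, not from the thread (the `--` is needed so the client passes `-v` through to the server rather than parsing it itself):

```shell
# Ask the AFRINIC whois server for the verbose template of the
# organisation object class, where the org-type values and their
# definitions would become visible once the database is updated.
whois -h whois.afrinic.net -- "-v organisation"
```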
Regards
Madhvi
On 04/10/2020 9
nk its community for taking the time and effort
to make the propositions.
We will assess the feasibility of these propositions and will come back
with a plan next week. We will keep you informed.
Kind Regards
Madhvi Gokool
-
Dear members of the comm
-----Original Message-----
From: ffmpeg-user [mailto:ffmpeg-user-boun...@ffmpeg.org] On Behalf Of Paul B
Mahol
Sent: Thursday, January 03, 2019 2:22 PM
To: FFmpeg user questions
Subject: Re: [FFmpeg-user] FFmpeg Configure error, Help need
On 1/3/19, Dinesh Gupta wrote:
> whatever that
be used for the above-mentioned page example?
Any help would be great!
Thanks,
Madhvi
,
Madhvi
in advance,
Madhvi
. If
not, then how can the required format be generated?
With Regards
Madhvi Gupta
*(Senior Software Engineer)*
On Mon, Feb 27, 2017 at 12:47 PM, Madhav Sharan <msha...@usc.edu> wrote:
> Hi - Can you ensure that your training data is in the format mentioned in
> the wiki? [0]
>
Please let me know if anyone has any idea about this.
With Regards
Madhvi Gupta
*(Senior Software Engineer)*
On Tue, Feb 21, 2017 at 10:51 AM, Madhvi Gupta <mgmahi@gmail.com> wrote:
> Hi Joern,
>
> Training data generated from reuters dataset is in the following format.
>
but the training data prepared from the Reuters dataset is in the above-said
format. So please tell me how training data can be generated in the
required format, or how the existing training data format can be used for
generating models.
With Regards
Madhvi Gupta
*(Senior Software Engineer)*
On Mon, Feb 20
://www.clips.uantwerpen.be/conll2003/ner/000README
So can you please help me create training data out of that corpus
and use it to create named entity detection models?
With Regards
Madhvi Gupta
*(Senior Software Engineer)*
On Mon, Feb 20, 2017 at 1:00 AM, Joern Kottmann <kottm...@gmail.com>
Hi All,
I have got the Reuters data from NIST. Now I want to generate the training data
from that to create a model for detecting named entities. Can anyone tell
me how the models can be generated from that?
--
With Regards
Madhvi Gupta
*(Senior Software Engineer)*
Thank you so much Lewis.
On 8/19/16, 4:53 AM, "lewis john mcgibbney" <lewi...@apache.org> wrote:
>Evening Madhvi,
>I will set this up and debug a clean. I'll report over on
>https://issues.apache.org/jira/browse/NUTCH-2269
>
>Thank you for reporting.
>Lewis
Hi,
I wanted to find out how to correct the issue below and will appreciate any
help.
I am trying to upgrade to Nutch 1.12. I am using Solr 5.3.1. The reasons I am
upgrading are:
1: HTTPS crawling
2: Boilerplate (Canola) extraction through Tika
The only problem so far I am having is an
s kind of related to what I need.
On 8/5/16, 2:18 PM, "Arora, Madhvi" <mar...@automationdirect.com> wrote:
>Thank you very much!
>
>
>
>
>On 8/5/16, 2:13 PM, "Markus Jelsma" <markus.jel...@openindex.io> wrote:
>
>>I am not sure which versio
Dear Joel
Please let us know off-list as to who these Telcos are.
Regards
Madhvi
On 15/08/2016 9:38 AM, Joel Gogwim wrote:
>
> It appears that some of the African Telcos acquired IP resources not
> from AfriNIC and assigned such to their African customers. This
> implies that t
Thank you very much!
On 8/5/16, 2:13 PM, "Markus Jelsma" <markus.jel...@openindex.io> wrote:
>I am not sure in which version it was added; you'd have to check CHANGES.txt, but
>upgrading is usually a good idea and very simple.
>Markus
>
>
>
>-Origin
Markus, so to crawl HTTPS and HTTP URLs successfully we just need to switch to a
newer version of Nutch, i.e. higher than Nutch 1.10?
On 8/5/16, 12:47 PM, "Markus Jelsma" <markus.jel...@openindex.io> wrote:
>Hello - see inline.
>Markus
>
>-Original message
will
need to delete the old HTTP URLs from the Solr index, re-crawl, and index the URLs
that need to be switched to HTTPS.
I will be grateful for any guidance or suggestions.
Thanks,
Madhvi
[
https://issues.apache.org/jira/browse/SPARK-10828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907725#comment-14907725
]
madhvi gupta commented on SPARK-10828:
--
Hey, I have posted this question on the Spark mailing list
madhvi gupta created SPARK-10828:
Summary: Can we use the Accumulo data RDD created from Java in
Spark, in SparkR? Is there any other way to proceed with it to create an RRDD from
a source RDD other than a text RDD? Or to use any other format of data
Thanks Josh. It really worked for me.
On Wednesday 17 June 2015 08:43 PM, Josh Elser wrote:
Madhvi,
Understood. A few more questions..
How are you passing these IDs to the batch scanner? Are you providing
individual Ranges for each ID (e.g. `new Range(new Key(row1, ,
id1), true, new Key
to batch Scanner.
Are you trying to sum across all rows that you queried?
Yes, we need to sum a particular column qualifier across the row IDs
passed to the batch scanner. How can the summation be done across the rows, as
you said you can put a second iterator above the first?
Thanks
Madhvi
Hi Josh,
Sorry, my company policy doesn't allow me to share the full source. What we
are trying to do is summing over a unique field stored in the column
qualifier for IDs passed to the batch scanner. Can you suggest how it can be
done in Accumulo?
Thanks
Madhvi
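Setting the Accumulo client API aside (the thread never shows it), the summation being asked about can be sketched with plain collections; the class QualifierSum, its Entry record, and the sample rows below are hypothetical names for illustration, not code from the thread:

```java
import java.util.List;
import java.util.Set;

public class QualifierSum {
    // One scanned key/value pair: row id, column family, column qualifier, numeric value.
    record Entry(String row, String family, String qualifier, long value) {}

    // Sum the values of a single column qualifier across the given row ids,
    // mimicking what a combining iterator above the batch scanner would do.
    static long sumForQualifier(List<Entry> scanned, Set<String> rowIds, String qualifier) {
        long sum = 0;
        for (Entry e : scanned) {
            if (rowIds.contains(e.row()) && e.qualifier().equals(qualifier)) {
                sum += e.value();
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Entry> scanned = List.of(
                new Entry("row1", "cf", "count", 10),
                new Entry("row2", "cf", "count", 20),
                new Entry("row2", "cf", "other", 5),
                new Entry("row3", "cf", "count", 7));
        // Only row1 and row2 were handed to the (simulated) batch scanner.
        System.out.println(sumForQualifier(scanned, Set.of("row1", "row2"), "count")); // prints 30
    }
}
```

In real Accumulo the same effect is usually achieved server-side, e.g. with a SummingCombiner scoped to that qualifier, rather than summing client-side.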
On Wednesday 17 June 2015 10:32 AM, Josh
of
batchscanner.
And how to enable a remote debugger in Accumulo?
Thanks
Madhvi
On Monday 15 June 2015 09:21 PM, Josh Elser wrote:
It's hard to remotely debug an iterator, especially when we don't know
what it's doing. If you can post the code, that would help
tremendously. Instead of dumping values to a text
?
Thanks
Madhvi
On Wednesday 27 May 2015 05:38 PM, Andrew Wells wrote:
to implement that iterator.
It looks like you will only need to override replaceColumnFamily,
and this looks to return the new ColumnFamily via the argument. So
manipulate the Text object provided.
On Wed, May 27, 2015 at 8:06 AM
Hi,
you have to specify the worker nodes of the Spark cluster at the time of
configuration of the cluster.
Thanks
Madhvi
On Thursday 30 April 2015 01:30 PM, xiaohe lan wrote:
Hi Madhvi,
If I only install spark on one node, and use spark-submit to run an
application, which are the Worker
Hi,
Follow the instructions to install on the following link:
http://mbonaci.github.io/mbo-spark/
You don't need to install Spark on every node. Just install it on one node,
or you can install it on a remote system and make a Spark cluster.
Thanks
Madhvi
On Thursday 30 April 2015 09:31 AM
Thank you, Deepak. It worked.
Madhvi
On Tuesday 28 April 2015 01:39 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote:
val conf = new SparkConf()
  .setAppName(detail)
  .set("spark.serializer",
    "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.mb",
    arguments.get("buffersize").get)
  .set("spark.kryoserializer.buffer.max.mb",
    arguments.get("maxbuffersize").get)
  .set("spark.driver.maxResultSize",
    arguments.get("maxResultSize").get)
  .registerKryoClasses(Array(classOf[org.apache.accumulo.core.data.Key]))
Can you try this ?
On Tue, Apr 28, 2015 at 11:11 AM, madhvi madhvi.gu...@orkash.com
can be used
with spark
Thanks
Madhvi
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
:19 PM, Akhil Das
ak...@sigmoidanalytics.com wrote:
Change your import from mapred to mapreduce. like :
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
Thanks
Best Regards
On Wed, Apr 22, 2015 at 2:42 PM, madhvi
InputFormat<K,V>
I am using the following import statements:
import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
I am not seeing what the problem in this is.
Thanks
Madhvi
On Tuesday 21 April 2015 12:12 PM, Akhil Das wrote:
Your spark master should be spark://swetha:7077 :)
Thanks
Best Regards
On Mon, Apr 20, 2015 at 2:44 PM, madhvi madhvi.gu...@orkash.com wrote:
PFA screenshot of my cluster UI
Thanks
On Monday 20
Hi all,
Is there anything to integrate Spark with Accumulo, or make Spark
process Accumulo data?
Thanks
Madhvi Gupta
.
On Mon, Apr 20, 2015 at 3:05 PM, madhvi madhvi.gu...@orkash.com wrote:
On Monday 20 April 2015 02:52 PM, SURAJ SHETH wrote:
Hi Madhvi,
I think the memory requested by your job, i.e. 2.0 GB is higher
than what is available.
Please request
On Monday 20 April 2015 02:52 PM, SURAJ SHETH wrote:
Hi Madhvi,
I think the memory requested by your job, i.e. 2.0 GB is higher than
what is available.
Please request for 256 MB explicitly while creating Spark Context and
try again.
Thanks and Regards,
Suraj Sheth
Tried the same but still
Hi,
I did what you told me, but now it is giving the following error:
ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler:
All masters are unresponsive! Giving up.
On UI it is showing that master is working
Thanks
Madhvi
On Monday 20 April 2015 12:28 PM, Akhil Das wrote
a screenshot of your cluster UI and the code snippet that
you are trying to run?
Thanks
Best Regards
On Mon, Apr 20, 2015 at 12:37 PM, madhvi madhvi.gu...@orkash.com wrote:
Hi,
I did what you told me, but now it is giving the following error:
ERROR
=2
export SPARK_EXECUTOR_MEMORY=1g
I am running the Spark standalone cluster. In the cluster UI it is showing
all workers with allocated resources, but still it's not working.
What other configurations need to be changed?
Thanks
Madhvi Gupta
to do, somehow?
Madhvi
in accumulo, in HDFS.
Lucene queries are working fine over that, but I want those
indexes to be searched via Accumulo, meaning the Lucene queries should run
through Accumulo.
Madhvi Gupta
-
To unsubscribe, e-mail: java-user
madhvi gupta created LUCENE-6387:
Summary: Storing lucene indexes in HDFS
Key: LUCENE-6387
URL: https://issues.apache.org/jira/browse/LUCENE-6387
Project: Lucene - Core
Issue Type: Test
Hi,
Can anyone tell me the steps to install R 3.1.0 and RStudio on Ubuntu
12.04?
Thanks
Madhvi
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting
Hi,
How do I install RStudio after downloading the Debian package?
Madhvi
On Tuesday 14 October 2014 12:09 PM, Pascal Oettli wrote:
Please reply to the list, not only to me.
RStudio is for Ubuntu 10.04+ (please note the +).
About R 3.1.0, you probably will have to compile from the source.
Regards
Hi,
Thanks for your help. I got it installed.
Madhvi
On Tuesday 14 October 2014 12:50 PM, Pascal Oettli wrote:
The support for RStudio is located here: https://support.rstudio.com
Regards,
Pascal
On Tue, Oct 14, 2014 at 4:08 PM, madhvi madhvi.gu...@orkash.com wrote:
Hi,
How to install
I checked it is in the starting of the file only. And I have not added
anything to formtest.html.
On Wed, Mar 21, 2012 at 1:33 PM, Anand Chitipothu anandol...@gmail.com wrote:
On 21 March 2012 12:49 PM, Madhvi gupta madhvi1...@iiitd.ac.in
wrote:
Sorry for bothering you but I am
. Please
tell me if I am on the right path.
Regards,
Madhvi
On Wed, Mar 21, 2012 at 3:02 PM, Madhvi gupta madhvi1...@iiitd.ac.in wrote:
Thanx a lot! That was a TextEdit problem: it was introducing some formatting
of its own.
Regards,
Madhvi
On Wed, Mar 21, 2012 at 2:53 PM, Anand Chitipothu anandol
,
Madhvi
--
You received this message because you are subscribed to the Google Groups
web.py group.
To post to this group, send email to webpy@googlegroups.com.
To unsubscribe from this group, send email to
webpy+unsubscr...@googlegroups.com.
For more options, visit this group at
http
.
Regds
Madhvi
-
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html
- Original Message -
From: Madhvi Gokool [EMAIL PROTECTED]
To: FreeRadius users mailing list freeradius-users@lists.freeradius.org
Sent: Tuesday, October 25, 2005 9:30 AM
Subject: Billing and provisioning
Hello
Here is the scenario we want to implement :-
1. User pays for 10hrs of internet
Hello
Here is the scenario we want to implement :-
1. User pays for 10hrs of internet access. We set the Max-Monthly-Session
to 36000.
We want to verify the number of seconds used and left for each user on a
daily basis. The results should be mailed to the ISP admin and a mail sent
to
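A monthly cap like this is what FreeRADIUS's rlm_counter module was built for; a sketch of the configuration, assuming the stock module (the counter/check attribute names follow its documented example, and the user name below is hypothetical):

```
# radiusd.conf: a counter that accumulates Acct-Session-Time per user
# and resets monthly; Max-Monthly-Session is the per-user check item.
counter monthly {
        filename = ${raddbdir}/db.counter
        key = User-Name
        count-attribute = Acct-Session-Time
        reset = monthly
        counter-name = Monthly-Session-Time
        check-name = Max-Monthly-Session
        cache-size = 5000
}

# users file: cap this user at 10 hours (36000 seconds) per month.
bob     Max-Monthly-Session := 36000
```

The daily used/left report would still need an external script that reads db.counter (or the accounting detail/SQL logs) and mails the admin; rlm_counter only enforces the cap.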
: Wednesday, October 12, 2005 6:33 PM
Subject: Re: radwho
Madhvi Gokool [EMAIL PROTECTED] wrote:
When first testing the FreeRADIUS server, radwho still showed users as
connected when in fact they had disconnected.
If the server doesn't receive an accounting stop message, it doesn't
know they've
.
Regards
Madhvi
Are there any scripts that will interact with FreeRADIUS and send these
users an email or SMS?
3. Does dialup admin work with plain text users file ?
Thanx in advance for your help.
Regds
Madhvi
be a 3Com TCM or a Cisco access server
On the access server, we can implement access-lists to allow/deny access
based on the assigned Ip addresses, but we'd prefer using RADIUS
attributes to do so.
Thank you in advance for your help.
Madhvi
- Original Message -
From: Madhvi Gokool [EMAIL PROTECTED]
To: freeradius-users@lists.cistron.nl
Sent: Thursday, August 25, 2005 10:37 AM
Subject: FreeRadius 1.0.4
Hello
We have planned to replace our Cistron RADIUS servers with FreeRADIUS.
We have the following setup :-
1. Users
._terrasky_tslibrary__qr.0.2
1294527 -rw-------  1 amanda disk 393890584 Feb 22 11:25 sa03._terrasky_tslibrary__qr.0.3
How can I resolve this problem as I now have an incomplete weekly backup.
Regds
Madhvi
GB of data
was just dumped to disk before sa01:/terrasky.
How do I bypass this problem, as it seems that the amanda holding disk is also
being backed up?
regds
madhvi
Hi
I am trying to replace a Windows server with a FreeBSD one .
Does anyone know the equivalent UNIX package for a Windows-based RFC 868 Time
Protocol server?
Thanx in advance for your response
M
___
[EMAIL PROTECTED] mailing list
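One possible answer (an assumption, not from the thread): stock inetd on FreeBSD already implements RFC 868 internally, so no extra package is needed; a sketch of the /etc/inetd.conf lines to enable it:

```
# /etc/inetd.conf: enable the built-in RFC 868 time service
time    stream  tcp     nowait  root    internal
time    dgram   udp     wait    root    internal
```

After editing, send inetd a HUP signal so it rereads the file.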
Hi
I am getting the above error message during my monthly backup. My holding disk
is 55 GB - big enough to accommodate the dump.
A level 1 backup is done instead of level 0.
Regds
M
the second sa03 entry to sa03.terra.terrasky.mu and I get no error
messages.
Thanx
madhvi
want to minimise user
intervention during the backup.
regards
madhvi
hello
When running the command below on the client server I get the following error;
the details are :-
terrabkup# amrecover -C fullbkup -s sa01.terra.terrasky.mu -t
sa01.terra.terrasky.mu -d /dev/nst0
AMRECOVER Version 2.4.3. Contacting server on sa01.terra.terrasky.mu ...
amrecover: Unexpected end of
???
Thanx in advance
Madhvi
Hello
How can I enable level 0 backups to the holding disk (I have enough hard
disk space)?
Regards
Madhvi
kencrypt NO
holdingdisk YES
record YES
index YES
skip-incr NO
skip-full NO
The skip-incr is set to NO -- should this be changed to YES?
Regds
Madhvi
- Original Message -
From: Paul Bijnens [EMAIL PROTECTED]
To: Madhvi Gokool [EMAIL PROTECTED]
Cc
localhost /etc/mrtg 20031230 1 [sec 3.650 kb 1060 kps 290.4 orig-kb 1060]
SUCCESS taper localhost /etc/mrtg 20031230 1 [sec 1.569 kb 1120 kps 713.8 {wr: writers 35 rdwait 0.000 wrwait 0.368 filemark 1.200}]
The backup level of these directories is certainly 0.
regds
madhvi
FAILED [data timeout]
Upon verifying the contents of the tape after the backup was completed,
backup.ter hda1 does not figure in the list of backup images.
Is it possible to retry the failed filesystem backup and write it on the
same tape?
Regards
madhvi
at random.
If I modify the command above as follows :-
rsh -n -l amanda backup.terrasky.mu /usr/local/sbin/amrestore /dev/nst0 osama /
the backup image is not stored in the current working directory.
thanks in advance for comments/explanations.
Madhvi
if someone could help me solve this problem.
Cheers
madhvi
2
LABEL=/var   /var    ext3    defaults        1 2
/dev/hda3    swap    swap    defaults        0 0
- Original Message -
From: Madhvi Gokool [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, March 17, 2003 12:27 PM
Subject: Error
, amcheck gives me the same error reported
below.
Why can't /dev/hda6 be backed up? A manual dump of the latter on the
client works.
Thanx in advance
Madhvi
- Original Message -
From: Paul Bijnens [EMAIL PROTECTED]
To: Madhvi Gokool [EMAIL PROTECTED]
Sent: Monday, March 17, 2003 3:50 PM
: no acceptable cc found in $PATH
Thanks in advance
madhvi
Hello
I have configured Amanda on a test server. If I modify a parameter in the
config.site file, do I have to go through the following steps before the
changes are applied :-
run ./configure, make, make install?
Is there a quicker way?
Thanx in advance
M
I have got it to work --- I had to modify the group of the amanda user in
/etc/passwd.
Thanx
Madhvi
- Original Message -
From: Madhvi Gokool [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Joshua Baker-LePain [EMAIL PROTECTED]
Sent: Friday, February 21, 2003 11:00 AM
Subject: Permission denied
16 root root 4096 Jan 24 16:09 usr
Logged in as user amanda, I have been able to view a file in one of the /usr
sub-directories.
From my point of view, everyone has read and execute access on the /usr
directory.
Please help.
Madhvi
Client check: 1 host checked in 0.026 seconds, 0 problems found
I have inserted a tape containing a backup that can be overwritten.
Do you think that I need to insert a blank tape for the error on the Tape
Server host to be resolved?
Regards
madhvi
- Original Message -
From: Joshua
Hello
After changing the group of the user amanda to disk, when I run amcheck, I
am still getting the
Amanda Tape Server Host Check
-
ERROR: /dev/nst0: Permission denied
(expecting a new tape)
NOTE: skipping tape-writable test
Server check took 0.000 seconds
rewinding
amlabel: tape_rewind: tape open: /dev/nst0: Permission denied
If I try to remove the tape from the tape database using the amrmtape
command, I do not know the label that should be given:
amrmtape -v DailySet1 label
How can I start from scratch using the same tape?
Regards
madhvi
1 - 100 of 126 matches