Dear Frank,
We have taken note of the issue you raised. The AFRINIC team is
investigating the matter and will provide feedback in due course.
Regards,
Madhvi
On 24/04/2024 08:54, Frank Habicht wrote:
Hi AfriNIC NOC,
in DNS for whois.afrinic.net IPs 196.192.115.21 and
2001:42d0:2:601
Branch: refs/heads/master
Home: https://github.com/jenkinsci/aws-codepipeline-plugin
Commit: 661198ad6f3f0aa2edb5b56c315b1f7df471e16f
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/661198ad6f3f0aa2edb5b56c315b1f7df471e16f
Author: Madhvi Dua
Date: 2021-01-27 (Wed, 27 Jan 2021)
Branch: refs/tags/aws-codepipeline-0.44
Home: https://github.com/jenkinsci/aws-codepipeline-plugin
--
You received this message because you are subscribed to the Google Groups
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to jenki
Branch: refs/heads/master
Home: https://github.com/jenkinsci/aws-codepipeline-plugin
Commit: 72f475a5c946a35e0777147598ec88abf428c058
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/72f475a5c946a35e0777147598ec88abf428c058
Author: Madhvi Dua
Date: 2021-01-27 (Wed, 27 Jan 2021)
ci/aws-codepipeline-plugin/commit/2f8269a6ab1b7b276fe6a6bf7a25bc2f105b41f7
Author: Madhvi Dua
Date: 2021-01-06 (Wed, 06 Jan 2021)
Changed paths:
M
src/main/java/com/amazonaws/codepipeline/jenkinsplugin/AWSCodePipelineSCM.java
Log Message:
---
Add HKG/AP-EAST-1 as a new
Branch: refs/heads/master
Home: https://github.com/jenkinsci/aws-codepipeline-plugin
Commit: cd7a68e47b4b2ba0827d6ea58266e634a3b0627c
https://github.com/jenkinsci/aws-codepipeline-plugin/commit/cd7a68e47b4b2ba0827d6ea58266e634a3b0627c
Author: Madhvi Dua
Date: 2021-01-08 (Fri, 08 Jan 2021)
Branch: refs/tags/aws-codepipeline-0.43
Home: https://github.com/jenkinsci/aws-codepipeline-plugin
org type means .
Logically, it shall happen as follows:
1. The org-types definition will be provided to the database working group.
2. The whois database will be updated with the definitions so that they
are then visible through the query whois -v organisation.
Regards
Madhvi
On 04/10/2020 9
nk its community for taking the time and effort
to make the propositions.
We will assess the feasibility of these propositions and will come back
with a plan next week. We will keep you informed.
Kind Regards
Madhvi Gokool
-
Dear members of the community
-Original Message-
From: ffmpeg-user [mailto:ffmpeg-user-boun...@ffmpeg.org] On Behalf Of Paul B
Mahol
Sent: Thursday, January 03, 2019 2:22 PM
To: FFmpeg user questions
Subject: Re: [FFmpeg-user] FFmpeg Configure error, Help need
On 1/3/19, Dinesh Gupta wrote:
> whatever that mean
be used for the above mentioned page example?
Any help will be great!
Thanks,
Madhvi
,
Madhvi
in advance,
Madhvi
Dear List Members,
AFRINIC has entered Phase 1 of the soft landing policy. The
official communiqué can be viewed at the URL below:
http://www.afrinic.net/en/library/news/2053-afrinic-enters-ipv4-exhaustion-phase-1
Kind Regards
Madhvi Gokool
Registration Services Manager
. If
not, how can the required format be generated?
With Regards
Madhvi Gupta
*(Senior Software Engineer)*
On Mon, Feb 27, 2017 at 12:47 PM, Madhav Sharan wrote:
> Hi - Can you ensure that your training data is in the format mentioned in
> the wiki? [0]
>
> Like mentioned in w
Please let me know if anyone has any idea about this.
With Regards
Madhvi Gupta
*(Senior Software Engineer)*
On Tue, Feb 21, 2017 at 10:51 AM, Madhvi Gupta wrote:
> Hi Joern,
>
> Training data generated from reuters dataset is in the following format.
> It has generated three fil
ences
but the training data prepared from the Reuters dataset is in the above-said
format. So please tell me how training data can be generated in the
required format, or how the existing training data format can be used for
generating models.
With Regards
Madhvi Gupta
*(Senior Software Engineer)*
On Mon, F
://www.clips.uantwerpen.be/conll2003/ner/000README
So can you please help me create training data out of that corpus
and use it to create named entity detection models?
With Regards
Madhvi Gupta
*(Senior Software Engineer)*
On Mon, Feb 20, 2017 at 1:00 AM, Joern Kottmann wrote:
> Hello,
>
Hi All,
I have got the Reuters data from NIST. Now I want to generate training data
from that to create a model for detecting named entities. Can anyone tell
me how the models can be generated from it?
--
With Regards
Madhvi Gupta
*(Senior Software Engineer)*
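For the question above about turning the Reuters corpus into name-finder training data, a minimal sketch, assuming CoNLL-2003-style input (one "token NE-tag" pair per line, blank line between sentences) and targeting the `<START:type> ... <END>` one-sentence-per-line format that OpenNLP's name finder trainer consumes. The function name and tag handling are illustrative, not code from this thread.

```python
# Hedged sketch: convert CoNLL-2003-style NE annotations into the
# "<START:type> token ... <END>" one-sentence-per-line format used for
# OpenNLP TokenNameFinder training. Input modeling is an assumption.
# Note: adjacent entities of the same type get merged in this simplified sketch.

def conll_to_opennlp(lines):
    sentences, current, open_type = [], [], None
    for line in list(lines) + [""]:       # trailing blank flushes the last sentence
        if not line.strip():              # sentence boundary
            if open_type:
                current.append("<END>")
                open_type = None
            if current:
                sentences.append(" ".join(current))
                current = []
            continue
        parts = line.split()
        token, tag = parts[0], parts[-1]  # CoNLL: token ... NE-tag
        ent = None if tag == "O" else tag.split("-")[-1]  # B-PER -> PER
        if ent != open_type:              # entity opens, closes, or changes type
            if open_type:
                current.append("<END>")
            if ent:
                current.append(f"<START:{ent}>")
            open_type = ent
        current.append(token)
    return sentences

# conll_to_opennlp(["Pierre B-PER", "Vinken I-PER", "joined O", "Boeing B-ORG"])
# -> ["<START:PER> Pierre Vinken <END> joined <START:ORG> Boeing <END>"]
```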
Thank you so much Lewis.
On 8/19/16, 4:53 AM, "lewis john mcgibbney" wrote:
>Evening Madhvi,
>I will set this up and debug a clean. I'll report over on
>https://issues.apache.org/jira/browse/NUTCH-2269
>
>Thank you for reporting.
>Lewis
>
>
Hi,
I wanted to find out how to correct the issue below and will appreciate any
help.
I am trying to upgrade to Nutch 1.12. I am using Solr 5.3.1. The reasons I am
upgrading are:
1: https crawling
2: Boilerplate (Canola) extraction through Tika
The only problem so far I am having is an IOExcep
ind of related to what I need.
On 8/5/16, 2:18 PM, "Arora, Madhvi" wrote:
>Thank you very much!
>
>
>
>
>On 8/5/16, 2:13 PM, "Markus Jelsma" wrote:
>
>>I am not sure in which version it was added; you'd have to check CHANGES.txt,
Dear Joel
Please let us know off-list as to who these Telcos are.
Regards
Madhvi
On 15/08/2016 9:38 AM, Joel Gogwim wrote:
>
> It appears that some of the African Telcos acquired IP resources not
> from AfriNIC and assigned such to their African customers. This
> implies t
Thank you very much!
On 8/5/16, 2:13 PM, "Markus Jelsma" wrote:
>I am not sure in which version it was added; you'd have to check CHANGES.txt, but
>upgrading is usually a good idea and very simple.
>Markus
>
>
>
>-Original message-
>> From
Markus, so to crawl https and http urls successfully we just need to switch to a
newer version of Nutch, i.e. higher than Nutch 1.10?
On 8/5/16, 12:47 PM, "Markus Jelsma" wrote:
>Hello - see inline.
>Markus
>
>-Original message-
>> From:Arora, Madhvi
>
need to delete the old http urls from solr index, re-crawl and index the urls
that need to be switched to https.
I will be grateful for any guidance or suggestions.
Thanks,
Madhvi
[
https://issues.apache.org/jira/browse/SPARK-10828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907725#comment-14907725
]
madhvi gupta commented on SPARK-10828:
--
hey I have post this question on the s
madhvi gupta created SPARK-10828:
Summary: Can we use the Accumulo data RDD created from Java in
Spark, in SparkR? Is there any other way to proceed with it to create an RRDD from
a source RDD other than a text RDD? Or to use any other format of data
Thanks Josh. It really worked for me.
On Wednesday 17 June 2015 08:43 PM, Josh Elser wrote:
Madhvi,
Understood. A few more questions..
How are you passing these IDs to the batch scanner? Are you providing
individual Ranges for each ID (e.g. `new Range(new Key("row1", ""
passing that list to the BatchScanner.
"Are you trying to sum across all rows that you queried?"
Yes, we need to sum a particular column qualifier across the row IDs
passed to the BatchScanner. How can the summation be done across the rows, as
you said "you can put a second iterator"
Hi Josh,
Sorry, my company policy doesn't allow me to share the full source. What we
are trying to do is summing over a unique field stored in the column
qualifier for the IDs passed to the BatchScanner. Can you suggest how it can be
done in Accumulo?
Thanks
Madhvi
On Wednesday 17 June 2015 10:32 AM,
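The summation being asked about can be sketched client-side; in Accumulo itself this kind of aggregation is usually pushed server-side with a SummingCombiner iterator, so treat the following as an illustration of the arithmetic only (the entry modeling and names are assumptions, not thread code):

```python
# Illustrative only: sum the values of one column qualifier across the row IDs
# handed to a batch scanner. Entries are modeled as (row, qualifier, value)
# tuples; in Accumulo this is normally done server-side with a SummingCombiner.

def sum_qualifier(entries, qualifier, row_ids):
    wanted = set(row_ids)
    return sum(int(value) for row, cq, value in entries
               if row in wanted and cq == qualifier)

entries = [("row1", "count", "5"),
           ("row2", "count", "7"),
           ("row2", "size",  "99")]
# sum_qualifier(entries, "count", ["row1", "row2"]) -> 12
```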
Hi Josh,
I have changed the HashMap to a TreeMap, which sorts lexicographically, and I
have inserted random values in the column family and qualifier, with the
TreeMap's value as the entry's Value.
I used both Scanner and BatchScanner but am getting results only with the Scanner.
Thanks
Madhvi
On Tuesday 16 June 2015 08:42 PM, Josh
n for that?
Thanks
Madhvi
On Tuesday 16 June 2015 11:07 AM, Josh Elser wrote:
//matched the condition and put values to holder map.
n accumulo.
Thanks
Madhvi
On Monday 15 June 2015 09:21 PM, Josh Elser wrote:
It's hard to remotely debug an iterator, especially when we don't know
what it's doing. If you can post the code, that would help
tremendously. Instead of dumping values to a text file, you may fare
better
entries through batchscanner.
The getTopValue function is called while scanning with the Scanner. Applying the
same iterator with both Scanner and BatchScanner, the Scanner returns
entries, but no entries are returned while using the BatchScanner.
So can you please explain?
Thanks
Madhvi
On
?
Thanks
Madhvi
On Wednesday 27 May 2015 05:38 PM, Andrew Wells wrote:
to implement that iterator.
looks like you will only need to override replaceColumnFamily
and this looks to return the new ColumnFamily via the argument. So
manipulate the Text object provided.
On Wed, May 27, 2015 at 8:06 AM
Hi,
you have to specify the worker nodes of the spark cluster at the time of
configuring the cluster.
Thanks
Madhvi
On Thursday 30 April 2015 01:30 PM, xiaohe lan wrote:
Hi Madhvi,
If I only install spark on one node, and use spark-submit to run an
application, which are the Worker
Hi,
Follow the instructions to install on the following link:
http://mbonaci.github.io/mbo-spark/
You don't need to install Spark on every node. Just install it on one node,
or install it on a remote system and make a Spark cluster.
Thanks
Madhvi
On Thursday 30 April 2015 09:31 AM
")
.set("spark.driver.maxResultSize",
arguments.get("maxResultSize").get)
.registerKryoClasses(Array(classOf[org.apache.accumulo.core.data.Key]))
Thanks
Madhvi
On Tuesday 28 April 2015 11:32 PM, Josh Elser wrote:
Thanks for the report back, Vaibhav.
To clari
Thank you, Deepak. It worked.
Madhvi
On Tuesday 28 April 2015 01:39 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote:
val conf = new SparkConf()
  .setAppName(detail)
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.mb", arguments.get("buffersize").get)
  .set("spark.kryoserializer.buffer.max.mb", arguments.get("maxbuffersize").get)
  .set("spark.driver.maxResultSize", arguments.get("maxResultSize").get)
  .registerKryoClasses(Array(classOf[org.apache.accumulo.core.data.Key]))
d accumulo can be used
with spark
Thanks
Madhvi
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
at 12:19 PM, Akhil Das
<ak...@sigmoidanalytics.com> wrote:
Change your import from mapred to mapreduce. like :
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
Thanks
Best Regards
On Wed, Apr 22, 2015 at 2:42 PM, madhvi <madhvi.gu...
nts:
import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
I am not getting what the problem is here.
Thanks
Madhvi
On Tuesday 21 April 2015 12:12 PM, Akhil Das wrote:
Your spark master should be spark://swetha:7077 :)
Thanks
Best Regards
On Mon, Apr 20, 2015 at 2:44 PM, madhvi <madhvi.gu...@orkash.com> wrote:
PFA screenshot of my cluster UI
Thanks
On Monday 20 April 2015
Hi all,
Is there anything to integrate Spark with Accumulo, or to make Spark
process data stored in Accumulo?
Thanks
Madhvi Gupta
.
On Mon, Apr 20, 2015 at 3:05 PM, madhvi <madhvi.gu...@orkash.com> wrote:
On Monday 20 April 2015 02:52 PM, SURAJ SHETH wrote:
Hi Madhvi,
I think the memory requested by your job, i.e. 2.0 GB is higher
than what is available.
Please request for 256 MB e
On Monday 20 April 2015 02:52 PM, SURAJ SHETH wrote:
Hi Madhvi,
I think the memory requested by your job, i.e. 2.0 GB is higher than
what is available.
Please request for 256 MB explicitly while creating Spark Context and
try again.
Thanks and Regards,
Suraj Sheth
Tried the same but still
are you allocating for your job? Can
you share a screenshot of your cluster UI and the code snippet that
you are trying to run?
Thanks
Best Regards
On Mon, Apr 20, 2015 at 12:37 PM, madhvi <madhvi.gu...@orkash.com> wrote:
Hi,
I Did the same you told but now it is givin
Hi,
I did what you told me, but now it is giving the following error:
ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler:
All masters are unresponsive! Giving up.
On UI it is showing that master is working
Thanks
Madhvi
On Monday 20 April 2015 12:28 PM, Akhil Das wrote:
In
=2
export SPARK_EXECUTOR_MEMORY=1g
I am running the Spark standalone cluster. In the cluster UI it is showing
all workers with allocated resources, but it's still not working.
What other configurations need to be changed?
Thanks
Madhvi Gupta
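For a standalone cluster, the worker settings quoted above live in conf/spark-env.sh on each node. A hedged sketch (host name and sizes are placeholders; variable names follow Spark 1.x-era standalone mode):

```
# conf/spark-env.sh (standalone mode) - values are illustrative placeholders
export SPARK_MASTER_IP=master-host        # must match the spark:// URL workers use
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_MEMORY=2g
export SPARK_EXECUTOR_MEMORY=1g
```

If the UI shows workers registered but jobs fail with "All masters are unresponsive", the usual suspect is a master URL (spark://host:7077) that does not match what the master actually bound to.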
stored in Accumulo, in HDFS.
Lucene queries are working fine over that, but I want those
indexes to be searched via Accumulo, meaning the Lucene queries should run
via Accumulo.
Madhvi Gupta
-
To unsubscribe, e-mail: java-user
to do, somehow?
Madhvi
madhvi gupta created LUCENE-6387:
Summary: Storing lucene indexes in HDFS
Key: LUCENE-6387
URL: https://issues.apache.org/jira/browse/LUCENE-6387
Project: Lucene - Core
Issue Type: Test
Hi,
Thanks for your help. I got it installed.
Madhvi
On Tuesday 14 October 2014 12:50 PM, Pascal Oettli wrote:
The support for RStudio is located here: https://support.rstudio.com
Regards,
Pascal
On Tue, Oct 14, 2014 at 4:08 PM, madhvi wrote:
Hi,
How to install RStudio after downloading
Hi,
How do I install RStudio after downloading the Debian package?
Madhvi
On Tuesday 14 October 2014 12:09 PM, Pascal Oettli wrote:
Please reply to the list, not only to me.
RStudio is for Ubuntu 10.04+ (please note the "+").
About R 3.1.0, you probably will have to compile from
Hi,
Can anyone tell me the steps to install R 3.1.0 and RStudio on Ubuntu
12.04?
Thanks
Madhvi
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting
lude/llvm/IR/Use.h:45:7: note: previous definition
is
here
class PointerLikeTypeTraits {
I would appreciate it if any of you could help in resolving the error that I'm getting.
Thanks,
Madhvi
___
cfe-users mailing list
cfe-users@cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/cfe-users
.
Regards,
Madhvi
--
You received this message because you are subscribed to the Google Groups
"web.py" group.
To post to this group, send email to webpy@googlegroups.com.
To unsubscribe from this group, send email to
webpy+unsubscr...@googlegroups.com.
For more options, visit this grou
.
Regards,
Madhvi
On Thu, Mar 22, 2012 at 9:52 AM, Anand Chitipothu wrote:
>
>
> Hello,
>>I want to make a web app using web.py which takes input from user
>> that is username and password, and in turn communicates with google server.
>> I am not sure how should I p
. Please
tell me if I am on the right path.
Regards,
Madhvi
On Wed, Mar 21, 2012 at 3:02 PM, Madhvi gupta wrote:
> Thanks a lot! That was a TextEdit problem; it was introducing some formatting
> of its own.
>
> Regards,
> Madhvi
>
>
> On Wed, Mar 21, 2012 at 2:53 PM, Anand Chitipo
Thanks a lot! That was a TextEdit problem; it was introducing some formatting
of its own.
Regards,
Madhvi
On Wed, Mar 21, 2012 at 2:53 PM, Anand Chitipothu wrote:
> On 21 March 2012 at 2:04 PM, Madhvi gupta
> wrote:
> > I checked it is in the starting of the file only. And I ha
I checked it is in the starting of the file only. And I have not added
anything to formtest.html.
On Wed, Mar 21, 2012 at 1:33 PM, Anand Chitipothu wrote:
> On 21 March 2012 at 12:49 PM, Madhvi gupta
> wrote:
> > Sorry for bothering you but I am not getting what is wrong.
I read that it goes in the templates directory, but where do I create this directory?
On Wed, Mar 21, 2012 at 7:21 AM, Anand Chitipothu wrote:
> On 21 March 2012 at 12:15 AM, Madhvi gupta
> wrote:
> > I am trying to use web.py to design an web app. As a first step I
> > tried to us
,
Madhvi
Stunning Trisha Photoshoot
CLICK HERE TO VIEW ALL IMAGE
http://masti2mail.com/index.php?option=com_content&view=article&id=974:stunning-trisha-photoshoot&catid=931:tollywood&Itemid=108
[1]
[2]
Links:
--
[1]
http://masti2mail.com/index.php?option=com_content&view=article&id=974:stunning-t
Katy Perry flaunts her California Gurls cleaavage
CLICK HERE TO VIEW ALL IMAGE
http://masti2mail.com/index.php?option=com_content&view=article&id=3783:katy-perry-flaunts-her-california-gurls-cleaavage&catid=908:hollywood&Itemid=102
[1]
[2]
Links:
--
[1]
http://masti2mail.com/index.php?opt
Actress Kausha Hot Sexxy Latest Unseen Photos
CLICK HERE TO VIEW ALL IMAGE
http://masti2mail.com/index.php?option=com_content&view=article&id=3548:actress-kausha-hot-sexxy-latest-unseen-photos&catid=931:tollywood&Itemid=108
[1]
Links:
--
[1] http://groups.yahoo.com/group/masti2mail/join
Lingerie babe Irina Sheik Expose in Bikini
CLICK HERE TO VIEW ALL IMAGE
http://masti2mail.com/index.php?option=com_content&view=article&id=1660:lingerie-babe-irina-sheik&catid=908:hollywood&Itemid=102
[1]
Links:
--
[1] http://groups.yahoo.com/group/masti2mail/join
[1]
Latest Stills of illeana
CLICK HERE TO VIEW ALL IMAGE
http://patelmantra.com/index.php?option=com_content&view=article&id=1770:latest-stills-of-illeana&catid=37:tollywood&Itemid=73
[2]
Links:
--
[1] http://groups.yahoo.com/group/masti2mail/join
[2] http://groups.yahoo.com/
, ,
Max-Monthly-Session := 36000
Service-Type = Framed-User,
Framed-Protocol = PPP
Any idea as to what could have caused this?
How can I log when counters roll over?
How can I verify the counter usage - how many seconds are used and left for each
user?
Thanx in advance for your pr
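For reference, a users-file entry of the shape being discussed might look like this (the username is a placeholder and authentication attributes are omitted; Max-Monthly-Session is a counter check item typically provided by the rlm_counter/rlm_sqlcounter module, so treat this as a sketch rather than a verified config):

```
alice   Max-Monthly-Session := 36000
        Service-Type = Framed-User,
        Framed-Protocol = PPP
```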
- Original Message -
From: "Madhvi Gokool" <[EMAIL PROTECTED]>
To: "FreeRadius users mailing list"
Sent: Tuesday, October 25, 2005 9:30 AM
Subject: Billing and provisioning
Hello
Here is the scenario we want to implement:
1. User pays for 10hrs of internet acce
Hello
Here is the scenario we want to implement:
1. User pays for 10 hrs of internet access. We set Max-Monthly-Session
to 36000.
We want to verify the number of seconds used and left for each user on a
daily basis. The results should be mailed to the ISP admin and a mail sent
to each
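The daily verification in the scenario above reduces to simple arithmetic over per-user accounting totals; a sketch (in practice the used seconds would be summed from the RADIUS accounting data, and the report handed to a mailer — those pieces are assumed, not shown):

```python
# Sketch of the daily check: each user buys 10 hours (Max-Monthly-Session =
# 36000 seconds); given the seconds consumed so far this month, report what is
# used and what is left per user, for mailing to the ISP admin and the user.

MAX_MONTHLY_SESSION = 36000  # 10 hours in seconds

def usage_report(used_by_user):
    report = {}
    for user, used in used_by_user.items():
        left = max(MAX_MONTHLY_SESSION - used, 0)
        report[user] = {"used": used, "left": left, "exhausted": left == 0}
    return report

# usage_report({"alice": 30000, "bob": 40000})["alice"]["left"] -> 6000
```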
.
Regards
Madhvi
-
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html
day, October 12, 2005 6:33 PM
Subject: Re: radwho
"Madhvi Gokool" <[EMAIL PROTECTED]> wrote:
When first testing the FreeRADIUS server, radwho still showed users as
connected when in fact they had disconnected.
If the server doesn't receive an accounting stop message, it d
.?
Kind Regards
Madhvi
-
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html
xxx seconds of connection time left
Are there any scripts that will interact with freeradius and send these
users an email or sms?
3. Does dialup admin work with a plain-text users file?
Thanx in advance for your help.
Regds
Madhvi
-
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html
- Original Message -
From: "Madhvi Gokool" <[EMAIL PROTECTED]>
To:
Sent: Thursday, August 25, 2005 10:37 AM
Subject: FreeRadius 1.0.4
Hello
We have planned to replace our cistron radius servers with Freeradius.
We have the following setup :-
1. Users dial in to a
be a 3Com TCM or a Cisco access server
On the access server, we can implement access-lists to allow/deny access
based on the assigned Ip addresses, but we'd prefer using RADIUS
attributes to do so.
Thank you in advance for your help.
M
._terrasky_tslibrary__qr.0.2
1294527 -rw---1 amanda disk 393890584 Feb 22 11:25
sa03._terrasky_tslibrary__qr.0.3
How can I resolve this problem as I now have an incomplete weekly backup.
Regds
Madhvi
GB of data
was just dumped to disk before sa01:/terrasky.
How do I bypass this problem, as it seems that the Amanda holding disk is also
being backed up?
regds
madhvi
Hi
I am getting the above error message during my monthly backup. My holding disk
is 55 GB - big enough to accommodate the dump.
A level 1 backup is done instead of level 0.
Regds
M
Hi
I am trying to replace a Windows server with a FreeBSD one.
Does anyone know the equivalent UNIX package for a Windows-based RFC 868 Time
Protocol server?
Thanx in advance for your response
M
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.
rmine what could be wrong with this config. I
changed the second sa03 entry to sa03.terra.terrasky.mu and I get no error
messages.
Thanx
madhvi
mental backups being done. We want to minimise user
intervention during the backup.
regards
madhvi
hello
When running the command below on the client server I get the following error;
details are:
terrabkup# amrecover -C fullbkup -s sa01.terra.terrasky.mu -t
sa01.terra.terrasky.mu -d /dev/nst0
AMRECOVER Version 2.4.3. Contacting server on sa01.terra.terrasky.mu ...
amrecover: Unexpected end of f
wrong.
Can anyone help me with this config?
Thanx in advance
Madhvi
auth BSD
kencrypt NO
holdingdisk YES
record YES
index YES
skip-incr NO
skip-full NO
The skip-incr is set to NO -- should this be changed to YES?
Regds
Madhvi
- Original Message -
From: "Paul Bijnens" <[EMAIL PROTECTED]>
T
Hello
How can I enable level 0 backups to the holding disk (I have enough hard
disk space)?
Regards
Madhvi