Unsubscribe

2016-06-16 Thread Sanjeev Sagar
Unsubscribe 

Sent from my iPhone




unsubscribe

2016-06-15 Thread Sanjeev Sagar
unsubscribe


Q on downloading spark for standalone cluster

2014-08-28 Thread Sanjeev Sagar

Hello there,

I have a basic question about the download: which option do I need to download for a standalone cluster?


I have a private cluster of three machines on CentOS. When I click on Download, it shows me the following:



   Download Spark

The latest release is Spark 1.0.2, released August 5, 2014 (release 
notes) http://spark.apache.org/releases/spark-release-1-0-2.html (git 
tag) 
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=8fb6f00e195fb258f3f70f04756e07c259a2351f


Pre-built packages:

 * For Hadoop 1 (HDP1, CDH3): find an Apache mirror
   
http://www.apache.org/dyn/closer.cgi/spark/spark-1.0.2/spark-1.0.2-bin-hadoop1.tgz
   or direct file download
   http://d3kbcqa49mib13.cloudfront.net/spark-1.0.2-bin-hadoop1.tgz
 * For CDH4: find an Apache mirror
   
http://www.apache.org/dyn/closer.cgi/spark/spark-1.0.2/spark-1.0.2-bin-cdh4.tgz
   or direct file download
   http://d3kbcqa49mib13.cloudfront.net/spark-1.0.2-bin-cdh4.tgz
 * For Hadoop 2 (HDP2, CDH5): find an Apache mirror
   
http://www.apache.org/dyn/closer.cgi/spark/spark-1.0.2/spark-1.0.2-bin-hadoop2.tgz
   or direct file download
   http://d3kbcqa49mib13.cloudfront.net/spark-1.0.2-bin-hadoop2.tgz

Pre-built packages, third-party (NOTE: may include non ASF-compatible 
licenses):


 * For MapRv3: direct file download (external)
   http://package.mapr.com/tools/apache-spark/1.0.2/spark-1.0.2-bin-mapr3.tgz
 * For MapRv4: direct file download (external)
   http://package.mapr.com/tools/apache-spark/1.0.2/spark-1.0.2-bin-mapr4.tgz


From the above it looks like I have to download Hadoop or CDH4 first in order to use Spark? I have a standalone cluster and my data size is in the hundreds of gigabytes, close to a terabyte.


I don't understand which one I need to download from the above list.

Could someone tell me which package to download for a standalone cluster with a big data footprint?


Or is Hadoop needed or mandatory for using Spark? That's not my understanding. My understanding is that you can use Spark with Hadoop (via YARN 2) if you like, but you can also use Spark standalone without Hadoop.
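
A minimal sketch of what I mean, using the pre-built Hadoop 2 package listed above (any of the pre-built packages works for standalone mode, since a Hadoop install is not required; master-host is a placeholder):

wget http://d3kbcqa49mib13.cloudfront.net/spark-1.0.2-bin-hadoop2.tgz
tar xzf spark-1.0.2-bin-hadoop2.tgz && cd spark-1.0.2-bin-hadoop2
./sbin/start-master.sh     # on the master node
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://master-host:7077   # on each worker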


Please assist. I'm confused !

-Sanjeev





Flume-NG agent issue on daily rotation files

2013-06-20 Thread sanjeev sagar
Hello All, I'm trying to load the app servers' request logs into HDFS.



I get all the consolidated logs in one file per day. I'm running the Flume agent with the following config:



##

agent.sources = apache

agent.sources.apache.type = exec

agent.sources.apache.command = cat /appserverlogs/requestfile/request.log.2013_06_07

agent.sources.apache.batchSize = 1

agent.sources.apache.channels = memoryChannel
agent.sources.apache.interceptors = itime ihost itype
# http://flume.apache.org/FlumeUserGuide.html#timestamp-interceptor

agent.sources.apache.interceptors.itime.type = timestamp
# http://flume.apache.org/FlumeUserGuide.html#host-interceptor

agent.sources.apache.interceptors.ihost.type = host
agent.sources.apache.interceptors.ihost.useIP = false
agent.sources.apache.interceptors.ihost.hostHeader = host
# http://flume.apache.org/FlumeUserGuide.html#static-interceptor

agent.sources.apache.interceptors.itype.type = static
agent.sources.apache.interceptors.itype.key = log_type
agent.sources.apache.interceptors.itype.value = request_logs



# http://flume.apache.org/FlumeUserGuide.html#memory-channel

agent.channels = memoryChannel

agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 1000
agent.channels.memoryChannel.transactionCapacity = 100
agent.channels.memoryChannel.keep-alive = 3
agent.channels.memoryChannel.byteCapacityBufferPercentage = 20



## Send to Flume Collector on 1.2.3.4 (Hadoop Slave Node)
# http://flume.apache.org/FlumeUserGuide.html#avro-sink

agent.sinks = AvroSink

agent.sinks.AvroSink.type = avro

agent.sinks.AvroSink.channel = memoryChannel
agent.sinks.AvroSink.hostname = h1.vgs.mypoints.com
agent.sinks.AvroSink.port = 4545



Here you can see that I'm using the cat command with a specific file.

As I said, I get one file a day with the date in its name.



Q: How can I tell the config to keep rotating the file name that gets cat-ed, so it picks up each day's new file? Currently, once the file is loaded, I have to stop the agent, change the config, and run the agent again.
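
A minimal sketch of one possible alternative, assuming the spooling directory source is available in this Flume-NG build: move each completed daily file into a spool directory and let Flume pick it up, instead of cat-ing a fixed path.

agent.sources.apache.type = spooldir
agent.sources.apache.spoolDir = /appserverlogs/spool
agent.sources.apache.fileHeader = true
agent.sources.apache.channels = memoryChannel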





On the Hadoop slave I have the collector running with the following config:



collector.sources = AvroIn

collector.sources.AvroIn.type = avro

collector.sources.AvroIn.bind = 0.0.0.0

collector.sources.AvroIn.port = 4545

collector.sources.AvroIn.channels = mc1 mc2



## Channels 

## Source writes to 2 channels, one for each sink (Fan Out)
collector.channels = mc1 mc2



collector.channels.mc1.type = memory

collector.channels.mc1.capacity = 1000

collector.channels.mc1.transactionCapacity = 100
collector.channels.mc1.keep-alive = 3
collector.channels.mc1.byteCapacityBufferPercentage = 20



collector.channels.mc2.type = memory

collector.channels.mc2.capacity = 1000

collector.channels.mc2.transactionCapacity = 100
collector.channels.mc2.keep-alive = 3
collector.channels.mc2.byteCapacityBufferPercentage = 20



## Sinks ###

collector.sinks = LocalOut HadoopOut



## Write copy to Local Filesystem (Debugging) #
http://flume.apache.org/FlumeUserGuide.html#file-roll-sink

collector.sinks.LocalOut.type = file_roll
collector.sinks.LocalOut.sink.directory = /var/log/flume
collector.sinks.LocalOut.sink.rollInterval = 0
collector.sinks.LocalOut.channel = mc1



## Write to HDFS

# http://flume.apache.org/FlumeUserGuide.html#hdfs-sink

collector.sinks.HadoopOut.type = hdfs

collector.sinks.HadoopOut.channel = mc2

collector.sinks.HadoopOut.hdfs.path = /user/flume/events/%{log_type}/%{host}/%y-%m-%d

collector.sinks.HadoopOut.hdfs.fileType = DataStream
collector.sinks.HadoopOut.hdfs.writeFormat = Text
collector.sinks.HadoopOut.hdfs.rollSize = 0
collector.sinks.HadoopOut.hdfs.rollCount = 0
collector.sinks.HadoopOut.hdfs.rollInterval = 0



Q: The collector writes the file into HDFS with a .tmp extension. Until I kill the collector, it doesn't rename the file to its final name. I've played with



collector.sinks.HadoopOut.hdfs.rollSize = 0
collector.sinks.HadoopOut.hdfs.rollCount = 0
collector.sinks.HadoopOut.hdfs.rollInterval = 0



but then it creates many files. I'm looking to create one file per day of request logs.
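
A minimal sketch of settings that might give one file per day, assuming a single daily roll is acceptable (hdfs.idleTimeout support in this Flume-NG build is an assumption):

collector.sinks.HadoopOut.hdfs.rollSize = 0
collector.sinks.HadoopOut.hdfs.rollCount = 0
collector.sinks.HadoopOut.hdfs.rollInterval = 86400
collector.sinks.HadoopOut.hdfs.idleTimeout = 300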



I really appreciate any help on this issue.



-Sanjeev







-- 
Sanjeev Sagar

***Separate yourself from everything that separates you from others
! - Nirankari
Baba Hardev Singh ji *

**


Hive External Table issue

2013-06-20 Thread sanjeev sagar
Hello Everyone, I'm running into the following Hive external table issue.



hive> CREATE EXTERNAL TABLE access(

host STRING,

identity STRING,

user STRING,

time STRING,

request STRING,

status STRING,

size STRING,

referer STRING,

agent STRING)

ROW FORMAT SERDE

'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'

WITH SERDEPROPERTIES (

   "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",

   "output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s"

)

STORED AS TEXTFILE

LOCATION

'/user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033';

FAILED: Error in metadata:

MetaException(message:hdfs://h1.vgs.mypoints.com:8020/user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033 is not a directory or unable to create one)

FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask





In HDFS: file exists



hadoop fs -ls /user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

Found 1 items

-rw-r--r--   3 hdfs supergroup 2242037226 2013-06-13 11:14 /user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033



I've downloaded the serde2 jar file too, installed it as /usr/lib/hive/lib/hive-json-serde-0.2.jar, and bounced all the Hadoop services after that.



I even added the jar file manually in Hive and ran the above SQL, but it still fails.

hive> add jar /usr/lib/hive/lib/hive-json-serde-0.2.jar;

Added /usr/lib/hive/lib/hive-json-serde-0.2.jar to class path
Added resource: /usr/lib/hive/lib/hive-json-serde-0.2.jar



Any help would be highly appreciated.



-Sanjeev









-- 
Sanjeev Sagar

***Separate yourself from everything that separates you from others
! - Nirankari
Baba Hardev Singh ji *

**


Re: Hive External Table issue

2013-06-20 Thread sanjeev sagar
I did mention in my mail that the HDFS file exists in that location. See below:

In HDFS: file exists



hadoop fs -ls /user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

Found 1 items

-rw-r--r--   3 hdfs supergroup 2242037226 2013-06-13 11:14 /user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

So the directory and the file both exist.


On Thu, Jun 20, 2013 at 10:24 AM, Nitin Pawar nitinpawar...@gmail.comwrote:

 MetaException(message:hdfs://
 h1.vgs.mypoints.com:8020/user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

 is not a directory or unable to create one)


 it clearly says its not a directory. Point to the dictory and it will work


 On Thu, Jun 20, 2013 at 10:52 PM, sanjeev sagar 
 sanjeev.sa...@gmail.comwrote:

 Hello Everyone, I'm running into the following Hive external table issue.



 hive CREATE EXTERNAL TABLE access(

 host STRING,

 identity STRING,

 user STRING,

 time STRING,

 request STRING,

 status STRING,

 size STRING,

 referer STRING,

 agent STRING)

 ROW FORMAT SERDE

 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'

 WITH SERDEPROPERTIES (

input.regex = ([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\])

 ([^ \]*|\[^\]*\) (-|[0-9]*) (-|[0-9]*)(?: ([^ \]*|\[^\]*\) ([^
 \]*|\[^\]*\))?,

 output.format.string = %1$s %2$s %3$s %4$s %5$s %6$s

 %7$s %8$s %9$s

 )

 STORED AS TEXTFILE

 LOCATION

 '/user/flume/events/request_logs/
 ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033';

 FAILED: Error in metadata:

 MetaException(message:hdfs://
 h1.vgs.mypoints.com:8020/user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

 is not a directory or unable to create one)

 FAILED: Execution Error, return code 1 from
 org.apache.hadoop.hive.ql.exec.DDLTask





 In HDFS: file exists



 hadoop fs -ls

 /user/flume/events/request_logs/
 ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

 Found 1 items

 -rw-r--r--   3 hdfs supergroup 2242037226 2013-06-13 11:14

 /user/flume/events/request_logs/
 ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033



 I've download the serde2 jar file too and install it in
 /usr/lib/hive/lib/hive-json-serde-0.2.jar and I've bounced all the hadoop
 services after that.



 I even added the jar file manually in hive and run the above sql but
 still failing.

 ive add jar /usr/lib/hive/lib/hive-json-serde-0.2.jar

   ;

 Added /usr/lib/hive/lib/hive-json-serde-0.2.jar to class path Added
 resource: /usr/lib/hive/lib/hive-json-serde-0.2.jar



 Any help would be highly appreciable.



 -Sanjeev









 --
 Sanjeev Sagar

 ***Separate yourself from everything that separates you from others !- 
 Nirankari
 Baba Hardev Singh ji *

 **




 --
 Nitin Pawar




-- 
Sanjeev Sagar

***Separate yourself from everything that separates you from others
! - Nirankari
Baba Hardev Singh ji *

**


Re: Hive External Table issue

2013-06-20 Thread sanjeev sagar
Two issues:

1. I've created external tables in Hive based on a file location before, and it worked without any issue. It doesn't have to be a directory.

2. If there is more than one file in the directory and you create an external table based on the directory, how does the table know which file to read the data from?

I tried to create the table based on the directory; it created the table, but all the rows were NULL.
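
A minimal sketch of the directory-based DDL being discussed (same columns and SerDe as above; the trailing slash points at the dated directory rather than at the file):

hive> CREATE EXTERNAL TABLE access ( ... )
      ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
      WITH SERDEPROPERTIES ( ... )
      STORED AS TEXTFILE
      LOCATION '/user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/';

Hive reads every file under the LOCATION directory, so all files in it are scanned for the table; all-NULL rows from RegexSerDe usually just mean input.regex did not match the log lines (for example because the quoting around the regex was lost).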

-Sanjeev


On Thu, Jun 20, 2013 at 10:30 AM, Nitin Pawar nitinpawar...@gmail.comwrote:

 in hive when you create table and use the location to refer hdfs path,
 that path is supposed to be a directory.
 If the directory is not existing it will try to create it and if its a
 file it will throw an error as its not a directory

 thats the error you are getting that location you referred is a file.
 Change it to the directory and see if that works for you


 On Thu, Jun 20, 2013 at 10:57 PM, sanjeev sagar 
 sanjeev.sa...@gmail.comwrote:

 I did mention in my mail the hdfs file exists in that location. See below

 In HDFS: file exists



 hadoop fs -ls

 /user/flume/events/request_logs/
 ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

 Found 1 items

 -rw-r--r--   3 hdfs supergroup 2242037226 2013-06-13 11:14

 /user/flume/events/request_logs/
 ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

 so the directory and file both exists.


 On Thu, Jun 20, 2013 at 10:24 AM, Nitin Pawar nitinpawar...@gmail.comwrote:

 MetaException(message:hdfs://
 h1.vgs.mypoints.com:8020/user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

 is not a directory or unable to create one)


 it clearly says its not a directory. Point to the dictory and it will
 work


 On Thu, Jun 20, 2013 at 10:52 PM, sanjeev sagar sanjeev.sa...@gmail.com
  wrote:

 Hello Everyone, I'm running into the following Hive external table
 issue.



 hive CREATE EXTERNAL TABLE access(

 host STRING,

 identity STRING,

 user STRING,

 time STRING,

 request STRING,

 status STRING,

 size STRING,

 referer STRING,

 agent STRING)

 ROW FORMAT SERDE

 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'

 WITH SERDEPROPERTIES (

input.regex = ([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\])

 ([^ \]*|\[^\]*\) (-|[0-9]*) (-|[0-9]*)(?: ([^ \]*|\[^\]*\) ([^
 \]*|\[^\]*\))?,

 output.format.string = %1$s %2$s %3$s %4$s %5$s %6$s

 %7$s %8$s %9$s

 )

 STORED AS TEXTFILE

 LOCATION

 '/user/flume/events/request_logs/
 ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033';

 FAILED: Error in metadata:

 MetaException(message:hdfs://
 h1.vgs.mypoints.com:8020/user/flume/events/request_logs/ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

 is not a directory or unable to create one)

 FAILED: Execution Error, return code 1 from
 org.apache.hadoop.hive.ql.exec.DDLTask





 In HDFS: file exists



 hadoop fs -ls

 /user/flume/events/request_logs/
 ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033

 Found 1 items

 -rw-r--r--   3 hdfs supergroup 2242037226 2013-06-13 11:14

 /user/flume/events/request_logs/
 ar1.vgs.mypoints.com/13-06-13/FlumeData.1371144648033



 I've download the serde2 jar file too and install it in
 /usr/lib/hive/lib/hive-json-serde-0.2.jar and I've bounced all the hadoop
 services after that.



 I even added the jar file manually in hive and run the above sql but
 still failing.

 ive add jar /usr/lib/hive/lib/hive-json-serde-0.2.jar

   ;

 Added /usr/lib/hive/lib/hive-json-serde-0.2.jar to class path Added
 resource: /usr/lib/hive/lib/hive-json-serde-0.2.jar



 Any help would be highly appreciable.



 -Sanjeev









 --
 Sanjeev Sagar

 ***Separate yourself from everything that separates you from others !- 
 Nirankari
 Baba Hardev Singh ji *

 **




 --
 Nitin Pawar




 --
 Sanjeev Sagar

 ***Separate yourself from everything that separates you from others !- 
 Nirankari
 Baba Hardev Singh ji *

 **




 --
 Nitin Pawar




-- 
Sanjeev Sagar

***Separate yourself from everything that separates you from others
! - Nirankari
Baba Hardev Singh ji *

**


Fwd: CDH4: sqoop issue in using --direct with Netezza driver

2012-12-19 Thread sanjeev sagar
-- Forwarded message --
From: sanjeev sagar sanjeev.sa...@gmail.com
Date: Wed, Dec 19, 2012 at 5:15 PM
Subject: CDH4: sqoop issue in using --direct with Netezza driver
To: user@sqoop.apache.org
Cc: sanjeev sagar sanjeev.sa...@gmail.com


Hello, I'm running into an issue using Sqoop with the Netezza driver and the --direct option. Following are the details:



version: Sqoop 1.4.1-cdh4.1.2

Hadoop: Hadoop 2.0.0-cdh4.1.2



command: sqoop --options-file /etc/sqoop/conf/sqoop_nz_remote_import.cnf

--direct --table SILO --num-mappers 1 --escaped-by '\\'

--fields-terminated-by ',' --verbose -- --nz-logdir /tmp/



Error:



12/12/19 15:42:51 ERROR tool.ImportTool: Encountered IOException running
import job: java.io.IOException: Unable to validate object type for given
table. Please ensure that the given user name and table name is in the the
correct case. If you are not sure, please use upper case to specify both
these values.

 at

com.cloudera.sqoop.netezza.DirectNetezzaManager.validateTargetObjectType(DirectNetezzaManager.java:131)

 at

com.cloudera.sqoop.netezza.DirectNetezzaManager.importTable(DirectNetezzaManager.java:196)

 at

org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:403)

 at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)

 at org.apache.sqoop.Sqoop.run(Sqoop.java:145)

 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)

 at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)

 at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)

 at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)

 at org.apache.sqoop.Sqoop.main(Sqoop.java:238)

 at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)



I've checked the options many times and tried using the db_name/table_name/user_name in all caps, but it still fails.



The user guide does mention this error, but offers no real solution; it just says to use the username and table name in the same case, or in upper case, which I did, but it didn't help.



I don't know how else to troubleshoot this to get more details.



Does anyone have any idea how to correct this? If I don't use --direct and --num-mappers, the import runs fine but very slowly; --direct would help move the data faster.



Appreciate any assistance on the issue.



Thanks,

-- 
Sanjeev Sagar

***Separate yourself from everything that separates you from others
! - Nirankari
Baba Hardev Singh ji *

**



-- 
Sanjeev Sagar

***Separate yourself from everything that separates you from others
! - Nirankari
Baba Hardev Singh ji *

**


Hue not starting jobsubd and besswax server on RHEL 5 64 bit

2011-07-11 Thread sanjeev sagar
Hello All, I'm using CDH 3 and Hue 1.2 on RHEL5 64 bit with mysql db based
metastore.



I'm running into an issue where Hue is not showing any app in the browser.

After checking the log files, it looks like it's not even starting the jobsubd and beeswax servers at all.



 From the log file:



[11/Jul/2011 15:33:29 +] supervisor   INFO Command

/usr/share/hue/build/env/bin/hue runcpserver exited normally.

[11/Jul/2011 15:36:15 +] supervisor   INFO Starting process

/usr/share/hue/build/env/bin/hue runcpserver

[11/Jul/2011 15:36:15 +] supervisor   INFO Starting process

/usr/share/hue/build/env/bin/hue kt_renewer

[11/Jul/2011 15:36:15 +] supervisor   INFO Started proceses (pid

17083) /usr/share/hue/build/env/bin/hue runcpserver

[11/Jul/2011 15:36:15 +] supervisor   INFO Started proceses (pid

17085) /usr/share/hue/build/env/bin/hue kt_renewer

[11/Jul/2011 15:36:16 +] supervisor   INFO Command

/usr/share/hue/build/env/bin/hue kt_renewer exited normally.



So I checked the subcommands to see if I could start them manually, but strangely the following subcommands were not there:



 beeswax_install_examples

  beeswax_server

  jobsub_setup

  jobsubd



following is the complete list;



ssa...@h1.iad hue$ /usr/share/hue/build/env/bin/hue help
Traceback (most recent call last):

  File /usr/lib64/python2.4/logging/config.py, line 157, in fileConfig

log.addHandler(handlers[hand])

KeyError: 'logfile'

INFO:root:Welcome to Hue 1.2.0

DEBUG:desktop.appmanager:Loaded Desktop Libraries: hadoop
DEBUG:desktop.appmanager:Old-style static directory:

/usr/share/hue/desktop/core/static

DEBUG:desktop.appmanager:Loaded Desktop Applications:

DEBUG:root:Installed Django modules: DesktopModule(hadoop:

hadoop),DesktopModule(Hue: desktop)

DEBUG:desktop.lib.conf:Loading configuration from: hue-beeswax.ini
DEBUG:desktop.lib.conf:Loading configuration from: hue.ini
DEBUG:desktop.lib.conf:Loading configuration from: hue-beeswax.ini
DEBUG:desktop.lib.conf:Loading configuration from: hue.ini

usage: hue subcommand [options] [args]



options:

  -v VERBOSITY, --verbosity=VERBOSITY

Verbosity level; 0=minimal output, 1=normal output,

2=all output

  --settings=SETTINGS   The Python path to a settings module, e.g.

myproject.settings.main. If this isn't provided,
the

DJANGO_SETTINGS_MODULE environment variable will be

used.

  --pythonpath=PYTHONPATH

A directory to add to the Python path, e.g.

/home/djangoprojects/myproject.

  --traceback   Print traceback on exception

  --version show program's version number and exit

  -h, --helpshow this help message and exit



Type 'hue help subcommand' for help on a specific subcommand.



Available subcommands:

  changepassword

  clean_pyc

  cleanup

  compile_pyc

  compilemessages

  config_dump

  config_help

  config_upgrade

  convert_to_south

  create_app

  create_command

  create_desktop_app

  create_jobs

  create_test_fs

  createcachetable

  createsuperuser

  datamigration

  dbshell

  depender_check

  depender_rewrite

  describe_form

  diffsettings

  dumpdata

  dumpscript

  export_emails

  flush

  generate_secret_key

  graph_models

  graphmigrations

  inspectdb

  kt_renewer

  loaddata

  mail_debug

  makemessages

  migrate

  passwd

  print_user_for_session

  reset

  reset_db

  runcpserver

  runfcgi

  runjob

  runjobs

  runprofileserver

  runpylint

  runscript

  runserver

  runserver_plus

  schemamigration

  set_fake_emails

  set_fake_passwords

  shell

  shell_plus

  show_templatetags

  show_urls

  sql

  sqlall

  sqlclear

  sqlcustom

  sqldiff

  sqlflush

  sqlindexes

  sqlinitialdata

  sqlreset

  sqlsequencereset

  startapp

  startmigration

  sync_media_s3

  syncdata

  syncdb

  test

  test_windmill

  testserver

  unreferenced_files

  validate



On my 32-bit machine, the jobsubd and beeswax_server subcommands are there.

Am I missing anything on 64 bit?
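
A minimal check worth doing first (the exact package names are an assumption about how the apps are split in this CDH3 build):

rpm -qa | grep -i hue    # compare the installed hue packages on the 32-bit and 64-bit hosts

If the packages that provide the beeswax and jobsub apps are missing on the 64-bit host, their subcommands will not appear in the list above.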



In the log directory there was only one runcpserver.out file; see below for the message:



starting server with options {'ssl_certificate': None, 'workdir': None,

'server_name': 'localhost', 'host': 'h1', 'daemonize': False, 'threads':

10, 'pidfile': None, 'server_group': 'hue', 'ssl_private_key': None,

'port': 8088, 'server_user': 'hue'}



Any help is highly appreciated. I have the same rpms on 32 and 64 bit; in addition, I've installed the 64-bit cyrus and hue-common rpms too.



Regards,

-- 
Sanjeev Sagar


HUE Beeswax failed to execute status bar view

2011-06-03 Thread sanjeev sagar
Hello

I'm trying to connect hue/beeswax to my cloudera hadoop cluster master node
with Hive running. Hive and Hue both are using the mysql database repository
on master node itself.

I'm running into the following error in my browser when I click on Beeswax, File Browser, User Admin, etc. I've rechecked all the dependencies and all the hue/cyrus rpms. My OS is RHEL 5.4 32-bit.

An error occurred: No module for code code object execute_query at
0xa9c07e0, file /usr/share/hue/apps/beeswax/src/beeswax/views.py, line

250 (frame frame object at 0xa7e724c). Perhaps you have an old .pyc file
hanging around?





[03/Jun/2011 11:27:26 +] viewsERRORFailed to execute

status_bar view function dock_jobs at 0x92cd6bc

Traceback (most recent call last):

  File /var/share/hue/desktop/core/src/desktop/views.py, line 152, in

status_bar

r = view(request)

  File /usr/share/hue/apps/jobbrowser/src/jobbrowser/views.py, line

93, in dock_jobs

}, force_template=True)

  File /var/share/hue/desktop/core/src/desktop/lib/django_util.py,

line 202, in render

template_lib=template_lib,

  File /var/share/hue/desktop/core/src/desktop/lib/django_util.py,

line 129, in _render_to_response

return django_mako.render_to_response(template, *args, **kwargs)

  File /var/share/hue/desktop/core/src/desktop/lib/django_mako.py,

line 115, in render_to_response

return HttpResponse(render_to_string(template_name,

data_dictionary), **kwargs)

  File /var/share/hue/desktop/core/src/desktop/lib/django_mako.py,

line 103, in render_to_string_normal

template = lookup.get_template(template_name)

  File /var/share/hue/desktop/core/src/desktop/lib/django_mako.py,

line 74, in get_template

app = apputil.get_current_app()

  File /var/share/hue/desktop/core/src/desktop/lib/apputil.py, line

43, in get_current_app

raise Exception((No module for code %s (frame %s). Perhaps you have

an old  +

Exception: No module for code code object dock_jobs at 0x9069ea0, file

/usr/share/hue/apps/jobbrowser/src/jobbrowser/views.py, li

ne 88 (frame frame object at 0x957d36c). Perhaps you have an old .pyc

file hanging around?

[03/Jun/2011 11:27:26 +] middleware   ERRORMiddleware caught an

exception

Traceback (most recent call last):

  File

/var/share/hue/build/env/lib/python2.4/site-packages/Django-1.2.3-py2.4.egg/django/core/handlers/base.py,


line 100, in get_

response

response = callback(request, *callback_args, **callback_kwargs)
  File /usr/share/hue/apps/beeswax/src/beeswax/views.py, line 59, in index


Any help is highly appreciated.
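
Since the error itself suggests a stale .pyc file, a minimal thing to try (the cleanup step is an assumption, not something already tried here; the paths are the install locations shown in the traceback):

find /usr/share/hue /var/share/hue -name '*.pyc' -delete
# then restart the Hue services so the .py files are recompiled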

-- 
Sanjeev Sagar


Re: which mysql 64 bit binary for PowerEdge 1950 ?

2009-11-09 Thread Sanjeev Sagar
Is there anyone who is using 64-bit MySQL on Intel(R) Xeon(R) CPUs? Will it be *IntelEM64T*?


Regards,

Sanjeev Sagar wrote:

Hello Everyone,

I would like to verify that which mysql 64 bit binary i need to use 
for PowerEdge 1950 ?


Will it be *IntelEM64T* ?

Following is the cpu info

Processor 1
Processor Brand: Intel(R) Xeon(R) CPU   E5310  @ 
1.60GHz

Processor Version  : Model 15 Stepping 11
Voltage: 1400 mV

Processor 2
Processor Brand: Intel(R) Xeon(R) CPU   E5310  @ 
1.60GHz

Processor Version  : Model 15 Stepping 11
Voltage: 1400 mV

I highly appreciate it. For 32 bit, i normally use the Linux generic 
x86. Not sure for mysql 64 bit?


Regards,
Sanjeev



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



which mysql 64 bit binary for PowerEdge 1950 ?

2009-10-30 Thread Sanjeev Sagar

Hello Everyone,

I would like to verify which MySQL 64-bit binary I need to use for a PowerEdge 1950.


Will it be *IntelEM64T* ?

Following is the cpu info

Processor 1
Processor Brand: Intel(R) Xeon(R) CPU   E5310  @ 1.60GHz
Processor Version  : Model 15 Stepping 11
Voltage: 1400 mV

Processor 2
Processor Brand: Intel(R) Xeon(R) CPU   E5310  @ 1.60GHz
Processor Version  : Model 15 Stepping 11
Voltage: 1400 mV

I'd highly appreciate any pointers. For 32-bit I normally use the generic Linux x86 build, but I'm not sure which one to use for 64-bit MySQL.
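
A minimal way to confirm the hardware and OS side (the Xeon E5310 is an Intel 64 / EM64T part, so the Linux x86_64 "IntelEM64T" build is the matching one; the commands below are standard Linux checks):

grep -qw lm /proc/cpuinfo && echo "CPU supports 64-bit (long mode)"
uname -m    # prints x86_64 only if a 64-bit kernel is installed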


Regards,
Sanjeev


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Problems with replication restarting

2004-12-20 Thread Sanjeev Sagar
First of all, MySQL replication does not lose its position simply from starting and stopping the slave, as long as you have all the relay and info files.

Secondly, CHANGE MASTER recreates the master.info file.

Third, one can implement a log_position table on the slave, inserting a record every minute with the details from the SHOW MASTER LOGS and SHOW SLAVE STATUS commands. This gives you an exact position every minute in your database, and one can run CHANGE MASTER at any time to work with that position.
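
A minimal sketch of the repositioning step, assuming the file and position were recorded by such a table (the values are placeholders):

STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = '<master_log_file recorded for that minute>',
    MASTER_LOG_POS  = <master_log_pos recorded for that minute>;
START SLAVE;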

I have never seen MySQL replication change position just from starting and stopping the slave. Which MySQL version and release, on what OS?



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 20, 2004 9:26 AM
To: [EMAIL PROTECTED]
Subject: RE: Problems with replication restarting

So this would imply that you cannot simply stop/start a slave server -
instead, I would need to write a wrapper script that stops the slave
using STOP SLAVE, and at next startup, read the master.info file to
find out where it left off, and then issue a CHANGE MASTER TO...
statement to continue on ?

That can certainly be done, but it seems strange that there's no mention
of such a big gotcha in the MySQL manual...

Many thanks for the advice,

-Mark

-Original Message-
From: Ian Sales [mailto:[EMAIL PROTECTED] 
Sent: 20 December 2004 16:42
To: Round, Mark - CALM Technical [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: Problems with replication restarting


[EMAIL PROTECTED] wrote:

However, when I restart the slave (through init scripts, or when
rebooting the server etc.), instead of continuing on from where it left
off, it appears to start again from the beginning. This is confirmed by
watching the value of Relay_Log_Pos from SHOW SLAVE STATUS\G.
  

- delete the master.info file in the data directory, and then use CHANGE

MASTER ... to set the required values before starting replication on the

slave.

- ian


-- 
+---+
| Ian Sales  Database Administrator |
|   |
| eBuyer  http://www.ebuyer.com |
+---+


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Load data question in cross database replication

2004-12-09 Thread Sanjeev Sagar
Thanks !

-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 08, 2004 3:18 AM
To: [EMAIL PROTECTED]
Subject: Re: Load data question in cross database replication

Hello.

It seems to be a bug:
  http://bugs.mysql.com/bug.php?id=6353


Sanjeev Sagar [EMAIL PROTECTED] wrote:
 Hello Gleb,
 
 My question was related to LOAD DATA INFILE, not LOAD DATA FROM MASTER.
 
 LOAD DATA INFILE work those slaves which are not using
 --replicate-rewrite-db. It do not work for those which are using this.
 
 Thanks for you reply.
 
 -Original Message-
 From: Gleb Paharenko [mailto:[EMAIL PROTECTED]
 Sent: Friday, December 03, 2004 4:14 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Load data question in cross database replication
 
 Hello.
 
 --replicate-rewrite-db is not taken into account while executing LOAD
 DATA
 FROM MASTER. See:
  http://dev.mysql.com/doc/mysql/en/LOAD_DATA_FROM_MASTER.html
 
 
 
 Sanjeev Sagar [EMAIL PROTECTED] wrote:
 
 
 --
 For technical support contracts, goto
 https://order.mysql.com/?ref=3Densita
 This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
 /_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com
 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Load data question in cross database replication

2004-12-06 Thread Sanjeev Sagar
Hello Gleb,

My question was related to LOAD DATA INFILE, not LOAD DATA FROM MASTER. 

LOAD DATA INFILE works on those slaves which are not using --replicate-rewrite-db. It does not work for those which are using it.

Thanks for your reply.

-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 03, 2004 4:14 AM
To: [EMAIL PROTECTED]
Subject: Re: Load data question in cross database replication

Hello.

--replicate-rewrite-db is not taken into account while executing LOAD
DATA
FROM MASTER. See:
  http://dev.mysql.com/doc/mysql/en/LOAD_DATA_FROM_MASTER.html



Sanjeev Sagar [EMAIL PROTECTED] wrote:


-- 
For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Master_log_pos in replication

2004-12-06 Thread Sanjeev Sagar
Hello All,

 

I am having a problem finding the exact position in the master binlog file, in a replication setup, for a point-in-time recovery process. Let me explain the problem in detail.

 

I have log_position table on all slaves, see the definition below

 

   Table: log_position

Create Table: CREATE TABLE `log_position` (

  `host` varchar(60) NOT NULL default '',

  `time_stamp` timestamp(14) NOT NULL,

  `log_file` varchar(32) default NULL,

  `log_pos` int(11) default NULL,

  `master_host` varchar(60) default NULL,

  `master_log_file` varchar(32) default NULL,

  `master_log_pos` int(11) default NULL,

  PRIMARY KEY  (`host`,`time_stamp`)

) TYPE=MyISAM

 

I am running a script every minute to insert a record into this table by
using the information from show slave status. 

 

my $mastersql = "SHOW MASTER STATUS";

my $slavesql  = "SHOW SLAVE STATUS";


my $masterinfo = $conn->Hash($mastersql) if (defined $conn);

my $slaveinfo  = $conn->Hash($slavesql)  if (defined $conn);


my $logfile       = $masterinfo->{File};

my $logpos        = $masterinfo->{Position};

my $masterhost    = $slaveinfo->{Master_Host};

my $masterlogfile = $slaveinfo->{Relay_Master_Log_File};

my $masterlogpos  = $slaveinfo->{Exec_master_log_pos};

 

 

Also I am taking log snapshot every hour to my backup server.

 

I am in a situation where every minute I get 3000 transactions in my
database, mostly inserts.

 

When I am doing a point-in-time recovery, I am not able to pin down the correct position in the master_log_file within that one-minute interval.

 

My question: is there any way to locate the position in the master_log_file within that one-minute interval? I do not have the option of running the script every 30 seconds. We are looking for something like a unique identifier for every record in the binlog file.
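
A minimal sketch of how the recorded positions might be used to narrow the gap, assuming the mysqlbinlog that ships with this 4.0 server accepts -j/--position to start reading at an offset (option names vary between releases):

mysqlbinlog -j <master_log_pos recorded at the start of the minute> <master_log_file> | less
# every event printed by mysqlbinlog is preceded by a header giving its own log position,
# so the statements inside the one-minute window can be walked through event by event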

 

Any idea or help will be highly appreciated.

 

Regards, 



Binlog question in replication setup

2004-12-02 Thread Sanjeev Sagar
Hello everyone,

 

I have a question on how the MySQL server writes to binlogs in a replication environment. My table type is MyISAM; the MySQL version is 4.0.21.

 

I have a replication farm. Let's suppose I run an ALTER TABLE statement on the central master and make a mistake in the table name. Replication aborts everywhere, saying that the table does not exist.

 

I need to know how MySQL writes to the binlog file.

 

Does it write before or after successfully executing or committing the statement? Why does an erroneous statement need to be handed over to the slave I/O thread and into the relay log?

 

Is there any control, such as a parameter in the option file or anything else, to write to the binary log only those statements which ran successfully on the master?

 

Appreciate it.



Load data question in cross database replication

2004-12-02 Thread Sanjeev Sagar
Hello Everyone,

 

I have a question on using the LOAD DATA command in a cross-database replication setup. The MySQL version is 4.0.21.

 

I have a replication farm where a few slaves have been set up as cross-database replication slaves using --replicate-rewrite-db.

 

When a LOAD DATA command gets executed on the central master, it replicates fine on those slaves which are not cross-database, but it aborts with an error saying the database does not exist on those slaves which are configured with replicate-rewrite-db.

 

My understanding is that the .ini file it generates for the LOAD DATA command has the database name hard-coded. Is there any way to make LOAD DATA execute regardless of the replicate-rewrite-db configuration? Any parameter or workaround?

 

Regards,

 



how to connect to MyDBPAL from MySQL

2004-11-23 Thread Sanjeev Sagar
I have seen mails talking about MydbPAL. I downloaded it, and I have a MySQL ODBC data source too, but I am not able to connect from MydbPAL at all. I have tested my ODBC data source and it works just fine. My machine runs Linux and Windows through VMware Workstation; I installed dbPAL on Windows and tried to connect to the database on my local Linux partition.

 

Inside MydbPAL, I click on Workshop, click on Object Browser, and select the data source name.
 

I still see the wait symbol in the toolbar. Also, when I click on db-test-connect, nothing happens. Am I missing anything? I checked the documentation, but it's not clear how to connect using ODBC.

Any help will be highly appreciated.

 

Regards,



RE: how to connect to MyDBPAL from MySQL

2004-11-23 Thread Sanjeev Sagar
Yes, ODBC works perfectly. I am using the same data source for MySQL Administrator and Query Browser too.

Anyway, if you are using it, can you go over the steps for a db connection?

1. Open dbPAL, click on Workshop.
2. Choose the data source by clicking ODBC from the list under MySQL.
3. What comes next after that?
4. I clicked on new user and made an entry; it just gave me an open-lock icon on the toolbar. How do I test my database connection? Db-test-connect is grayed out, not clickable.

No errors are reported in the err file.

Regards,


-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 23, 2004 3:32 PM
To: Sanjeev Sagar
Cc: [EMAIL PROTECTED]
Subject: Re: how to connect to MyDBPAL from MySQL

Are there any MySQL errors being logged? Did you confirm that the ODBC 
connection is working?

Sanjeev Sagar wrote:

I have seen mails talking about MydbPAL. I downloaded it and I have
MySQL ODBC data source too but not able to connect to MydbPal at all. I
have tested my odbc data source, it work just fine. My machine is
having
Linux and Windows through vmvare workstation. I installed dbpal on
windows and try to connect to database on my local linux based
partition.

 

Indise MydbPAL, I click on Workshop-click on Object Browser-select
the
data source name

 

I still see wait symbol in toolbar. Also when I click on db test
connect, nothing happen. Am I missing anything. I checked the
documentation but it's not that clear that how to connect by using
ODBC.

 

Any help will be highly appreciable. 

 

Regards,


  



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: how to connect to MyDBPAL from MySQL

2004-11-23 Thread Sanjeev Sagar
I guess I am getting close.

I dragged the ODBC connection line to the database icon in db-test-connection. I created a new user and dragged it to the user part of db-test-connection. Now the waiting tray has a go icon. I dragged the go icon to the console tray; nothing happened.

What is the next step to see whether the database connection is going through?

-Original Message-
From: Sanjeev Sagar [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 23, 2004 3:55 PM
To: Victor Pendleton
Cc: [EMAIL PROTECTED]
Subject: RE: how to connect to MyDBPAL from MySQL

YES ODBC work perfect. I am using same data source for MySQL
administrator and Query Browser too. 

Anyway, if you are using can you go over the steps for db connection

1. Open dbPAL, Click on Workshop
2. Choose data source by clicking ODBC from the list under MySQL 
3. What next after that??
4. I clicked on new user and gave entry, it just gave me one a open lock
icon on toolbar. How do I test my database connection. Db-test-connect
is grade out, not clickable.

No error reported in err file

Regards,


-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 23, 2004 3:32 PM
To: Sanjeev Sagar
Cc: [EMAIL PROTECTED]
Subject: Re: how to connect to MyDBPAL from MySQL

Are there any MySQL errors being logged? Did you confirm that the ODBC 
connection is working?

Sanjeev Sagar wrote:

I have seen mails talking about MydbPAL. I downloaded it and I have
MySQL ODBC data source too but not able to connect to MydbPal at all. I
have tested my odbc data source, it work just fine. My machine is
having
Linux and Windows through vmvare workstation. I installed dbpal on
windows and try to connect to database on my local linux based
partition.

 

Indise MydbPAL, I click on Workshop-click on Object Browser-select
the
data source name

 

I still see wait symbol in toolbar. Also when I click on db test
connect, nothing happen. Am I missing anything. I checked the
documentation but it's not that clear that how to connect by using
ODBC.

 

Any help will be highly appreciable. 

 

Regards,


  



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Change master on replication

2004-10-22 Thread Sanjeev Sagar


Master A has Slave 1 and Slave 2
Master B has Slave 3 and Slave 4

Master A is ahead of Master B, and Master A goes down, we want to 
repoint Slave 3 and Slave 4 to Master B.  Since the data is large, we 
don't want to do a fresh resync of Slave 3 and 4 from scratch.  And A 
and B may not have been updated at the same time (or in same order), so 
their binary logs won't be in same order (so it's not as simple as 
saying go to the last position you were in when your Master died on your 
new master and continue from there).

 I am not sure I understand what you mean by repointing Slave 3 and Slave 4 when Master B is already behind Master A. Is that a typo?

Anyway, you can implement a heartbeat mechanism at the master level and run it every minute or every 30 seconds, whatever you like. You can take a look at the replication chapter from Jeremy's book:

http://dev.mysql.com/books/hpmysql-excerpts/ch07.html   
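
A minimal sketch of the idea (the table name is just illustrative): the master updates a heartbeat row on a schedule, and each slave compares the replicated timestamp with its own clock to judge how far behind it is.

CREATE TABLE heartbeat (id INT NOT NULL PRIMARY KEY, ts TIMESTAMP);
REPLACE INTO heartbeat (id, ts) VALUES (1, NOW());   -- run from cron on the master every 30-60 seconds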

Regards,


RE: Ignore a single query in replication

2004-10-21 Thread Sanjeev Sagar
Try SET SQL_LOG_BIN=0 before you run your queries on the master. This is valid for that connection only.
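
A minimal sketch of the pattern (the table and query are placeholders for the statistical job; note that changing SQL_LOG_BIN requires the SUPER privilege):

SET SQL_LOG_BIN = 0;
-- statements run on this connection are now skipped by the binary log, so they never reach the slaves
INSERT INTO stats_summary SELECT ... ;
SET SQL_LOG_BIN = 1;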


-Original Message-
From: Gary Richardson [mailto:[EMAIL PROTECTED]
Sent: Thu 10/21/2004 11:24 AM
To: Mysql General (E-mail)
Subject: Ignore a single query in replication
 
Hey,

Is there a way to tell the slave to not execute a query without
ignoring tables or databases?

There are a bunch of queries that happen on the master for statistical
purposes that don't use temp tables and generate large amounts of
data. These queries don't need to run on the slaves and in fact slow
it down quite a bit.

I've tried hunting around the online docs, but I can't seem to find
anything. For some reason I thought there was some sort of comment
that I could put infront of my query to accomplish this.

Thanks.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




how to use mysql client source filename command through DBI/DBD

2004-10-18 Thread Sanjeev Sagar
Hello All,
 
I am trying to create a Perl DBI/DBD script for an initial database build. My input is an extract file, which is a mysqldump result file created with the --opt and -B options.

I am using DBIx::DWIW and am able to open a database handle successfully. I have code like:
 
my $dropsql = "DROP DATABASE $ARGV[1]";
my $loadsql = "source /tmp/extract-file-name";

print "Running database load...\n";

$conn->Execute($dropsql)
    or die "Error running in $dropsql...\n";

$conn->Execute($loadsql)
    or die "Error in running $loadsql...\n";
 
 
It appears that $loadsql is not going through. It is able to drop the database, but it does not run the source command at all. I have also tried using \. extract-file-name; still no luck.
 
It looks to me like I am not doing this the right way. Any help will be highly appreciated.
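
A minimal sketch of one alternative, assuming the goal is simply to replay the dump file: source (and \.) is a mysql command-line client built-in, not a server-side SQL statement, so it cannot be executed through a DBI/DWIW handle; shelling out to the client works (the user and password variables are placeholders):

# the -B dump already contains CREATE DATABASE / USE statements, so no database
# argument is needed on the command line
my $rc = system("mysql -u$user -p$password < /tmp/extract-file-name");
die "Error loading dump file\n" if $rc != 0;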
 
Regards,
 
 


empty user catalog entry for user column

2004-09-22 Thread Sanjeev Sagar
Hello All,

For a new build I am seeing that mysql_install_db is putting entries in the user table where user='' in addition to the correct entries for root. See below:

mysql> select * from user where user='';

Two rows come back: one with Host = 'localhost' and one with Host = 'testd4.a.com', both with User = '' (empty), every privilege column set to 'N', and all the max_* columns set to 0.

This is creating a problem when connecting with -h full-domain-name; for localhost it works.

Is there any reason that MySQL creates these entries in addition to the correct entries for root?

Any explanation???
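
For what it's worth, mysql_install_db creates these anonymous-user accounts by default, and they are a common cause of host-name connection surprises; a minimal cleanup, if they are not wanted:

DELETE FROM mysql.user WHERE User = '';
FLUSH PRIVILEGES;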

Regards,



Installing DBIx::DWIW on CPAN

2004-09-15 Thread Sanjeev Sagar

Hello All,

I am trying to install DBIx::DWIW but giving me following error.

No such file `DBIx-DWIW-0.41.tar.gz'

I am trying to install from CPAN 

cpan install DBIx::DWIW

Could not fetch authors/id/J/JZ/JZAWODNY/DBIx-DWIW-0.41.tar.gz
Giving up on '/root/.cpan/sources/authors/id/J/JZ/JZAWODNY/DBIx-DWIW-0.41.tar.gz'
Note: Current database in memory was generated on Wed, 15 Sep 2004 11:08:29 GMT
 

Regards,


RE: Installing DBIx::DWIW on CPAN

2004-09-15 Thread Sanjeev Sagar
I am trying on cpan.yahoo.com and it's giving me following error

The requested URL /authors/id/J/JZ/JZAWODNY/DBIx-DWIW-0.41.tar.gz was not found on 
this server.



-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]
Sent: Wed 9/15/2004 1:05 PM
To: Sanjeev Sagar
Cc: [EMAIL PROTECTED]
Subject: Re: Installing DBIx::DWIW on CPAN
 
On Wed, Sep 15, 2004 at 11:27:55AM -0700, Sanjeev Sagar wrote:
 
 Hello All,
 
 I am trying to install DBIx::DWIW but giving me following error.
 
 No such file `DBIx-DWIW-0.41.tar.gz'
 
 I am trying to install from CPAN 
 
 cpan install DBIx::DWIW
 
 Could not fetch authors/id/J/JZ/JZAWODNY/DBIx-DWIW-0.41.tar.gz
 Giving up on '/root/.cpan/sources/authors/id/J/JZ/JZAWODNY/DBIx-DWIW-0.41.tar.gz'
 Note: Current database in memory was generated on Wed, 15 Sep 2004 11:08:29 GMT

Weird.

Perhaps your CPAN mirror has an index that's out of sync with the
actual data?  I've seen that happen before (rsync isn't atomic in that
way).  And 0.41 is fairly new.

I'd try another mirror.  It's current on cpan.yahoo.com, for example.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/



RE: Installing DBIx::DWIW on CPAN

2004-09-15 Thread Sanjeev Sagar
Thanks !

I am trying to run make test, but it's giving me the following error:

[EMAIL PROTECTED] DBIx-DWIW-0.41]# make test
PERL_DL_NONLAZY=1 /usr/bin/perl -Iblib/lib -Iblib/arch test.pl
1..1
# Running under perl version 5.008 for linux
# Current time local: Wed Sep 15 13:42:27 2004
# Current time GMT:   Wed Sep 15 20:42:27 2004
# Using Test.pm version 1.23
Can't locate Time/HiRes.pm in @INC (@INC contains: blib/lib blib/arch 
/usr/lib/perl5/5.8.0/i386-linux-thread-multi /usr/lib/perl5/5.8.0 
/usr/lib/perl5/site_perl/5.8.0/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.0 
/usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl/5.8.0/i386-linux-thread-multi 
/usr/lib/perl5/vendor_perl/5.8.0 /usr/lib/perl5/vendor_perl 
/usr/lib/perl5/5.8.0/i386-linux-thread-multi /usr/lib/perl5/5.8.0 .) at 
blib/lib/DBIx/DWIW.pm line 12.
BEGIN failed--compilation aborted at blib/lib/DBIx/DWIW.pm line 12.
Compilation failed in require at test.pl line 10.
BEGIN failed--compilation aborted at test.pl line 10.
make: *** [test_dynamic] Error 2

Any help will be highly appreciated.
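
The failure is a missing Time::HiRes dependency rather than DWIW itself; a minimal fix before re-running make test:

perl -MCPAN -e 'install Time::HiRes'
# (or install the distribution's perl-Time-HiRes package, if one is available)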





-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]
Sent: Wed 9/15/2004 1:18 PM
To: Sanjeev Sagar
Cc: [EMAIL PROTECTED]
Subject: Re: Installing DBIx::DWIW on CPAN
 
On Wed, Sep 15, 2004 at 01:10:35PM -0700, Sanjeev Sagar wrote:
 I am trying on cpan.yahoo.com and it's giving me following error
 
 The requested URL /authors/id/J/JZ/JZAWODNY/DBIx-DWIW-0.41.tar.gz was not found on 
 this server.

Hmm.

wget http://cpan.yahoo.com/authors/id/J/JZ/JZAWODNY/DBIx-DWIW-0.41.tar.gz

Works for me.

How about you?

I checked and the mirrors on both machines behind cpan.yahoo.com are
up-to-date.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/



Local Master replication issue

2004-09-10 Thread Sanjeev Sagar
Hello All,

I am seeing a small problem in ring replication where one slave is acting as a local master. See below:

M - Super Master
S1/LM1 - Slave of super Master and act as Local Master for S2
S2 - slave of LM1

I ran one INSERT on M; it showed up on S1/LM1, but it did not show up on S2.

What I can see is that the S1 I/O thread brings that transaction into the relay log on S1 and applies it, but it is not treated as a write on S1/LM1, so the binlog has no record of it. Since the binlog does not have it, it did not replicate to S2.

Am I missing anything here?

Per our requirement, that transaction should also show up on S2.

The obvious thought is to make S2 a direct slave of M, but that is not acceptable because things are a bit complicated here.

Is there any specific configuration to make a slave act as a local master for another slave?

Any help will be highly appreciated.

Regards,



RE: Local Master replication issue

2004-09-10 Thread Sanjeev Sagar

--log-slave-updates did the job.
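
A minimal sketch of the relevant part of the intermediate master's (S1/LM1) option file (server-id and log name are placeholders): with log-slave-updates, statements that S1 applies from its relay log are also written to S1's own binary log, which is what S2 reads.

[mysqld]
server-id        = 2
log-bin          = s1-bin
log-slave-updates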

Regards,

-Original Message-
From: Sanjeev Sagar [mailto:[EMAIL PROTECTED]
Sent: Fri 9/10/2004 3:36 PM
To: [EMAIL PROTECTED]
Subject: Local Master replication issue
 
Hello All,

I am seeing a small problem in Ring replication where one slave is acting as Local 
Master. See below

M - Super Master
S1/LM1 - Slave of super Master and act as Local Master for S2
S2 - slave of LM1

I ran one Insert on M, it showd up on S1/LM1 but it did not showed up in S2.

What I can see that S1 I/O thread bring that transaction in relay log on S1 and apply 
it but it did not consider as write on s1/LM1 resulting that binlog do not have record 
of it. Since binlog do not hast it, so it did not replicate to S2.

Am I missing anything here?

As per our requirement that transaction should also showed up in S2 too?

It's obivious to think that make S2 as direct slave of M but it is not accepted 
because things r bit complicated here.

Is there any specific configuration thing to acheive a slave as local master for 
another slave.

Any help will be highly appreciable.

Regards,




RE: Multiple MysQL servers with different IP address on same machine

2004-09-09 Thread Sanjeev Sagar
Actually, the mysqld parameter bind-address works great for running different servers on different IP addresses with the same port on the same machine. Clients can use -h hostname to connect to a specific MySQL server.
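
A minimal sketch of the option-file change (the IP addresses are the ones from the /etc/hosts example below; note that the Unix socket paths still have to differ, since they are filesystem paths, while the TCP port can be shared across addresses):

[mysqld1]
server-id    = 1
port         = 3306
bind-address = 127.0.0.100
socket       = /tmp/mysql.sock1
datadir      = data1

[mysqld2]
server-id    = 2
port         = 3306
bind-address = 127.0.0.101
socket       = /tmp/mysql.sock2
datadir      = data2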

Thanks !


-Original Message-
From: Peter Lovatt [mailto:[EMAIL PROTECTED]
Sent: Thu 9/9/2004 9:51 AM
To: [EMAIL PROTECTED]; Sanjeev Sagar
Cc: [EMAIL PROTECTED]; Sanjeev Sagar
Subject: RE: Multiple MysQL servers with different IP address on same machine
 
Hi

We have a machine with 2 IP addresses and mysql 3.23 on one and 4.10 on the
other. Both using port 3306

One instance listens on localhost, which maps to 127.0.0.1, and also on one
of the public IP addreses and the other listens to the other IP address.

I use the IP address in the connection string and so far it  works fine. I
am in the process of setting up the server, and only have phpmyadmin
installed (twice - one installation per mysql server) but that works
correctly, so I expect everything  else will.

HTH

Peter




 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: 09 September 2004 14:53
 To: Sanjeev Sagar
 Cc: [EMAIL PROTECTED]; Sanjeev Sagar
 Subject: Re: Multiple MysQL servers with different IP address on same
 machine


 I need to add to my previous post -- You asked about using the SAME
 operating system socket as well as using separate addresses with the same
 port number (different IP sockets)

 My answer to that is NOT ON YOUR LIFE. Think of the chaos. If one client
 tried to connect to an OS socket that 3 different servers were listening
 to... Which one gets the connection? Which one validates the client? If
 for some reason the client *were* able to validate against all three
 servers at the same time, how could it sort out the 3 different responses
 to a query?

 NO each server must have it's own socket. It doesn't matter if we are
 discussing IP sockets or OS sockets the answer is still the same.

 Sorry for the previous oversight,

 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine

 Sanjeev Sagar [EMAIL PROTECTED] wrote on 09/08/2004
 05:04:38 PM:

 
  Hello All,
 
  MySQL : Standar Binary 4.0.20
  O/S : Red Hat Linux release 9 (Shrike)
  Linux  2.4.20-31.9smp #1 SMP Tue Apr 13 17:40:10 EDT 2004 i686 i686
  i386 GNU/Linux
 
  I already have setup of Three Multiple MySQL servers listening on
  different ports and sockets on same machine
 
  Option File:
 
  [mysqld1]
  server-id =1
  port=3306
  socket=/tmp/mysql.sock
  datadir=data1
 
  [mysqld2]
  server-id=2
  port=3307
  socket=/tmp/mysql.sock2
  datadir=data2
 
  [mysqld3]
  server-id=3
  port=3308
  socket=/tmp/mysql.sock3
  datadir=data3
 
  All three servers started with no problem. Question is if I don't
  want to use different ports or scokets, can I use the different I.P.
  Addresses on same machine for three servers with same default port or
 socket.
 
  /etc/hosts file
  ===
 
  127.0.0.100  s1
  127.0.0.101  s2
  127.0.0.102   s3
 
 
  Can I start three servers on  same port (3306), same socket
  (/tmp/mysql.sock) on same machine by using above IP addresses? If
  yes then HOW?
 
  Can I use the replication in b/w them? keeping datadir and log-bin
  directory differtent is not a problem.
 
  Appreciate it.
 






Multiple MysQL servers with different IP address on same machine

2004-09-08 Thread Sanjeev Sagar

Hello All,

MySQL : Standar Binary 4.0.20
O/S : Red Hat Linux release 9 (Shrike)
Linux  2.4.20-31.9smp #1 SMP Tue Apr 13 17:40:10 EDT 2004 i686 i686 i386 GNU/Linux

I already have setup of Three Multiple MySQL servers listening on different ports and 
sockets on same machine

Option File:

[mysqld1]
server-id =1
port=3306
socket=/tmp/mysql.sock
datadir=data1

[mysqld2]
server-id=2
port=3307
socket=/tmp/mysql.sock2
datadir=data2

[mysqld3]
server-id=3
port=3308
socket=/tmp/mysql.sock3
datadir=data3

All three servers started with no problem. The question is: if I don't want to use different ports or sockets, can I use different IP addresses on the same machine for the three servers with the same default port or socket?

/etc/hosts file
===

127.0.0.100  s1
127.0.0.101  s2
127.0.0.102   s3
 

Can I start three servers on the same port (3306) and the same socket (/tmp/mysql.sock) on the same machine by using the above IP addresses? If yes, then how?

Can I use replication between them? Keeping the datadir and log-bin directories different is not a problem.

Appreciate it.