Michael,
So, as you said, you want the upstream to encrypt data before sending it to
HDFS?
Regards
Abhishek
On Feb 15, 2013, at 8:47 AM, Michael Segel michael_se...@hotmail.com wrote:
Simple, have your app encrypt the field prior to writing to HDFS.
Also consider HBase.
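A file-level variant of the same idea, as a minimal sketch using OpenSSL before the upload (file names and the keyfile are hypothetical; Michael's suggestion is field-level, inside the app):
$ openssl enc -aes-256-cbc -salt -in records.csv -out records.csv.enc -pass file:./keyfile
$ hadoop fs -put records.csv.enc /data/encrypted/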
On Feb 14,
Marcos,
Is encryption from upstream what you are talking about?
Regards
Abhishek
On Feb 15, 2013, at 1:28 PM, Marcos Ortiz Valmaseda mlor...@uci.cu wrote:
Regards, abhishek.
I agree with Michael. You can encrypt your incoming data from your
application.
I recommend using HBase.
For the longer term, Project Rhino (https://github.com/intel-hadoop/project-rhino) is
an attempt to add additional security features to open-source Hadoop,
as opposed to several companies that have advertised end-to-end
encryption (for a price).
On Fri, Mar 1, 2013 at 9:32 AM, abhishek
Hey there,
I've updated the Ansible install/configure scripts for Hadoop, right
now the repo points to CDH4 but you can change that easily enough.
It needs some cleanup over the next few days, but it should help
anyone who's had to do a lot of workarounds to get the 12.04 scripts
to work.
Hi,
we are writing our fsimage and edits files on the namenode and secondary
namenode, and additionally on an NFS share.
In these folders we found a lot of fsimage.ckpt_0... files; the oldest is
from 9 Aug 2012.
As far as I know these files should be deleted after the secondary
Any help...
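If the checkpoints are failing partway, the SecondaryNameNode log usually says why; a quick look (the log path is an assumption, it varies by distribution):
$ grep -i checkpoint /var/log/hadoop/*secondarynamenode*.log | tail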
On Fri, Mar 1, 2013 at 12:06 PM, samir das mohapatra
samir.help...@gmail.com wrote:
Hi All,
I am facing one problem: how to specify the schema name before the
table name while executing the sqoop import statement.
$ sqoop import --connect jdbc:sap://host:port/db_name --driver
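One common approach is a schema-qualified --table, which many JDBC drivers accept; a minimal sketch (driver class, schema, and table names are placeholders):
$ sqoop import --connect jdbc:sap://host:port/db_name --driver <jdbc.driver.Class> --table MYSCHEMA.MYTABLE --target-dir /tmp/mytable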
Thanks Harsh, but you didn't answer this before. I will try to move the old
directory to the new location and restart services. Hope it will not
lose any data.
old Location
$ sudo ls -l /var/lib/hadoop-hdfs/cache/hdfs/dfs/
total 12
drwx--. 3 hdfs hdfs 4096 Dec 19 02:37 data
drwxr-xr-x. 3
Hi Vikas:
Have you asked Google first? I searched Google using your keywords, and
found the following link.
It may be helpful to you.
yours,
Ling Kun
On Fri, Mar 1, 2013 at 4:42 PM, Vikas Jadhav vikascjadha...@gmail.com wrote:
Hello,
I want to build Hadoop code downloaded from
Use HBase Export and Import for migration of data from one cluster to
another.
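A minimal sketch, assuming a table named mytable and writable HDFS paths on both clusters (namenode addresses are placeholders):
$ hbase org.apache.hadoop.hbase.mapreduce.Export mytable /export/mytable   (on cluster-1)
$ hadoop distcp hdfs://cluster-1-nn:8020/export/mytable hdfs://cluster-2-nn:8020/export/mytable
$ hbase org.apache.hadoop.hbase.mapreduce.Import mytable /export/mytable   (on cluster-2)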
On Fri, Mar 1, 2013 at 2:36 PM, samir das mohapatra samir.help...@gmail.com
wrote:
Hi All,
Problem Statement:
1) We have two clusters, for example:
i) cluster-1
ii) cluster-2
There
Try this
./sqoop import --connect jdbc:mysql://localhost/my --username user
--password 1234 --query 'select * from table where id=5 AND $CONDITIONS'
--split-by table.id --target-dir /dir
you must specify --split-by and --target-dir
On Fri, Mar 1, 2013 at 12:32 PM, samir das mohapatra
Your job.xml file is kept for a set period of time.
I believe the others are automatically removed.
You can easily access the job.xml file from the JT webpage.
On Mar 1, 2013, at 4:14 AM, Ling Kun lkun.e...@gmail.com wrote:
Dear all,
In order to know more about the files creation and
Ling, do you have Hadoop: The Definitive Guide close-by?
I think I remember it says something about keeping the intermediate files.
Take a look at keep.task.files.pattern... It might help you to keep
some of the files you are looking for? Maybe not all... Or even maybe
not any.
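A sketch of passing that property per job, assuming your driver uses ToolRunner/GenericOptionsParser (jar, class, pattern, and paths are hypothetical):
$ hadoop jar myjob.jar MyDriver -Dkeep.task.files.pattern=".*_m_000000_.*" input output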
JM
2013/3/1
Hello,
The following patch was released on 25th February, 2013, Monday:
https://issues.apache.org/jira/browse/HIVE-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#issue-tabs
Although it says resolution: unresolved, I was wondering if it is
available in the
Also required: --query or --table.
If your table has a primary key, --split-by is optional.
Detailed:
Have a look at sqoop documentation here http://sqoop.apache.org/docs/.
v1.4.2 user guide: http://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html
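So a minimal import of a table that has a primary key can be as short as this (connection string and names are hypothetical):
$ sqoop import --connect jdbc:mysql://host/db --username user --password pass --table orders --target-dir /data/orders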
Thanks,
Abhijeet
On Fri, Mar 1, 2013 at 6:36
There are two ways to import data with Sqoop.
Table dump (without select statements)
Example:
sqoop import --connect jdbc:mysql://host/database --username name
--password '*' --table table_name -m 1 --fields-terminated-by ','
(choose any delimiter of your choice)
the -m 1 will give you one input
p.s. Missed the quotes:
sqoop import --connect jdbc:mysql://host/database --username name
--password '*' --fields-terminated-by ','
--query "select field1,field2... from table_name WHERE \$CONDITIONS"
--target-dir <name of directory, if you want one> --split-by field1
-m 3
From:
Hi,
Please unsubscribe me from this mailing list.
Thanks and regards !
Dude, use this:
user-unsubscr...@hadoop.apache.org
Fabio
2013/3/1 Kaliyug Antagonist kaliyugantagon...@gmail.com
Hi,
Please unsubscribe me from this mailing list.
Thanks and regards !
Err.. sorry for that wrong mail!
Done, thanks!
On Fri, Mar 1, 2013 at 9:12 PM, Fabio Pitzolu fabio.pitz...@gr-ci.com wrote:
Dude, use this:
user-unsubscr...@hadoop.apache.org
Fabio
2013/3/1 Kaliyug Antagonist kaliyugantagon...@gmail.com
Hi,
Please unsubscribe me from this
Mohit,
Take a look at this article:
http://www.kernelhardware.org/how-should-run-fsck-linux-file-system/
From: Mohit Vadhera [mailto:project.linux.p...@gmail.com]
Sent: Friday, March 01, 2013 1:52 PM
To: user@hadoop.apache.org
Subject: hadoop filesystem corrupt
Hi,
While moving the data my
Hi Mohit,
Is your replication factor really set up to 1?
Default replication factor: 1
Also, can you look into your data directories and ensure you always
have the right structure and all the related META files?
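Two quick checks, as a sketch (the data directory path is hypothetical; use your dfs.data.dir):
$ hdfs getconf -confKey dfs.replication
$ ls -R /path/to/dfs/data/current | head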
JM
2013/3/1 Mohit Vadhera project.linux.p...@gmail.com:
Hi,
While moving the
Hello,
Current Design: I have a Java object MyObjectA. MyObjectA goes through
three processors (jars) that are run in sequence and do a lot of processing
to beef up A with tons of additional stuff (think ETL) and the final result
is MyObjectD (note: MyObjectD is really A with more fields if you
A good place to start is http://flume.apache.org/FlumeUserGuide.html
On Fri, Mar 1, 2013 at 11:16 AM, samir das mohapatra
samir.help...@gmail.com wrote:
Hi All,
I am planning to use Flume in one of the POC projects. I am new to
Flume.
Do you have any supporting doc/link/example from where I
Hi Samir,
I may be alone here, but I would prefer you not use "urgent" when
asking for free help from a mailing list.
My recommendation is that if this is really urgent and you need
instant support for your Hadoop installation, you consider
getting a proper support contract to help you when
On 03/02/2013 12:46 AM, samir das mohapatra wrote:
Hi All,
I am planning to use Flume in one of the POC projects. I am new to
Flume.
Do you have any supporting doc/link/example from where I will get all
the context ASAP?
Regards,
samir.
Start here
Please unsubscribe me. I tried user-unsubscr...@hadoop.apache.org, got
the link, and confirmed it. But I keep receiving mail. What should I
do? Thanks.
Zhu, Guojun
Financial Model Dev Sr
571-3824370
guojun_...@freddiemac.com
Financial Engineering
Freddie Mac
Kaliyug Antagonist
Hi,
I wanna start using Hadoop. I have Hadoop clusters set up by someone and I have
to try my hands on it. How do I get going with running any kind of task? Any
suggestions?
Also, before running any task, what configuration should I make sure is set up
properly? I have not installed it myself
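A quick smoke test once you can reach the cluster (the examples jar path varies by distribution and version):
$ hadoop fs -ls /
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100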
On 03/02/2013 01:57 AM, Shah, Rahul1 wrote:
Hi,
I wanna start using Hadoop. I have Hadoop clusters set up by someone
and I have to try my hands on it. How do I get going with running any
kind of task? Any suggestions?
Also, before running any task, what configuration should I make sure are
Hi,
I solved this problem by running the command hadoop fsck / -delete. As I
wrote in my email, partial data has been moved and the remaining is not
moved. Can I merge the data again? Below are the directories. How can I be sure
the Hadoop cluster is running fine? Is there any way to verify?
# ll
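For verification, two standard health checks (run as the hdfs user if permissions require):
$ hadoop fsck / -files -blocks
$ hadoop dfsadmin -report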
Regards, Chengi
Intel is working on a battle-tested Hadoop distribution, with a marked
focus on security enhancements. You can see it here:
https://github.com/intel-hadoop/project-rhino/
Best wishes
On 03/01/2013 04:47 PM, Chengi Liu wrote:
Hi,
I am curious. At this Strata, Intel made
When I try this, I get an error:
cat: Unable to write to output stream.
Is this a permissions issue?
How do I resolve this?
Thanks
On Wed, Feb 20, 2013 at 12:21 PM, Harsh J ha...@cloudera.com wrote:
No problem JM, I was confused as well.
AFAIK, there's no shell utility that can let you
Though it copies, it gives this error?
On Fri, Mar 1, 2013 at 3:21 PM, jamal sasha jamalsha...@gmail.com wrote:
When I try this, I get an error:
cat: Unable to write to output stream.
Is this a permissions issue?
How do I resolve this?
Thanks
On Wed, Feb 20, 2013 at 12:21 PM, Harsh
Yes, just ignore this log.
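For context, this message typically shows up when the downstream consumer closes the pipe before the whole stream is written, e.g. (path hypothetical):
$ hadoop fs -cat /data/big-file.txt | head -5
cat: Unable to write to output stream.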
On Mar 2, 2013 7:27 AM, jamal sasha jamalsha...@gmail.com wrote:
Though it copies, it gives this error?
On Fri, Mar 1, 2013 at 3:21 PM, jamal sasha jamalsha...@gmail.com wrote:
When I try this, I get an error:
cat: Unable to write to output stream.
Are
Hi Vikas
I use this link:
http://wiki.apache.org/hadoop/EclipseEnvironment
Things go well, except for the Maven part: it needs to download the
dependencies from the repository, and it gets stuck during the download
because some mirror URLs are unavailable,
so you need to change them to available ones.
Hi Mohit
Your fsOwner hdfs should have permission to access
/mnt/san1/hdfs/cache/hdfs/dfs/name.
So please check the permissions of /mnt/ and its sub-directories on the OS;
they all need to be readable and writable.
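A sketch of the checks and fixes, assuming the default hdfs user and group (adjust to your setup):
$ ls -ld /mnt /mnt/san1 /mnt/san1/hdfs/cache/hdfs/dfs/name
$ sudo chown -R hdfs:hdfs /mnt/san1/hdfs
$ sudo chmod 700 /mnt/san1/hdfs/cache/hdfs/dfs/name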
regards
2013/2/28 Mohit Vadhera project.linux.p...@gmail.com
Please find
Hi Dhanasekaran
If you have deleted the user directory on HDFS,
please check using this:
hdfs dfs -ls -R /
regards
2013/2/27 Dhanasekaran Anbalagan bugcy...@gmail.com
Hi Guys,
I am facing an error in my name-node.
2013-02-27 07:21:58,207 ERROR
Hi Patai
I found a similar explanation in the Google MapReduce publication:
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/zh-CN//archive/mapreduce-osdi04.pdf
Please refer to chapter 3.6, Backup Tasks.
Hope this is helpful
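In Hadoop, backup tasks correspond to speculative execution; a sketch of enabling it per job with the MR1-era property names (jar, class, and paths are hypothetical):
$ hadoop jar myjob.jar MyDriver -Dmapred.map.tasks.speculative.execution=true -Dmapred.reduce.tasks.speculative.execution=true input output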
regards
2013/3/1
A few basic questions:
1) Is the rate-limiting step the Java processing or storage in Accumulo?
Hadoop may not be able to speed up a database which is not designed to work
in a distributed manner.
2) Can ObjectD or any intermediate objects be serialized, possibly to XML,
and efficiently
Samir,
It was pointed out by another member in your earlier post but here it is
again. The error returned is sorta clear enough:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=hadoop, access=WRITE,
inode=/user/dasmohap/samir_tmp:dasmohap:dasmohap:drwxr-xr-x
1. Your
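A typical fix, run as the HDFS superuser, using the user and inode from the error above:
$ sudo -u hdfs hadoop fs -chown hadoop /user/dasmohap/samir_tmp
(or widen the mode with hadoop fs -chmod if ownership should stay as-is)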