Hi Markus,

There are just three columns, all of them varchars (lengths 40, 2, and 5),
nothing complicated at all. It's a simple InnoDB table with no keys or
indexes.
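
Roughly, the DDL is equivalent to the following (rsid is the only real
column name, taken from the --split-by in my earlier command; the other
column names and the length-to-column mapping are placeholders):

CREATE TABLE `ci_84adea33-9194-4753-925f-529a87656048` (
  rsid  VARCHAR(40),  -- split column used in the import
  col_b VARCHAR(2),   -- placeholder name
  col_c VARCHAR(5)    -- placeholder name
) ENGINE=InnoDB;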

Best,
Mark

On Fri, Aug 5, 2016 at 7:55 PM, Markus Kemper <[email protected]> wrote:

> Hello Mark,
>
> Are you able to share your MySQL table schema?
>
>
> Markus Kemper
> Customer Operations Engineer
>
>
> On Thu, Aug 4, 2016 at 4:11 PM, Mark Wagoner <[email protected]>
> wrote:
>
>> Hi Markus,
>>
>> I hadn't tried that yet, but I just tested it and had no luck; I get the
>> same "not a specific class" error for char.
>>
>> -Mark
>>
>> On Thu, Aug 4, 2016 at 5:02 PM, Markus Kemper <[email protected]>
>> wrote:
>>
>>> Hello Mark,
>>>
>>> I will try to test this, but am curious if you tried:
>>>
>>> sqoop import --connect jdbc:mysql://DATABASE_ENDPOINT --query "select *
>>> from ci_84adea33-9194-4753-925f-529a87656048 where \$CONDITIONS"
>>> --as-parquetfile --class-name
>>> mydata1 --username USERNAME -P --map-column-java <CHAR_COL>=String
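>>>
>>> If more than one column needs the override, --map-column-java accepts a
>>> comma-separated list of mappings (the column names below are
>>> placeholders), for example:
>>>
>>> --map-column-java <CHAR_COL1>=String,<CHAR_COL2>=String,<CHAR_COL3>=String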
>>>
>>>
>>> Markus Kemper
>>> Customer Operations Engineer
>>>
>>>
>>> On Thu, Aug 4, 2016 at 3:48 PM, Mark Wagoner <[email protected]>
>>> wrote:
>>>
>>>> Hi Markus,
>>>>
>>>> Thanks for the feedback.
>>>>
>>>> I was able to use a slight variation of your suggestion to get past the
>>>> table name issue, but then hit a new error, apparently related to the
>>>> Java class used for the column type I am importing (varchar). It is
>>>> still unresolved as of today
>>>> (https://issues.apache.org/jira/browse/SQOOP-2408), so I'll likely
>>>> import in another format and then convert until this bug is fixed.
>>>>
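>>>> In the meantime, the fallback would look something like this (an
>>>> untested sketch; only the Avro output directory is new, and its name is
>>>> a placeholder):
>>>>
>>>> sqoop import --connect jdbc:mysql://DATABASE_ENDPOINT --query 'select *
>>>> from `ci_84adea33-9194-4753-925f-529a87656048` t1 where $CONDITIONS'
>>>> --class-name mydata0 --as-avrodatafile --target-dir
>>>> /home/hadoop/mydata0_avro --split-by t1.rsid --username USERNAME -P
>>>>
>>>> For reference, here is the failing Parquet run and its output:
>>>>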
>>>> [hadoop@ip-172-31-25-10 ~]$ sqoop import --connect
>>>> jdbc:mysql://DATABASE_ENDPOINT --query 'select * from
>>>> `ci_84adea33-9194-4753-925f-529a87656048` t1 where $CONDITIONS'
>>>> --class-name mydata0 --as-parquetfile --target-dir /home/hadoop/mydata0
>>>> --split-by t1.rsid --username USERNAME -P
>>>> Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports
>>>> will fail.
>>>> Please set $ACCUMULO_HOME to the root of your Accumulo installation.
>>>> 16/08/04 19:14:00 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
>>>> Enter password:
>>>> 16/08/04 19:14:08 INFO manager.MySQLManager: Preparing to use a MySQL
>>>> streaming resultset.
>>>> 16/08/04 19:14:08 INFO tool.CodeGenTool: Beginning code generation
>>>> 16/08/04 19:14:08 INFO manager.SqlManager: Executing SQL statement:
>>>> select * from `ci_84adea33-9194-4753-925f-529a87656048` t1 where  (1 = 0)
>>>> 16/08/04 19:14:08 INFO manager.SqlManager: Executing SQL statement:
>>>> select * from `ci_84adea33-9194-4753-925f-529a87656048` t1 where  (1 = 0)
>>>> 16/08/04 19:14:08 INFO manager.SqlManager: Executing SQL statement:
>>>> select * from `ci_84adea33-9194-4753-925f-529a87656048` t1 where  (1 = 0)
>>>> 16/08/04 19:14:08 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is
>>>> /usr/lib/hadoop-mapreduce
>>>> Note: /tmp/sqoop-hadoop/compile/d06024b7489332878d1144702c6a7923/mydata0.java
>>>> uses or overrides a deprecated API.
>>>> Note: Recompile with -Xlint:deprecation for details.
>>>> 16/08/04 19:14:12 INFO orm.CompilationManager: Writing jar file:
>>>> /tmp/sqoop-hadoop/compile/d06024b7489332878d1144702c6a7923/mydata0.jar
>>>> 16/08/04 19:14:12 INFO mapreduce.ImportJobBase: Beginning query import.
>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>> SLF4J: Found binding in
>>>> [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> SLF4J: Found binding in
>>>> [jar:file:/usr/lib/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>>> explanation.
>>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>>> 16/08/04 19:14:13 INFO Configuration.deprecation: mapred.jar is
>>>> deprecated. Instead, use mapreduce.job.jar
>>>> 16/08/04 19:14:13 INFO manager.SqlManager: Executing SQL statement:
>>>> select * from `ci_84adea33-9194-4753-925f-529a87656048` t1 where  (1 = 0)
>>>> 16/08/04 19:14:13 INFO manager.SqlManager: Executing SQL statement:
>>>> select * from `ci_84adea33-9194-4753-925f-529a87656048` t1 where  (1 = 0)
>>>> 16/08/04 19:14:15 INFO Configuration.deprecation: mapred.map.tasks is
>>>> deprecated. Instead, use mapreduce.job.maps
>>>> 16/08/04 19:14:16 INFO impl.TimelineClientImpl: Timeline service address:
>>>> http://ip-172-31-25-105.us-west-2.compute.internal:8188/ws/v1/timeline/
>>>> 16/08/04 19:14:16 INFO client.RMProxy: Connecting to ResourceManager at
>>>> ip-172-31-25-105.us-west-2.compute.internal/172.31.25.105:8032
>>>> 16/08/04 19:14:17 INFO db.DBInputFormat: Using read commited
>>>> transaction isolation
>>>> 16/08/04 19:14:17 INFO db.DataDrivenDBInputFormat: BoundingValsQuery:
>>>> SELECT MIN(t1.rsid), MAX(t1.rsid) FROM (select * from
>>>> `ci_84adea33-9194-4753-925f-529a87656048` t1 where  (1 = 1) ) AS t1
>>>> 16/08/04 19:14:18 WARN db.TextSplitter: Generating splits for a textual
>>>> index column.
>>>> 16/08/04 19:14:18 WARN db.TextSplitter: If your database sorts in a
>>>> case-insensitive order, this may result in a partial import or duplicate
>>>> records.
>>>> 16/08/04 19:14:18 WARN db.TextSplitter: You are strongly encouraged to
>>>> choose an integral split column.
>>>> 16/08/04 19:14:18 INFO mapreduce.JobSubmitter: number of splits:5
>>>> 16/08/04 19:14:18 INFO mapreduce.JobSubmitter: Submitting tokens for
>>>> job: job_1470337090744_0001
>>>> 16/08/04 19:14:19 INFO impl.YarnClientImpl: Submitted application
>>>> application_1470337090744_0001
>>>> 16/08/04 19:14:19 INFO mapreduce.Job: The url to track the job:
>>>> http://ip-172-31-25-105.us-west-2.compute.internal:20888/proxy/application_1470337090744_0001/
>>>> 16/08/04 19:14:19 INFO mapreduce.Job: Running job:
>>>> job_1470337090744_0001
>>>> 16/08/04 19:14:30 INFO mapreduce.Job: Job job_1470337090744_0001
>>>> running in uber mode : false
>>>> 16/08/04 19:14:30 INFO mapreduce.Job:  map 0% reduce 0%
>>>> 16/08/04 19:14:43 INFO mapreduce.Job:  map 20% reduce 0%
>>>> 16/08/04 19:14:43 INFO mapreduce.Job: Task Id :
>>>> attempt_1470337090744_0001_m_000000_0, Status : FAILED
>>>> Error: org.apache.avro.AvroRuntimeException: Not a Specific class: char
>>>>     at org.apache.avro.specific.SpecificData.createSchema(SpecificData.java:213)
>>>>     at org.apache.avro.reflect.ReflectData.createSchema(ReflectData.java:303)
>>>>     at org.apache.avro.reflect.ReflectData.createFieldSchema(ReflectData.java:430)
>>>>     at org.kitesdk.data.spi.DataModelUtil$AllowNulls.createFieldSchema(DataModelUtil.java:55)
>>>>     at org.apache.avro.reflect.ReflectData.createSchema(ReflectData.java:354)
>>>>     at org.apache.avro.reflect.ReflectData.createFieldSchema(ReflectData.java:430)
>>>>     at org.kitesdk.data.spi.DataModelUtil$AllowNulls.createFieldSchema(DataModelUtil.java:55)
>>>>     at org.apache.avro.reflect.ReflectData.createSchema(ReflectData.java:354)
>>>>     at org.apache.avro.reflect.ReflectData.createFieldSchema(ReflectData.java:430)
>>>>     at org.kitesdk.data.spi.DataModelUtil$AllowNulls.createFieldSchema(DataModelUtil.java:55)
>>>>     at org.apache.avro.reflect.ReflectData.createSchema(ReflectData.java:354)
>>>>     at org.apache.avro.specific.SpecificData.getSchema(SpecificData.java:154)
>>>>     at org.kitesdk.data.spi.DataModelUtil.getReaderSchema(DataModelUtil.java:171)
>>>>     at org.kitesdk.data.spi.DataModelUtil.resolveType(DataModelUtil.java:148)
>>>>     at org.kitesdk.data.spi.AbstractDataset.<init>(AbstractDataset.java:44)
>>>>     at org.kitesdk.data.spi.filesystem.FileSystemDataset.<init>(FileSystemDataset.java:85)
>>>>     at org.kitesdk.data.spi.filesystem.FileSystemDataset.<init>(FileSystemDataset.java:115)
>>>>     at org.kitesdk.data.spi.filesystem.FileSystemDataset$Builder.build(FileSystemDataset.java:541)
>>>>     at org.kitesdk.data.spi.filesystem.FileSystemDatasetRepository.load(FileSystemDatasetRepository.java:194)
>>>>     at org.kitesdk.data.spi.AbstractDatasetRepository.load(AbstractDatasetRepository.java:40)
>>>>     at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.loadJobDataset(DatasetKeyOutputFormat.java:544)
>>>>     at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.loadOrCreateTaskAttemptDataset(DatasetKeyOutputFormat.java:555)
>>>>     at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.loadOrCreateTaskAttemptView(DatasetKeyOutputFormat.java:568)
>>>>     at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.getRecordWriter(DatasetKeyOutputFormat.java:426)
>>>>     at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:656)
>>>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:776)
>>>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>>>>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>>     at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>>>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
>>>>
>>>> Best,
>>>> Mark
>>>>
>>>> On Wed, Aug 3, 2016 at 4:10 PM, Markus Kemper <[email protected]>
>>>> wrote:
>>>>
>>>>> Hello Mark,
>>>>>
>>>>> Have you tried the following:
>>>>>
>>>>> sqoop import --connect jdbc:mysql://DATABASE_ENDPOINT --query "select
>>>>> * from ci_84adea33-9194-4753-925f-529a87656048 where \$CONDITIONS"
>>>>> --as-parquetfile --class-name
>>>>> mydata1 --username USERNAME -P
>>>>>
>>>>>
>>>>> Markus Kemper
>>>>> Customer Operations Engineer
>>>>>
>>>>>
>>>>> On Tue, Aug 2, 2016 at 4:59 PM, Mark Wagoner <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I need to be able to import tables whose names contain hyphens, but I
>>>>>> keep getting the following error. Is there any way to specify a table
>>>>>> alias to rename a table, replacing hyphens with underscores?
>>>>>>
>>>>>> sqoop import --connect jdbc:mysql://DATABASE_ENDPOINT --table
>>>>>> ci_84adea33-9194-4753-925f-529a87656048 --as-parquetfile --class-name
>>>>>> mydata1 --username USERNAME -P
>>>>>>
>>>>>>
>>>>>> 16/08/02 20:11:22 INFO manager.SqlManager: Executing SQL statement:
>>>>>> SELECT t.* FROM `ci_84adea33-9194-4753-925f-529a87656048` AS t LIMIT 1
>>>>>> 16/08/02 20:11:23 ERROR sqoop.Sqoop: Got exception running Sqoop:
>>>>>> org.apache.avro.SchemaParseException: Illegal character in:
>>>>>> ci_84adea33-9194-4753-925f-529a87656048
>>>>>> org.apache.avro.SchemaParseException: Illegal character in:
>>>>>> ci_84adea33-9194-4753-925f-529a87656048
>>>>>>     at org.apache.avro.Schema.validateName(Schema.java:1042)
>>>>>>     at org.apache.avro.Schema.access$200(Schema.java:78)
>>>>>>     at org.apache.avro.Schema$Name.<init>(Schema.java:431)
>>>>>>     at org.apache.avro.Schema.createRecord(Schema.java:144)
>>>>>>     at org.apache.sqoop.orm.AvroSchemaGenerator.generate(AvroSchemaGenerator.java:83)
>>>>>>     at org.apache.sqoop.mapreduce.DataDrivenImportJob.generateAvroSchema(DataDrivenImportJob.java:133)
>>>>>>     at org.apache.sqoop.mapreduce.DataDrivenImportJob.configureMapper(DataDrivenImportJob.java:106)
>>>>>>     at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:260)
>>>>>>     at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
>>>>>>     at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
>>>>>>     at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
>>>>>>     at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
>>>>>>     at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
>>>>>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>>>>>     at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
>>>>>>     at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
>>>>>>     at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
>>>>>>     at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
>>>>>>
>>>>>> Thanks,
>>>>>> Mark
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
