Doing this, with the appropriate substitutions for my table, jar class, etc.:

> 2. To get the table schema... I assume that you are after the HCat schema
> 
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.mapreduce.Job;
> import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
> import org.apache.hcatalog.data.schema.HCatSchema;
> import org.apache.hcatalog.mapreduce.HCatInputFormat;
> import org.apache.hcatalog.mapreduce.InputJobInfo;
> 
> Job job = new Job(config);
> job.setJarByClass(XXXXXX.class); // this will be your class
> job.setInputFormatClass(HCatInputFormat.class);
> job.setOutputFormatClass(TextOutputFormat.class);
> InputJobInfo inputJobInfo =
>     InputJobInfo.create("my_data_base", "my_table", "partition filter");
> HCatInputFormat.setInput(job, inputJobInfo);
> HCatSchema s = HCatInputFormat.getTableSchema(job);

results in:

Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
        at org.apache.hcatalog.mapreduce.HCatBaseInputFormat.getTableSchema(HCatBaseInputFormat.java:234)
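
For completeness, here is a minimal self-contained sketch of what I am compiling and running. The class name SchemaProbe and the database/table strings are placeholders for my real values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hcatalog.data.schema.HCatSchema;
import org.apache.hcatalog.mapreduce.HCatInputFormat;
import org.apache.hcatalog.mapreduce.InputJobInfo;

public class SchemaProbe {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        Job job = new Job(config);
        job.setJarByClass(SchemaProbe.class);
        job.setInputFormatClass(HCatInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // Database and table are placeholders; I substitute my real values.
        // A null partition filter means "read the whole table".
        InputJobInfo inputJobInfo =
                InputJobInfo.create("my_data_base", "my_table", null);
        HCatInputFormat.setInput(job, inputJobInfo);

        // This is the call that throws the IncompatibleClassChangeError.
        HCatSchema schema = HCatInputFormat.getTableSchema(job);
        System.out.println(schema);
    }
}

From what I can tell, an IncompatibleClassChangeError on org.apache.hadoop.mapreduce.JobContext is the usual signature of a Hadoop 1.x vs 2.x mismatch (JobContext is a class in 1.x and an interface in 2.x), so I suspect my HCatalog jar was compiled against a different Hadoop major version than the one on my runtime classpath. Does that sound right, or am I missing something else?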

