On Tue, Jan 22, 2013 at 12:56 PM, Lalaji Sureshika <lal...@wso2.com> wrote:

> Hi,
>
> On Tue, Jan 22, 2013 at 11:19 AM, Amila Suriarachchi <am...@wso2.com> wrote:
>
>>
>>
>> On Tue, Jan 22, 2013 at 12:14 PM, Buddhika Chamith <buddhi...@wso2.com> wrote:
>>
>>> This usually happens when an event that does not conform to the
>>> registered stream definition is sent. Maybe there has been some change to
>>> the stream definition in the toolbox or on the agent side in the versions
>>> being used?
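>>>
>>> For reference, publishing looks roughly like the sketch below; the stream
>>> name, field list, and exact agent API signatures are assumptions based on
>>> the BAM samples, not taken from this thread:
>>>
>>> import org.wso2.carbon.databridge.agent.thrift.DataPublisher;
>>>
>>> public class StreamSketch {
>>>     public static void main(String[] args) throws Exception {
>>>         DataPublisher publisher =
>>>                 new DataPublisher("tcp://localhost:7611", "admin", "admin");
>>>         // The registered definition fixes the event schema.
>>>         String streamId = publisher.defineStream("{"
>>>                 + "\"name\":\"org.wso2.apimgt.statistics.request\","
>>>                 + "\"version\":\"1.0.0\","
>>>                 + "\"payloadData\":[{\"name\":\"userid\",\"type\":\"STRING\"},"
>>>                 + "{\"name\":\"context\",\"type\":\"STRING\"}]}");
>>>         // Every published event must match that schema; a null in a declared
>>>         // field is what surfaces as a non-conforming event on the BAM side.
>>>         publisher.publish(streamId, null, null,
>>>                 new Object[]{"admin", "/pizzashack/1.0.0"});
>>>         publisher.stop();
>>>     }
>>> }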
>>>
>>
>> I ran with some default settings. I'll check with a MySQL DB.
>>
>
> One additional note. When I invoke the API with the authentication level
> set to 'None', I can see stats on some graphs, as in the attached image,
> but for two graphs I get no data, and the error below [1] is shown on the
> BAM side.
>
> Going through the code again: at the Authentication Handler level, when
> the authentication level is set to 'NONE' for an API, the username in the
> authentication context is set to null. Because of that, at the Usage
> Handler level we read the username from the authentication context as null
> and pass it with the event, so the event being sent does not conform to
> the stream definition, as Buddhika mentioned.
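>
> As an illustration only (the helper, handler, and context class names below
> are assumptions based on that description, not the actual code), the missing
> guard would be something like:
>
> // Hypothetical guard for the Usage Handler: resolve a non-null username so
> // the published event still conforms to the stream definition when the
> // resource's authentication level is 'None'.
> private static String resolveUsername(AuthenticationContext authContext) {
>     String name = (authContext == null) ? null : authContext.getUsername();
>     // Fall back to a fixed marker instead of publishing null.
>     return (name != null) ? name : "anonymous";
> }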
>

We should be able to define those fields as nullable in the Thrift IDL and
handle the Cassandra and Hive levels, and the reports, accordingly.
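
For example, until the schema allows nulls end to end, the toolbox Hive
script could coalesce the null userid before the JDBC storage handler
inserts into API_REQUEST_SUMMARY. A sketch; the Hive table and source
column names are assumptions, only the target column list comes from the
error quoted below:

-- Map a null userid to a fixed marker so the NOT NULL USERID column of
-- API_REQUEST_SUMMARY cannot reject the row.
INSERT OVERWRITE TABLE APIRequestSummaryData
SELECT context, version, consumerkey,
       count(*) AS total_request_count,
       COALESCE(userid, 'anonymous') AS userid,
       max(requestTime) AS max_request_time,
       api_version, api
FROM APIRequestData
GROUP BY context, version, consumerkey, userid, api_version, api;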

thanks,
Amila.


>
> SanjeewaM is currently looking into this.
>
> Thanks;
>
> [1] TID: [0] [BAM] [2013-01-22 10:06:13,488] ERROR {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation} - Failed to write data to database {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation}
> org.h2.jdbc.JdbcSQLException: NULL not allowed for column "USERID"; SQL statement:
> INSERT INTO API_REQUEST_SUMMARY (context,version,consumerkey,total_request_count,userid,max_request_time,api_version,api) VALUES (?,?,?,?,?,?,?,?) [90006-140]
>   at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
>   at org.h2.message.DbException.get(DbException.java:167)
>   at org.h2.message.DbException.get(DbException.java:144)
>   at org.h2.table.Column.validateConvertUpdateSequence(Column.java:294)
>   at org.h2.table.Table.validateConvertUpdateSequence(Table.java:621)
>   at org.h2.command.dml.Insert.insertRows(Insert.java:116)
>   at org.h2.command.dml.Insert.update(Insert.java:82)
>   at org.h2.command.CommandContainer.update(CommandContainer.java:70)
>   at org.h2.command.Command.executeUpdate(Command.java:199)
>   at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:141)
>   at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:127)
>   at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.insertData(DBOperation.java:141)
>   at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.writeToDB(DBOperation.java:62)
>   at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBRecordWriter.write(DBRecordWriter.java:35)
>   at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:589)
>   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
>   at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
>   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
>   at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
>   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
>   at org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:959)
>   at org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1012)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:557)
>   at org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
>   at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:528)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:419)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:256)
> TID: [0] [BAM] [2013-01-22 10:06:13,497] ERROR {ExecReducer} - Hit error while closing operators - failing tree {ExecReducer}
> TID: [0] [BAM] [2013-01-22 10:06:14,466] ERROR {org.apache.hadoop.hive.ql.exec.ExecDriver} - Ended Job = job_local_0001 with errors
>
>
>>
>> thanks,
>> Amila.
>>
>>
>>>
>>> Regards
>>> Buddhika
>>>
>>>
>>> On Tue, Jan 22, 2013 at 12:07 PM, Lalaji Sureshika <lal...@wso2.com> wrote:
>>>
>>>>  extract data from the incoming request to the gateway, without
>>>> depending on the security scheme attached to the particular API resource
>>>> verb.
>>>> I tried the same scenario, keeping the security level as 'None' for a
>>>> particular API resource and without subscribing it to any app. I was able
>>>> to view stats from the publisher side. [AM 1.3.0 and BAM 2.0.1]
>>>>
>>>
>>>
>>>
>>
>>
>> --
>> *Amila Suriarachchi*
>>
>> Software Architect
>> WSO2 Inc. ; http://wso2.com
>> lean . enterprise . middleware
>>
>> phone : +94 71 3082805
>>
>
>
>
> --
> Lalaji Sureshika
> Software Engineer; Development Technologies Team; WSO2, Inc.;
> http://wso2.com/
> email: lal...@wso2.com; cell: +94 71 608 6811
> blog: http://lalajisureshika.blogspot.com
>
>
>


-- 
*Amila Suriarachchi*

Software Architect
WSO2 Inc. ; http://wso2.com
lean . enterprise . middleware

phone : +94 71 3082805
_______________________________________________
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
