Here is the stack trace:

log4j:WARN No appenders could be found for logger (org.springframework.core.env.StandardEnvironment).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
......Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:2882)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
    at java.lang.StringBuilder.append(StringBuilder.java:119)
    at org.hibernate.engine.internal.StatefulPersistenceContext.toString(StatefulPersistenceContext.java:1232)
    at java.lang.String.valueOf(String.java:2826)
    at java.lang.StringBuilder.append(StringBuilder.java:115)
    at org.hibernate.internal.SessionImpl.toString(SessionImpl.java:1920)
    at java.lang.String.valueOf(String.java:2826)
    at java.lang.StringBuilder.append(StringBuilder.java:115)
    at org.springframework.orm.hibernate4.HibernateTransactionManager.doCleanupAfterCompletion(HibernateTransactionManager.java:632)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.cleanupAfterCompletion(AbstractPlatformTransactionManager.java:1009)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:805)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:724)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:475)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:270)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at com.sun.proxy.$Proxy17.persistReportEntities(Unknown Source)
    at com.google.api.ads.adwords.jaxws.extensions.processors.ReportProcessor.parseRowsAndPersist(ReportProcessor.java:201)
    at com.google.api.ads.adwords.jaxws.extensions.processors.ReportProcessor.processFiles(ReportProcessor.java:154)
    at com.google.api.ads.adwords.jaxws.extensions.processors.ReportProcessor.processLocalFiles(ReportProcessor.java:572)
    at com.google.api.ads.adwords.jaxws.extensions.processors.ReportProcessor.downloadAndProcess(ReportProcessor.java:548)
    at com.google.api.ads.adwords.jaxws.extensions.processors.ReportProcessor.generateReportsForMCC(ReportProcessor.java:491)
    at com.google.api.ads.adwords.jaxws.extensions.AwReporting.main(AwReporting.java:166)
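One detail worth noting in the trace: the heap is exhausted while SessionImpl.toString() renders the entire Hibernate persistence context into a StringBuilder, called from HibernateTransactionManager.doCleanupAfterCompletion — i.e. while building a log message, not while persisting data. The log4j:WARN lines at the top show log4j is running unconfigured, in which case log4j 1.2 defaults to the DEBUG level, so debug-guarded log statements fire. A possible mitigation (an assumption on my part, not a confirmed fix for AwReporting) is to configure log4j with a level above DEBUG so that session string is never built, for example with a minimal log4j.properties on the classpath:

```properties
# Minimal log4j.properties — a sketch, assuming the OOM is triggered by
# debug-level logging of the Hibernate session during transaction cleanup.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p [%c{1}] %m%n

# Keep Spring's transaction infrastructure above DEBUG explicitly.
log4j.logger.org.springframework.orm.hibernate4=INFO
log4j.logger.org.springframework.transaction=INFO
```

This would also silence the "No appenders could be found" warnings. If the process still runs out of memory with logging configured, the fallback is simply a larger heap, e.g. `java -Xmx4g ...` when launching AwReporting (the exact value depends on account size).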

On Thursday, November 21, 2013 11:30:14 AM UTC-3, Daniel Altman wrote:
>
> Hello, just wanted to point out how disappointed we are with AwReporting 
> and why.
>  
> We were told AwReporting would be the solution to our problems regarding 
> downloading large scale performance information for all our accounts.
> We currently do nightly downloads using the API, calling it in parallel 
> and inserting the reports directly into external Hive tables.
>
> We were facing a lot of problems like ERROR_GETTING_RESPONSE_FROM_BACKEND 
> because we have accounts with, for example, 7 million ads.
> Of course we retry, but sometimes we get the error 10 times or more, and 
> the API is also very slow when downloading those accounts. Download times 
> seem to grow exponentially compared to accounts half the size.
>
>
> We were expecting AwReporting to solve that, but:
>
> 1) It uses the same API, so basically it is the same code we have already 
> written => same problems with large accounts
> 2) It does not allow us to save directly to CSV or TSV files that could 
> be used directly in Hive tables, as many of those managing large datasets 
> are probably doing. We cannot save the data to a DB just to transform it 
> later.
> 3) It creates all the objects in memory and uses Hibernate to save the 
> objects into the DB?!?! We are getting "Out of memory" exceptions for 
> almost all our accounts.
>
>
> Simply put, I don't think this will work for any large-scale API consumer.
> Either you don't care, or you didn't make a proper effort to understand 
> our real needs.
>
> In any case, it is very disappointing.
> Let me know if we can help in any way to make this a real useful solution.
>
>
> Daniel Altman
> Despegar.com
>
>
>  
>

-- 
=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
Also find us on our blog and discussion group:
http://googleadsdeveloper.blogspot.com
http://groups.google.com/group/adwords-api
=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~

You received this message because you are subscribed to the Google
Groups "AdWords API Forum" group.
To post to this group, send email to adwords-api@googlegroups.com
To unsubscribe from this group, send email to
adwords-api+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/adwords-api?hl=en