Time and again I get this error, and as a result the segment remains
incomplete. This wastes one iteration of the for() loop in which I am
doing generate, fetch, and update.
Can someone please tell me what measures I can take to avoid this
error? And isn't it possible to make some code
You can change the -Xms and -Xmx settings in the mapred.child.java.opts
property in your hadoop-site.xml file to allow more memory for your
tasks; a sketch is below.
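For example, something like the following entry in hadoop-site.xml
should work (the heap sizes here are illustrative assumptions; tune
them to your machines and the number of task slots):

  <property>
    <name>mapred.child.java.opts</name>
    <!-- JVM options passed to each map/reduce child task; the
         512 MB maximum heap below is only an example value. -->
    <value>-Xms256m -Xmx512m</value>
  </property>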
Are you trying to parse extremely big pages or files, such as PDFs?
If you are, you can also set maximum size limits for downloaded
content using Nutch's content-limit properties (see the sketch below).
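Assuming this is a Nutch crawl (the generate/fetch/update cycle
suggests so), the properties I have in mind are http.content.limit and
file.content.limit, which you can override in conf/nutch-site.xml. The
limits below are only example values, in bytes:

  <property>
    <name>http.content.limit</name>
    <!-- Maximum bytes fetched per document over HTTP; content
         beyond this limit is truncated (example value). -->
    <value>65536</value>
  </property>

  <property>
    <name>file.content.limit</name>
    <!-- The same limit for the file protocol (example value). -->
    <value>65536</value>
  </property>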