oracle - Increasing number of mappers in sqoop command gives java heap space error


I am using Sqoop 1.4.5-cdh5.2.1 with an Oracle database.

I am importing a small set of about 115k records from Oracle. The sqoop command works fine with --num-mappers 5, but as soon as I set more than 5 mappers it fails with a java heap space error.
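Roughly the kind of command I am running (the connection string, credentials, table name and target directory below are placeholders, not the actual values):

sqoop import \
    --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
    --username SCOTT \
    --password-file /user/scott/.oracle.pw \
    --table SOME_TABLE \
    --target-dir /user/scott/some_table \
    --num-mappers 5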

Can anyone tell me why this is happening?

Log:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.math.BigInteger.<init>(BigInteger.java:394)
    at java.math.BigDecimal.bigTenToThe(BigDecimal.java:3380)
    at java.math.BigDecimal.bigDigitLength(BigDecimal.java:3635)
    at java.math.BigDecimal.precision(BigDecimal.java:2189)
    at java.math.BigDecimal.compareMagnitude(BigDecimal.java:2585)
    at java.math.BigDecimal.compareTo(BigDecimal.java:2566)
    at org.apache.sqoop.mapreduce.db.BigDecimalSplitter.split(BigDecimalSplitter.java:138)
    at org.apache.sqoop.mapreduce.db.BigDecimalSplitter.split(BigDecimalSplitter.java:69)
    at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:171)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:498)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:515)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:399)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1313)
    at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:198)
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:171)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:268)
    at org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:721)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:499)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
2015-06-25 13:48:59 STATUS: 1
2015-06-25 13:48:59 ERROR ERROR (1) sqoop failed.
2015-06-25 13:48:59 ERROR ERROR (1) run_sqoop

By default, each map and reduce task runs in its own JVM, so each mapper consumes its own share of physical memory. As you keep increasing the number of mappers, the total memory requirement keeps growing, and when a Java process cannot allocate enough memory it throws java.lang.OutOfMemoryError: Java heap space.
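If it is indeed the mapper JVMs that are short on heap, one thing to try, as a sketch only (this assumes the MRv2/YARN setup that ships with CDH 5.2, and the sizes below are purely illustrative), is to give each map task a larger container and heap when launching the import:

sqoop import \
    -D mapreduce.map.memory.mb=2048 \
    -D mapreduce.map.java.opts=-Xmx1638m \
    --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
    --table SOME_TABLE \
    --target-dir /user/scott/some_table \
    --num-mappers 8

Note that the -D generic Hadoop options must come right after "import", before the Sqoop-specific arguments, and the usual convention is to keep the -Xmx value somewhat below mapreduce.map.memory.mb so YARN does not kill the container for exceeding its limit.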

In your case, the system (or the VM, if you are running inside a VM) might only have enough memory for a maximum of 5 mappers.

You can run the top command while launching more than 5 mappers and monitor the free memory.
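For example, with plain Linux tools (nothing Sqoop-specific), start the import in one terminal and watch memory from another:

free -m    # overall used/free physical memory and swap, in MB
top        # per-process view; press Shift+M to sort by memory usage

If free memory drops towards zero (and swap starts filling) around the point where the extra mappers start, that supports the explanation above.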

