[jira] [Commented] (HIVE-3576) Regression: ALTER TABLE DROP IF EXISTS PARTITION throws a SemanticException if Partition is not found
[ https://issues.apache.org/jira/browse/HIVE-3576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13708263#comment-13708263 ]

Kanwaljit Singh commented on HIVE-3576:
---------------------------------------

Any updates on this issue?

> Regression: ALTER TABLE DROP IF EXISTS PARTITION throws a SemanticException if Partition is not found
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-3576
>                 URL: https://issues.apache.org/jira/browse/HIVE-3576
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore, Query Processor
>    Affects Versions: 0.9.0
>            Reporter: Harsh J
>
> Doing a simple {{ALTER TABLE testtable DROP IF EXISTS PARTITION(dt=NONEXISTENTPARTITION)}} fails with a SemanticException of the 10006 kind (INVALID_PARTITION).
> This does not respect the {{hive.exec.drop.ignorenonexistent}} setting either, since there are no if-checks wrapped around this area when fetching partitions from the store.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-4825) Separate MapredWork into MapWork and ReduceWork
[ https://issues.apache.org/jira/browse/HIVE-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gunther Hagleitner updated HIVE-4825:
-------------------------------------
    Status: Patch Available  (was: Open)

> Separate MapredWork into MapWork and ReduceWork
> -----------------------------------------------
>
>                 Key: HIVE-4825
>                 URL: https://issues.apache.org/jira/browse/HIVE-4825
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Gunther Hagleitner
>            Assignee: Gunther Hagleitner
>         Attachments: HIVE-4825.1.patch, HIVE-4825.2.code.patch, HIVE-4825.2.testfiles.patch
>
> Right now all the information needed to run an MR job is captured in MapredWork. This class has aliases, tagging info, table descriptors, etc.
> For Tez and MRR it will be useful to break this into map- and reduce-specific pieces. The separation is natural and I think has value in itself; it makes the code easier to understand. However, it will also allow us to reuse these abstractions in Tez, where you'll have a graph of these instead of just one map and 0-1 reduce stages.
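The split described above is easy to picture as plain composition. A minimal sketch, with hypothetical class and field names — the real classes in the patch carry far more state (aliases, table descriptors, tagging info):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Map-side-only state: input paths, aliases, table descriptors, ...
class MapWork {
    Map<String, List<String>> pathToAliases = new LinkedHashMap<>();
}

// Reduce-side-only state: parallelism, tag-to-input mapping for joins, ...
class ReduceWork {
    int numReduceTasks = 1;
    Map<Integer, String> tagToAlias = new LinkedHashMap<>();
}

// Classic MR plan: exactly one map piece and an optional reduce piece.
// A Tez/MRR plan would instead hold a graph of MapWork/ReduceWork vertices.
class MapredWork {
    MapWork mapWork = new MapWork();
    ReduceWork reduceWork; // null for map-only jobs
}

public class MapredWorkSketch {
    public static void main(String[] args) {
        MapredWork mapOnly = new MapredWork();
        // A map-only job simply leaves reduceWork null: the "1M and 0-1R" shape.
        System.out.println("map-only: " + (mapOnly.reduceWork == null));
    }
}
```

Once the two halves are separate types, a DAG engine can compose them freely instead of being locked into the one-map/optional-reduce pair.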
[jira] [Commented] (HIVE-4825) Separate MapredWork into MapWork and ReduceWork
[ https://issues.apache.org/jira/browse/HIVE-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13708256#comment-13708256 ]

Gunther Hagleitner commented on HIVE-4825:
------------------------------------------

Ran tests on .1 and .2. Came back clean. Split into code + test patches because there are lots of whitespace-only diffs. I've also updated the review.
[jira] [Updated] (HIVE-4825) Separate MapredWork into MapWork and ReduceWork
[ https://issues.apache.org/jira/browse/HIVE-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gunther Hagleitner updated HIVE-4825:
-------------------------------------
    Attachment: HIVE-4825.2.testfiles.patch
                HIVE-4825.2.code.patch
[jira] [Updated] (HIVE-4851) Fix flaky tests
[ https://issues.apache.org/jira/browse/HIVE-4851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brock Noland updated HIVE-4851:
-------------------------------
    Description:
I see the following tests fail quite often:
* TestNegativeMinimrCliDriver.testNegativeCliDriver_mapreduce_stack_trace_hadoop20
* TestOrcHCatLoader.testReadDataBasic
* TestMinimrCliDriver.testCliDriver_bucketmpjoin6
* TestNotificationListener.testAMQListener

These fail less often, but still fail randomly:
* TestMinimrCliDriver.testCliDriver_bucket4
* TestHCatHiveCompatibility.testUnpartedReadWrite

  was:
I see the following tests fail quite often:
* TestNegativeMinimrCliDriver.testNegativeCliDriver_mapreduce_stack_trace_hadoop20
* TestOrcHCatLoader.testReadDataBasic
* TestMinimrCliDriver.testCliDriver_bucketmpjoin6
* TestNotificationListener.testAMQListener

This one fails less often, but still fails randomly:
* TestMinimrCliDriver.testCliDriver_bucket4

> Fix flaky tests
> ---------------
>
>                 Key: HIVE-4851
>                 URL: https://issues.apache.org/jira/browse/HIVE-4851
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Brock Noland
>            Assignee: Brock Noland
[jira] [Commented] (HIVE-4317) StackOverflowError when add jar concurrently
[ https://issues.apache.org/jira/browse/HIVE-4317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13708205#comment-13708205 ]

Navis commented on HIVE-4317:
-----------------------------

I think the JDBC connection is not thread-safe and should not be used like that.

> StackOverflowError when add jar concurrently
> --------------------------------------------
>
>                 Key: HIVE-4317
>                 URL: https://issues.apache.org/jira/browse/HIVE-4317
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.9.0, 0.10.0
>            Reporter: wangwenli
>         Attachments: hive-4317.1.patch
>
> Scenario: multiple threads add jars and run select operations concurrently over JDBC. When the HiveServer serializes the MapredWork, it sometimes throws a StackOverflowError from XMLEncoder.
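One way to sidestep both hazards — the shared JDBC connection on the client and the shared serializer on the server — is to confine each non-thread-safe object to the thread that uses it. A minimal, hedged sketch using the JDK's own java.beans.XMLEncoder (the class named in the report); the Hive-specific plan-serialization wiring is omitted and the payload is just a list:

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class PerThreadEncoderDemo {
    // Runs `threads` concurrent encode/decode round-trips, each thread using
    // its own XMLEncoder/XMLDecoder and buffer, and returns how many succeeded.
    static int runRoundTrips(int threads) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(threads);
        AtomicInteger ok = new AtomicInteger();
        for (int t = 0; t < threads; t++) {
            final int id = t;
            new Thread(() -> {
                try {
                    // Per-thread state: no shared mutable encoder, so no
                    // cross-thread corruption of its internal object graph.
                    ArrayList<String> plan = new ArrayList<>();
                    plan.add("stage-" + id);

                    ByteArrayOutputStream buf = new ByteArrayOutputStream();
                    try (XMLEncoder enc = new XMLEncoder(buf)) {
                        enc.writeObject(plan);
                    }
                    try (XMLDecoder dec = new XMLDecoder(
                            new ByteArrayInputStream(buf.toByteArray()))) {
                        if (plan.equals(dec.readObject())) ok.incrementAndGet();
                    }
                } finally {
                    done.countDown();
                }
            }).start();
        }
        done.await();
        return ok.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runRoundTrips(4) + "/4 round-trips succeeded");
    }
}
```

The same principle applies on the client side of the report: hand each worker thread its own Connection rather than sharing one, since java.sql.Connection implementations are not required to be thread-safe.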
[jira] [Updated] (HIVE-4841) Add partition level hook to HiveMetaHook
[ https://issues.apache.org/jira/browse/HIVE-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Navis updated HIVE-4841:
------------------------
    Status: Open  (was: Patch Available)

> Add partition level hook to HiveMetaHook
> ----------------------------------------
>
>                 Key: HIVE-4841
>                 URL: https://issues.apache.org/jira/browse/HIVE-4841
>             Project: Hive
>          Issue Type: Improvement
>          Components: StorageHandler
>            Reporter: Navis
>            Assignee: Navis
>            Priority: Minor
>         Attachments: HIVE-4841.D11673.1.patch
>
> The current HiveMetaHook provides hooks for tables only. With a partition-level hook, external storages could also be revised to exploit PPR.
[jira] [Commented] (HIVE-2608) Do not require AS a,b,c part in LATERAL VIEW
[ https://issues.apache.org/jira/browse/HIVE-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13708173#comment-13708173 ]

Phabricator commented on HIVE-2608:
-----------------------------------

navis has commented on the revision "HIVE-2608 [jira] Do not require AS a,b,c part in LATERAL VIEW".

INLINE COMMENTS
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java:2503 How about marking it with "@Deprecated" instead of removing?

REVISION DETAIL
  https://reviews.facebook.net/D4317

BRANCH
  HIVE-2608

ARCANIST PROJECT
  hive

To: JIRA, ashutoshc, navis
Cc: ikabiljo

> Do not require AS a,b,c part in LATERAL VIEW
> --------------------------------------------
>
>                 Key: HIVE-2608
>                 URL: https://issues.apache.org/jira/browse/HIVE-2608
>             Project: Hive
>          Issue Type: Improvement
>          Components: Query Processor, UDF
>            Reporter: Igor Kabiljo
>            Assignee: Navis
>            Priority: Minor
>         Attachments: HIVE-2608.D4317.5.patch, HIVE-2608.D4317.6.patch
>
> Currently, it is required to state column names when LATERAL VIEW is used. That shouldn't be necessary, since the UDTF returns a struct which contains column names - and they should be used by default.
> For example, it would be great if this was possible:
>   SELECT t.*, t.key1 + t.key4
>   FROM some_table
>   LATERAL VIEW JSON_TUPLE(json, 'key1', 'key2', 'key3', 'key3') t;
[jira] [Updated] (HIVE-2206) add a new optimizer for query correlation discovery and optimization
[ https://issues.apache.org/jira/browse/HIVE-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phabricator updated HIVE-2206:
------------------------------
    Attachment: HIVE-2206.D11097.18.patch

yhuai updated the revision "HIVE-2206 [jira] add a new optimizer for query correlation discovery and optimization".

  - Merge remote-tracking branch 'upstream/trunk' into HIVE-2206-3671-20130711
  - Left semi join should be handled in analyzeReduceSinkOperatorsOfJoinOperator. Also, use instanceof instead of using the operator's name to check the type of an Operator.

Reviewers: JIRA

REVISION DETAIL
  https://reviews.facebook.net/D11097

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D11097?vs=35661&id=35721#toc

AFFECTED FILES
  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
  conf/hive-default.xml.template
  ql/if/queryplan.thrift
  ql/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/OperatorType.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/DemuxOperator.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/MuxOperator.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecReducer.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRUnion1.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ReduceSinkDeDuplication.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/AbstractCorrelationProcCtx.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/CorrelationOptimizer.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/CorrelationUtilities.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/IntraQueryCorrelation.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/QueryPlanTreeTransformation.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/ReduceSinkDeDuplication.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinTaskDispatcher.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/java/org/apache/hadoop/hive/ql/plan/DemuxDesc.java
  ql/src/java/org/apache/hadoop/hive/ql/plan/MuxDesc.java
  ql/src/java/org/apache/hadoop/hive/ql/plan/ReduceSinkDesc.java
  ql/src/java/org/apache/hadoop/hive/ql/plan/UnionDesc.java
  ql/src/test/queries/clientpositive/correlationoptimizer1.q
  ql/src/test/queries/clientpositive/correlationoptimizer10.q
  ql/src/test/queries/clientpositive/correlationoptimizer11.q
  ql/src/test/queries/clientpositive/correlationoptimizer12.q
  ql/src/test/queries/clientpositive/correlationoptimizer13.q
  ql/src/test/queries/clientpositive/correlationoptimizer14.q
  ql/src/test/queries/clientpositive/correlationoptimizer2.q
  ql/src/test/queries/clientpositive/correlationoptimizer3.q
  ql/src/test/queries/clientpositive/correlationoptimizer4.q
  ql/src/test/queries/clientpositive/correlationoptimizer5.q
  ql/src/test/queries/clientpositive/correlationoptimizer6.q
  ql/src/test/queries/clientpositive/correlationoptimizer7.q
  ql/src/test/queries/clientpositive/correlationoptimizer8.q
  ql/src/test/queries/clientpositive/correlationoptimizer9.q
  ql/src/test/results/clientpositive/correlationoptimizer1.q.out
  ql/src/test/results/clientpositive/correlationoptimizer10.q.out
  ql/src/test/results/clientpositive/correlationoptimizer11.q.out
  ql/src/test/results/clientpositive/correlationoptimizer12.q.out
  ql/src/test/results/clientpositive/correlationoptimizer13.q.out
  ql/src/test/results/clientpositive/correlationoptimizer14.q.out
  ql/src/test/results/clientpositive/correlationoptimizer2.q.out
  ql/src/test/results/clientpositive/correlationoptimizer3.q.out
  ql/src/test/results/clientpositive/correlationoptimizer4.q.out
  ql/src/test/results/clientpositive/correlationoptimizer5.q.out
  ql/src/test/results/clientpositive/correlationoptimizer6.q.out
  ql/src/test/results/clientpositive/correlationoptimizer7.q.out
  ql/src/test/results/clientpositive/correlationoptimizer8.q.out
  ql/src/test/results/clientpositive/correlationoptimizer9.q.out
  ql/src/test/results/compiler/plan/groupby2.q.xml
  ql/src/test/results/compiler/plan/groupby3.q.xml

To: JIRA, yhuai
Cc: brock

> add a new optimizer for query correlation discovery and optimization
> --------------------------------------------------------------------
>
>                 Key: HIVE-2206
>                 URL: https://issues.apache.org/jira/browse/HIVE-2206
>             Project: Hive
>          Issue Type: New Feature
>          Components: Query Processor
[jira] [Assigned] (HIVE-4388) HBase tests fail against Hadoop 2
[ https://issues.apache.org/jira/browse/HIVE-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brock Noland reassigned HIVE-4388:
----------------------------------
    Assignee: Brock Noland

> HBase tests fail against Hadoop 2
> ---------------------------------
>
>                 Key: HIVE-4388
>                 URL: https://issues.apache.org/jira/browse/HIVE-4388
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Gunther Hagleitner
>            Assignee: Brock Noland
>
> Currently we're building by default against 0.92. When you run against hadoop 2 (-Dhadoop.mr.rev=23) builds fail because of HBASE-5963.
> HIVE-3861 upgrades the version of hbase used. This will get you past the problem in HBASE-5963 (which was fixed in 0.94.1) but fails with HBASE-6396.
[jira] [Resolved] (HIVE-4009) CLI Tests fail randomly due to MapReduce LocalJobRunner race condition
[ https://issues.apache.org/jira/browse/HIVE-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brock Noland resolved HIVE-4009.
--------------------------------
    Resolution: Cannot Reproduce

I haven't seen this reproduce in some time. Closing for now.

> CLI Tests fail randomly due to MapReduce LocalJobRunner race condition
> ----------------------------------------------------------------------
>
>                 Key: HIVE-4009
>                 URL: https://issues.apache.org/jira/browse/HIVE-4009
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Brock Noland
>            Assignee: Brock Noland
>         Attachments: HIVE-4009-0.patch
>
> Hadoop has a race condition, MAPREDUCE-5001, which causes tests to fail randomly when using the LocalJobRunner.
[jira] [Updated] (HIVE-4200) Consolidate submodule dependencies using ivy inheritance
[ https://issues.apache.org/jira/browse/HIVE-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gunther Hagleitner updated HIVE-4200:
-------------------------------------
    Attachment: HIVE-4200.4.patch

> Consolidate submodule dependencies using ivy inheritance
> --------------------------------------------------------
>
>                 Key: HIVE-4200
>                 URL: https://issues.apache.org/jira/browse/HIVE-4200
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Gunther Hagleitner
>            Assignee: Gunther Hagleitner
>         Attachments: HIVE-4200.1.patch.txt, HIVE-4200.2.patch, HIVE-4200.3.patch, HIVE-4200.4.patch
>
> As discussed in HIVE-4187:
> For easier maintenance of ivy dependencies across submodules, create a parent ivy file with consolidated dependencies and include it into submodules via inheritance. This way we're not relying on transitive dependencies, and we also have the dependencies in a single place.
[jira] [Updated] (HIVE-4200) Consolidate submodule dependencies using ivy inheritance
[ https://issues.apache.org/jira/browse/HIVE-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gunther Hagleitner updated HIVE-4200:
-------------------------------------
    Status: Patch Available  (was: Open)
[jira] [Commented] (HIVE-4200) Consolidate submodule dependencies using ivy inheritance
[ https://issues.apache.org/jira/browse/HIVE-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13708098#comment-13708098 ]

Gunther Hagleitner commented on HIVE-4200:
------------------------------------------

.4 is rebased against trunk. Did some testing and that was fine.
[jira] [Commented] (HIVE-4675) Create new parallel unit test environment
[ https://issues.apache.org/jira/browse/HIVE-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13708067#comment-13708067 ]

Brock Noland commented on HIVE-4675:
------------------------------------

Sounds good, will do! I said it earlier but I'll say it again: sorry for the large patch!

> Create new parallel unit test environment
> -----------------------------------------
>
>                 Key: HIVE-4675
>                 URL: https://issues.apache.org/jira/browse/HIVE-4675
>             Project: Hive
>          Issue Type: Improvement
>          Components: Testing Infrastructure
>            Reporter: Brock Noland
>            Assignee: Brock Noland
>             Fix For: 0.12.0
>         Attachments: HIVE-4675.patch
>
> The current ptest tool is great, but it has the following limitations:
> - Requires an NFS filer
> - Unless the NFS filer is dedicated, ptests can become IO-bound easily
> - Investigating failures is troublesome because the source directory for the failure is not saved
> - Ignoring or isolating tests is not supported
> - No unit tests for the ptest framework exist
> It'd be great to have a ptest tool that addresses these limitations.
[jira] [Commented] (HIVE-4388) HBase tests fail against Hadoop 2
[ https://issues.apache.org/jira/browse/HIVE-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13708066#comment-13708066 ]

Brock Noland commented on HIVE-4388:
------------------------------------

Yeah, I agree that it's not great. Maybe it's a better idea to just do the work to upgrade to 0.96, which will get published built on hadoop2.
[jira] [Created] (HIVE-4855) Failed in equality check of PrimitiveTypeInfo after ser/de
Cheng Hao created HIVE-4855:
----------------------------

             Summary: Failed in equality check of PrimitiveTypeInfo after ser/de
                 Key: HIVE-4855
                 URL: https://issues.apache.org/jira/browse/HIVE-4855
             Project: Hive
          Issue Type: Bug
            Reporter: Cheng Hao
            Priority: Minor

The "equals" method in PrimitiveTypeInfo.java is shown below:

  /**
   * Compare if 2 TypeInfos are the same. We use TypeInfoFactory to cache
   * TypeInfos, so we only need to compare the Object pointer.
   */
  @Override
  public boolean equals(Object other) {
    return this == other;
  }

But it may still fail the equality check between PrimitiveTypeInfo instances before and after de-serialization. I met this bug while calling ExprNodeGenericFuncDesc.isSame(Object obj) on two instances de-serialized from exactly the same source, and I always got false.
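The failure mode is easy to reproduce outside Hive. A minimal stand-in below mimics the identity-based equals (class and field names are illustrative, not Hive's): Java serialization allocates a fresh object on the way back in, bypassing any factory cache, so two logically identical values stop comparing equal.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class IdentityEqualsDemo {
    // Simplified stand-in for PrimitiveTypeInfo: equality by pointer, which
    // is only sound while a factory hands out cached singleton instances.
    public static class FakeTypeInfo implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String typeName;
        public FakeTypeInfo(String typeName) { this.typeName = typeName; }
        @Override public boolean equals(Object other) { return this == other; }
        @Override public int hashCode() { return System.identityHashCode(this); }
    }

    // Serialize and immediately deserialize: the stream creates a new object,
    // so the singleton assumption behind equals() no longer holds.
    public static FakeTypeInfo roundTrip(FakeTypeInfo t) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(t);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            return (FakeTypeInfo) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        FakeTypeInfo original = new FakeTypeInfo("int");
        FakeTypeInfo copy = roundTrip(original);
        System.out.println("same name: " + original.typeName.equals(copy.typeName)); // prints true
        System.out.println("identity equals: " + original.equals(copy));             // prints false
    }
}
```

A fix would either compare by value (e.g. the type name) in equals, or have deserialization resolve back to the cached instance (e.g. a readResolve() that consults the factory).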