Fwd: [jira] [Commented] (GEODE-4061) Adding coordinator in list member command output
Can anyone merge this pull request? Thanks, Dinesh

-- Forwarded message --
From: "ASF GitHub Bot (JIRA)"
Date: 14-Dec-2017 9:45 pm
Subject: [jira] [Commented] (GEODE-4061) Adding coordinator in list member command output

> [ https://issues.apache.org/jira/browse/GEODE-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291101#comment-16291101 ]
>
> ASF GitHub Bot commented on GEODE-4061:
> ---------------------------------------
>
> dineshpune2006 commented on issue #1138: Feature/GEODE-4061: Adding coordinator in list member command output
> URL: https://github.com/apache/geode/pull/1138#issuecomment-351757529
>
>> When is this pull request going to be merged?
>
>> Adding coordinator in list member command output
>>
>> Key: GEODE-4061
>> URL: https://issues.apache.org/jira/browse/GEODE-4061
>> Project: Geode
>> Issue Type: Bug
>> Components: messaging
>> Reporter: dinesh ak
>>
>> There is no way to find out the coordinator apart from the log messages.
>> I have added the coordinator to the list member command output.
>
> --
> This message was sent by Atlassian JIRA
> (v6.4.14#64029)
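The actual change lives in PR #1138; for readers unfamiliar with the feature, here is a minimal standalone sketch of what the enhanced `list members` table could look like. The formatter, member names, and ids below are all hypothetical illustrations, not the real gfsh implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ListMembersSketch {

  // Hypothetical formatter: tags the coordinator's row, roughly the shape
  // of output the PR proposes for the `list members` command.
  static String formatMembers(Map<String, String> nameToId, String coordinatorId) {
    StringBuilder out = new StringBuilder(String.format("%-10s | %s%n", "Name", "Id"));
    for (Map.Entry<String, String> e : nameToId.entrySet()) {
      String marker = e.getValue().equals(coordinatorId) ? " [Coordinator]" : "";
      out.append(String.format("%-10s | %s%s%n", e.getKey(), e.getValue(), marker));
    }
    return out.toString();
  }

  public static void main(String[] args) {
    Map<String, String> members = new LinkedHashMap<>();
    members.put("locator1", "192.168.0.10(locator1:101:locator)");
    members.put("server1", "192.168.0.11(server1:102)");
    // The locator is usually the membership coordinator, so its row gets the marker.
    System.out.print(formatMembers(members, "192.168.0.10(locator1:101:locator)"));
  }
}
```

The point of the feature is exactly this: today the only way to spot the coordinator is to grep the logs, whereas a marker in the member list surfaces it in one command.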
Re: need suggestion about below exception
Hi All,

Is there any Geode test class which populates data in *ParallelGatewaySenderQueue* and creates some backlog? Insert/remove operations I can test.

Thanks,
Dinesh Akhand

On Fri, Oct 27, 2017 at 9:24 PM, Dinesh Akhand wrote:
> Can anyone suggest about the below exception?
>
> at org.apache.geode.internal.cache.DiskEntry$Helper.removeFromDisk(DiskEntry.java:1505)
> at org.apache.geode.internal.cache.AbstractOplogDiskRegionEntry.removePhase1(AbstractOplogDiskRegionEntry.java:42)
> at org.apache.geode.internal.cache.AbstractRegionEntry.destroy(AbstractRegionEntry.java:896)
> at org.apache.geode.internal.cache.AbstractRegionMap.destroyEntry(AbstractRegionMap.java:3084)
> at org.apache.geode.internal.cache.AbstractRegionMap.destroy(AbstractRegionMap.java:1422)
> at org.apache.geode.internal.cache.LocalRegion.mapDestroy(LocalRegion.java:6566)
> at org.apache.geode.internal.cache.LocalRegion.mapDestroy(LocalRegion.java:6540)
> at org.apache.geode.internal.cache.BucketRegion.basicDestroy(BucketRegion.java:1183)
> at org.apache.geode.internal.cache.AbstractBucketRegionQueue.basicDestroy(AbstractBucketRegionQueue.java:352)
> at org.apache.geode.internal.cache.BucketRegionQueue.basicDestroy(BucketRegionQueue.java:363)
> at org.apache.geode.internal.cache.LocalRegion.validatedDestroy(LocalRegion.java:)
> at org.apache.geode.internal.cache.DistributedRegion.validatedDestroy(DistributedRegion.java:904)
> at org.apache.geode.internal.cache.LocalRegion.destroy(LocalRegion.java:1096)
> at org.apache.geode.internal.cache.AbstractRegion.destroy(AbstractRegion.java:315)
> at org.apache.geode.internal.cache.LocalRegion.remove(LocalRegion.java:8976)
> at org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.clearPartitionedRegion(ParallelGatewaySenderQueue.java:1830)
> at org.apache.geode.internal.cache.wan.parallel.ParallelGatewaySenderQueue.clearQueue(ParallelGatewaySenderQueue.java:1800)
> at org.apache.geode.internal.cache.wan.parallel.ConcurrentParallelGatewaySenderQueue.clearQueue(ConcurrentParallelGatewaySenderQueue.java:237)
> at org.apa
>
> In Geode 1.1 the clear-queue function is working correctly for us,
> but with the same configuration it is not working in Geode 1.2.
>
> I added a debug log and printed the disk region created inside LocalRegion.java; the disk region is not null:
>
> [info 2017/10/27 12:26:43.196 IDT eaasrt-server1 Processor 3 tid=0xf2] akhand diskregion /__PR/_B__AsyncEventQueue__PWInfoQueue__PARALLEL__GATEWAY__SENDER__QUEUE_4 fullpath:=/__PR/_B__AsyncEventQueue__PWInfoQueue__PARALLEL__GATEWAY__SENDER__QUEUE_4 isuse
>
> The DiskStoreImpl is also not null:
>
> [info 2017/10/27 13:59:36.407 IDT eaasrt-server1 Processor 2 tid=0xe8] akhand diskregion /__PR/_B__AsyncEventQueue__PWInfoQueue__PARALLEL__GATEWAY__SENDER__QUEUE_109 fullpath:=/__PR/_B__AsyncEventQueue__PWInfoQueue__PARALLEL__GATEWAY__SENDER__QUEUE_109 isuse:false, dsi:PWInfo-queue-overflow
>
> Thanks,
> Dinesh Akhand
>
> This message and the information contained herein is proprietary and confidential and subject to the Amdocs policy statement, which you may review at https://www.amdocs.com/about/email-disclaimer
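On the "create some backlog" part of the question: a common pattern in WAN tests is to pause the gateway sender so that dispatching stops while puts continue, which makes events accumulate in the queue. Below is a rough, untested sketch against Geode's public API; the region name "orders" and sender id "sender1" are hypothetical, and the cache/region/sender setup is assumed to exist already:

```
// Assumes prior configuration that created a region "orders"
// attached to a parallel gateway sender "sender1".
Cache cache = new CacheFactory().create();
GatewaySender sender = cache.getGatewaySender("sender1");

sender.pause();                     // stop dispatching; new events now queue up
Region<Integer, String> region = cache.getRegion("orders");
for (int i = 0; i < 1000; i++) {
  region.put(i, "value-" + i);      // each put is enqueued, building a backlog
}
// ... exercise clearQueue()/overflow behaviour here ...
sender.resume();
```

With overflow configured on the queue's disk store, a large enough backlog will also force entries to disk, which is the code path the stack trace above goes through.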
RE: [DISCUSS] Bug while parsing the JSON "key which having short data type" in locate command "https://github.com/apache/geode/pull/752"
Need suggestions on the same pull request.

Thanks,
Dinesh

On 1 Sep 2017 09:46, "Dinesh Akhand" wrote:
> Hi Team,
>
> Please reply to the mail chain below. We need your focus on this issue.
>
> Thanks,
> Dinesh Akhand
>
> -----Original Message-----
> From: Dinesh Akhand
> Sent: Thursday, August 31, 2017 7:04 PM
> To: dev@geode.apache.org
> Subject: [DISCUSS] Bug while parsing the JSON "key which having short data type" in locate command "https://github.com/apache/geode/pull/752"
>
> Hi,
>
> I have created a pull request for this: https://github.com/apache/geode/pull/752
>
> JIRA ticket: GEODE-3544.
>
> Case 1) The short data type gets converted into an integer, so Geode looks for a setter taking an integer and throws an exception.
> I now convert the value to the declared parameter type using ConvertUtils.convert, which solves the problem for all primitive/wrapper types.
> Example: if the key has a short field i = 5, Geode will look for a method seti(Integer).
>
> Case 2) If the key extends a base class, only the key class's own setter methods are checked.
> So I changed getDeclaredMethods() to getMethods().
>
> Thanks,
> Dinesh Akhand
>
> -----Original Message-----
> From: Dinesh Akhand
> Sent: Monday, August 28, 2017 5:46 PM
> To: dev@geode.apache.org
> Subject: Bug while parsing the JSON "key which having short data type" in locate command
>
> Hi Team,
>
> I have found a bug in Geode 1.2. It appears when the key contains a short field, for example:
>
> public class EmpData implements Serializable {
>   private short empid;
>
>   public short getEmpid() {
>     return empid;
>   }
>
>   public void setEmpid(short empid) {
>     this.empid = empid;
>   }
> }
>
> EmpData d1 = new EmpData();
> d1.setEmpid((short) 1);
> region.put(d1, "value1");
>
> Now try the locate command on this key.
>
> Problem in the code (file JSONTokener.java) -- the number-narrowing logic:
>
> try {
>   long longValue = Long.parseLong(number, base);
>   if (longValue <= Short.MAX_VALUE && longValue >= Short.MIN_VALUE) {
>     return (short) longValue;
>   } else if (longValue <= Integer.MAX_VALUE && longValue >= Integer.MIN_VALUE) {
>     return (int) longValue;
>   } else {
>     return longValue;
>   }
>
> Later this causes java.lang.IllegalArgumentException: argument type mismatch.
>
> locate entry --key=('empid':1) --region=CUSTOMER_1
>
> An alternate way to reproduce: change the testLocateKeyIsObject method in DataCommandFunctionJUnitTest.java.
>
> Due to the same problem, we face issues with all commands where the key is used.
>
> Thanks,
> Dinesh Akhand
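To make the failure mode concrete, here is a standalone demo (not Geode's actual code path) of the reflection mismatch: a parser-style helper hands back a boxed value whose type does not match the setter's primitive parameter, and Method.invoke rejects it, because reflection allows widening but not narrowing conversions. The parseNumber helper is a hypothetical stand-in for the JSONTokener logic:

```java
import java.lang.reflect.Method;

public class SetterMismatchDemo {

  public static class EmpData implements java.io.Serializable {
    private short empid;
    public short getEmpid() { return empid; }
    public void setEmpid(short empid) { this.empid = empid; }
  }

  // Hypothetical stand-in for the JSONTokener-style narrowing: small numbers
  // come back boxed as Integer, large ones as Long.
  static Object parseNumber(String number) {
    long longValue = Long.parseLong(number);
    if (longValue <= Integer.MAX_VALUE && longValue >= Integer.MIN_VALUE) {
      return (int) longValue;   // boxed as Integer
    }
    return longValue;           // boxed as Long
  }

  public static void main(String[] args) throws Exception {
    EmpData d = new EmpData();
    Method setter = EmpData.class.getMethod("setEmpid", short.class);
    Object parsed = parseNumber("1");   // an Integer, but the setter wants short

    try {
      setter.invoke(d, parsed);         // int -> short is a narrowing conversion,
                                        // so reflection throws here
    } catch (IllegalArgumentException e) {
      System.out.println("invoke failed: " + e.getMessage());
    }

    // Converting to the declared parameter type first (the role ConvertUtils.convert
    // plays in the proposed fix) makes the invoke succeed.
    setter.invoke(d, Short.valueOf(((Number) parsed).shortValue()));
    System.out.println("empid=" + d.getEmpid());
  }
}
```

This is why the fix converts the parsed value with the setter's parameter type before invoking, rather than relying on reflection to coerce it.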
Re: Need suggestion for exception expected but was
I am using Geode 1.2.

Thanks,
Dinesh

On 6 Sep 2017 20:51, "Anthony Baker" wrote:
> What version of Geode is this? Did you change the log4j version?
>
> Anthony
>
>> On Sep 6, 2017, at 7:47 AM, Dinesh Akhand wrote:
>>
>> java.lang.Exception: Unexpected exception, expected <...exceptions.IMDGRuntimeException> but was <...>
>> at org.apache.geode.internal.logging.LogService$PropertyChangeListenerImpl.propertyChange(LogService.java:279)
>> at org.apache.logging.log4j.core.LoggerContext.firePropertyChangeEvent(LoggerContext.java:519)
>> at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:500)
>> at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:562)
>> at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:578)
>> at org.apache.geode.internal.logging.LogService.init(LogService.java:84)
>> at org.apache.geode.internal.logging.LogService.<clinit>(LogService.java:72)
>> at org.apache.geode.distributed.internal.InternalDistributedSystem.<clinit>(InternalDistributedSystem.java:129)
>>
>> Can anyone suggest what causes this exception?
>>
>> It occurs while creating the logger in InternalDistributedSystem:
>>
>> private static final boolean ALLOW_MEMORY_LOCK_WHEN_OVERCOMMITTED =
>>     Boolean.getBoolean(DistributionConfig.GEMFIRE_PREFIX + "Cache.ALLOW_MEMORY_OVERCOMMIT");
>> private static final Logger logger = LogService.getLogger();
>>
>> public static final String DISABLE_MANAGEMENT_PROPERTY =
>>     DistributionConfig.GEMFIRE_PREFIX + "disableManagement";
>>
>> Thanks,
>> Dinesh Akhand