RE: Re: [Question] Apache Beam Spark Runner Support - Spark 3.5 Environment

2023-11-04 Thread Giridhar Addepalli
 Thank you Alexey for the response.


We are using Beam 2.41.0 with a Spark 3.3.0 cluster.
We did not run into any issues.

Is it because, in Beam 2.41.0, compatibility tests were run against Spark
3.3.0?
https://github.com/apache/beam/blob/release-2.41.0/runners/spark/3/build.gradle

If so, since compatibility tests were not run against Spark 3.5.0 even in the
latest Beam release (2.52.0), is it inadvisable to use Beam 2.52.0 with a
Spark 3.5.0 cluster?
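
(A minimal smoke-test pipeline, sketched below with illustrative values, is one
way to check a Beam/Spark pairing against a live cluster; it only proves a
trivial job runs, not full compatibility:)

import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;

// Trivial pipeline: if this runs to completion on the cluster, the
// Beam/Spark version pairing at least passes a basic sanity check.
public class SparkSmokeTest {
    public static void main(String[] args) {
        SparkPipelineOptions options =
                PipelineOptionsFactory.fromArgs(args).as(SparkPipelineOptions.class);
        options.setRunner(SparkRunner.class);
        options.setSparkMaster("local[2]");  // illustrative; point at the real master
        Pipeline p = Pipeline.create(options);
        p.apply(Create.of(1, 2, 3));
        p.run().waitUntilFinish();
    }
}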

Thanks,

Giridhar.


On 2023/11/03 13:05:45 Alexey Romanenko wrote:

> AFAICT, the latest tested (compatibility tests) version for now is 3.4.1
[1]. We may try to add the 3.5.x version there.

>

> I believe that ValidatesRunner tests are run only against the default Spark
3.2.2 version.

>

> —

> Alexey

>

> [1]
https://github.com/apache/beam/blob/2aaf09c0eb6928390d861ba228447338b8ca92d3/runners/spark/3/build.gradle#L36

>

>

> > On 3 Nov 2023, at 05:06, Sri Ganesh Venkataraman 
wrote:

> >

> > Does Apache Beam 2.41.0 or the latest release (2.51.0) support a Spark 3.5
environment for the Spark runner?

> >

> > Apache Beam - Spark Runner Documentation states -

> > The Spark runner currently supports Spark’s 3.2.x branch

> >

> > Thanks

> > Sri Ganesh V

>

>


[bug #64752] [request] make the "next" hyperlink take you to the next page, even if deeper in the hierarchy

2023-10-13 Thread Arun Giridhar
Update of bug #64752 (project octave):

Severity:    3 - Normal => 2 - Minor
Priority:    5 - Normal => 3 - Low
Item Group:  None => Feature Request

___

Follow-up Comment #5:

I'm adding the Texinfo help email to the cc for this bug report, if they
accept cross-project links.

Texinfo users: Is there a way to make HTML links point to "reading order next"
instead of "hierarchical next", as described at
https://savannah.gnu.org/bugs/?64752#comment3 in more detail?


___

Reply to this item at:

  <https://savannah.gnu.org/bugs/?64752>

___
Message sent via Savannah
https://savannah.gnu.org/




[sr #110789] Prevent misuse of canned responses in bug trackers

2022-11-28 Thread Arun Giridhar
URL:
  <https://savannah.nongnu.org/support/?110789>

 Summary: Prevent misuse of canned responses in bug trackers
 Project: Savannah Administration
   Submitter: arungiridhar
   Submitted: Mon 28 Nov 2022 10:48:19 AM EST
Category: Savannah trackers - bugs, tasks, etc.
Priority: 5 - Normal
Severity: 3 - Normal
  Status: None
 Assigned to: None
Originator Email: 
Operating System: None
 Open/Closed: Open
 Discussion Lock: Any


___

Follow-up Comments:


---
Date: Mon 28 Nov 2022 10:48:19 AM EST By: Arun Giridhar 
We have a list of canned responses on the GNU Octave bug tracker hosted at
Savannah here: https://savannah.gnu.org/bugs/?group=octave&func=additem

The question for Savannah admins is: can we limit the use of those canned
responses to project members who are logged in, rather than end users? We have
had some cases over time of people submitting bugs and also selecting a canned
response, causing confusion: https://savannah.gnu.org/bugs/?63430

Discussion link:
https://octave.discourse.group/t/canned-responses-in-the-bug-tracker/3637







___

Reply to this item at:

  <https://savannah.nongnu.org/support/?110789>

___
Message sent via Savannah
https://savannah.nongnu.org/




Re: Potential Bug in 2.3 version (leading to deletion of state directories)

2019-11-14 Thread Giridhar Addepalli
Hi John,

Can you please point us to the code where Thread-2 will be able to recreate the
state directory once the cleaner is done?

Also, we see that in https://issues.apache.org/jira/browse/KAFKA-6122, retries
around locks were removed. Please let us know why the retry mechanism was removed.

Also, can you please explain the below comment in the
AssignedTasks.java#initializeNewTasks function?

catch (final LockException e) {
    // made this trace as it will spam the logs in the poll loop.
    log.trace("Could not create {} {} due to {}; will retry",
              taskTypeName, entry.getKey(), e.getMessage());
}
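
(For reference, John's suggestion below of raising state.cleanup.delay.ms can
be expressed as in this minimal sketch; the one-hour value, application id, and
bootstrap servers are illustrative only:)

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

// Sketch: give a released task directory a full hour before the state
// cleaner is allowed to grab its lock and delete it.
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");             // illustrative
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // illustrative
props.put(StreamsConfig.STATE_CLEANUP_DELAY_MS_CONFIG, 60 * 60 * 1000L);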
Thanks,
Giridhar.

On 2019/11/14 21:25:28, John Roesler  wrote: 
> Hey Navinder,
> 
> I think what's happening is a little different. Let's see if my
> worldview also explains your experiences.
> 
> There is no such thing as "mark for deletion". When a thread loses a
> task, it simply releases its lock on the directory. If no one else on
> the instance claims that lock within `state.cleanup.delay.ms` amount
> of milliseconds, then the state cleaner will itself grab the lock and
> delete the directory. On the other hand, if another thread (or the
> same thread) gets the task back and claims the lock before the
> cleaner, it will be able to re-open the store and use it.
> 
> The default for `state.cleanup.delay.ms` is 10 minutes, which is
> actually short enough that it could pass during a single rebalance (if
> Streams starts recovering a lot of state). I recommend you increase
> `state.cleanup.delay.ms` by a lot, like maybe set it to one hour.
> 
> One thing I'm curious about... You didn't mention if Thread-2
> eventually is able to re-create the state directory (after the cleaner
> is done) and transition to RUNNING. This should be the case. If not, I
> would consider it a bug.
> 
> Thanks,
> -John
> 
> On Thu, Nov 14, 2019 at 3:02 PM Navinder Brar
>  wrote:
> >
> > Hi,
> > We are facing a peculiar situation in the 2.3 version of Kafka Streams. 
> > First of all, I want to clarify if it is possible that a Stream Thread (say 
> > Stream Thread-1) which had got an assignment for a standby task (say 0_0) 
> > can change to Stream Thread-2 on the same host post rebalancing. The issue 
> > we are facing is this is happening for us and post rebalancing since the 
> > Stream Thread-1 had 0_0 and is not assigned back to it, it closes that task 
> > and marks it for deletion(after cleanup delay time), and meanwhile, the 
> > task gets assigned to Stream Thread-2. When the Stream Thread-2 tries to 
> > transition this task to Running, it gets a LockException which is caught in 
> > AssignedTasks#initializeNewTasks(). This makes 0_0 stay in Created state on 
> > Stream Thread-2 and after the cleanup delay is over the task directories 
> > for 0_0 get deleted.
> > Can someone please comment on this behavior.
> > Thanks,Navinder
> 


Standby Tasks stay in “created” hash map in AssignedTasks

2019-11-14 Thread Giridhar Addepalli
We are using Kafka Streams version 1.1.0.

We made some changes to the Kafka Streams code. We are observing the following
sequence of events in our production environment. We want to understand whether
this sequence of events is possible in the stock 1.1.0 version as well.

time T0 --

StreamThread-1 : got assigned 0_1, 0_2 standby tasks
StreamThread-2 : got assigned 0_3 standby task

time T1 --

Now let us say there is a consumer group rebalance,

and task 0_1 got assigned to StreamThread-2 (i.e., the 0_1 standby task moved
from StreamThread-1 to StreamThread-2).

time T2 --

StreamThread-2 sees that a new standby task, 0_1, is assigned to it.
It tries to initializeStateStores for 0_1, but gets a *LockException* because
the *owningThread* for the lock is StreamThread-1.

But the LockException is swallowed in the *initializeNewTasks* function of
*AssignedTasks.java*,

and 0_1 remains in the *created* map inside *AssignedTasks.java*.

time T3 --

StreamThread-1 realizes that 0_1 is not re-assigned to it and closes the
suspended task.
As part of closing the suspended task, the entry for 0_1 is deleted from the
*locks* map in the *unlock* function of StateDirectory.java.

time T4 --

The *CleanupThread* comes along after *cleanupDelayMs* and decides that the 0_1
directory in the local file system is obsolete, so it deletes the directory!
Since the local directory for the task is deleted, and 0_1 is still in the
*created* map, changelog topic-partitions won't be read for the 0_1 standby
task until the next rebalance!


Please let us know if this is a valid sequence. If not, what guards prevent
this sequence?

We see that in https://issues.apache.org/jira/browse/KAFKA-6122, retries around
locks were removed. Please let us know why the retry mechanism was removed.
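
(Purely for illustration: a minimal sketch of the kind of bounded retry that was
removed. StandbyTask#initializeStateStores follows the 1.1.0-era naming; the
wrapper itself, the attempt count, and the backoff are hypothetical:)

import org.apache.kafka.streams.errors.LockException;

// Hypothetical: retry initialization a few times before giving up, instead
// of silently leaving the task in the "created" map.
void initializeWithRetry(final StandbyTask task) throws InterruptedException {
    final int maxAttempts = 5;
    for (int attempt = 1; ; attempt++) {
        try {
            task.initializeStateStores();  // throws LockException while another thread owns the lock
            return;                        // lock acquired, stores open
        } catch (final LockException e) {
            if (attempt == maxAttempts) {
                throw e;                   // surface the failure after bounded retries
            }
            Thread.sleep(100L * attempt);  // simple linear backoff
        }
    }
}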


Re: Potential Bug in 2.3 version (leading to deletion of state directories)

2019-11-14 Thread Giridhar Addepalli
Hi John,

I see in https://github.com/apache/kafka/pull/3653
there is discussion around the swallowing of the LockException and the retry not
being there,
and dguy replied saying that "The retry doesn't happen in this block of code.
It will happen the next time the runLoop executes."

But the state of the thread is changed to RUNNING, hence
updateNewAndRestoringTasks won't be called again inside runOnce of StreamThread.

At the end of TaskManager#updateNewAndRestoringTasks, there is an IF condition
which checks whether all active tasks are running.

Do you think we should change
from
if (active.allTasksRunning()) { ... }
to
if (active.allTasksRunning() && standby.allTasksRunning()) { ... }

Thanks,
Giridhar.

On 2019/11/15 03:09:17, Navinder Brar  wrote: 
> Hi John,
> Thanks for the response. Yeah, by "marked for deletion" I meant the unlocking 
> of the store(by which in a way it is marked for deletion). From what I have 
> seen the standby task gets stuck in Created state and doesn't move to Running 
> and is not able to recreate the directory. Also, the point is not just that. 
> With the new KIP to support serving from replicas we want to have very less 
> downtime on replicas and in this case we already have a completely built 
> state directory which is getting deleted just because of the assignment 
> change on the thread(the host is still same). We have 
> StreamsMetadataState#allMetadata() which would be common for all threads of 
> all instances. Can't we have a conditional check during unlocking which 
> checks allMetadata and finds out that the partition we are about to unlock is 
> assigned to this host(we don't care which thread of this host) and then we 
> don't unlock the task, meanwhile the Stream Thread-2 will take the lock on 
> its own when it moves to Running.
> Thanks,Navinder
> On Friday, 15 November, 2019, 02:55:40 am IST, John Roesler 
>  wrote:  
>  
>  Hey Navinder,
> 
> I think what's happening is a little different. Let's see if my
> worldview also explains your experiences.
> 
> There is no such thing as "mark for deletion". When a thread loses a
> task, it simply releases its lock on the directory. If no one else on
> the instance claims that lock within `state.cleanup.delay.ms` amount
> of milliseconds, then the state cleaner will itself grab the lock and
> delete the directory. On the other hand, if another thread (or the
> same thread) gets the task back and claims the lock before the
> cleaner, it will be able to re-open the store and use it.
> 
> The default for `state.cleanup.delay.ms` is 10 minutes, which is
> actually short enough that it could pass during a single rebalance (if
> Streams starts recovering a lot of state). I recommend you increase
> `state.cleanup.delay.ms` by a lot, like maybe set it to one hour.
> 
> One thing I'm curious about... You didn't mention if Thread-2
> eventually is able to re-create the state directory (after the cleaner
> is done) and transition to RUNNING. This should be the case. If not, I
> would consider it a bug.
> 
> Thanks,
> -John
> 
> On Thu, Nov 14, 2019 at 3:02 PM Navinder Brar
>  wrote:
> >
> > Hi,
> > We are facing a peculiar situation in the 2.3 version of Kafka Streams. 
> > First of all, I want to clarify if it is possible that a Stream Thread (say 
> > Stream Thread-1) which had got an assignment for a standby task (say 0_0) 
> > can change to Stream Thread-2 on the same host post rebalancing. The issue 
> > we are facing is this is happening for us and post rebalancing since the 
> > Stream Thread-1 had 0_0 and is not assigned back to it, it closes that task 
> > and marks it for deletion(after cleanup delay time), and meanwhile, the 
> > task gets assigned to Stream Thread-2. When the Stream Thread-2 tries to 
> > transition this task to Running, it gets a LockException which is caught in 
> > AssignedTasks#initializeNewTasks(). This makes 0_0 stay in Created state on 
> > Stream Thread-2 and after the cleanup delay is over the task directories 
> > for 0_0 get deleted.
> > Can someone please comment on this behavior.
> > Thanks,Navinder  



[PATCH] staging: isdn: hysdn_procconf_init() remove parentheses from return value

2019-08-06 Thread Giridhar Prasath R
ERROR: return is not a function, parentheses are not required
FILE: git/kernels/staging/drivers/staging/isdn/hysdn/hysdn_procconf.c:385
+   return (0);

Signed-off-by: Giridhar Prasath R 
---
 drivers/staging/isdn/hysdn/hysdn_procconf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/staging/isdn/hysdn/hysdn_procconf.c 
b/drivers/staging/isdn/hysdn/hysdn_procconf.c
index 73079213ec94..48afd9f5316e 100644
--- a/drivers/staging/isdn/hysdn/hysdn_procconf.c
+++ b/drivers/staging/isdn/hysdn/hysdn_procconf.c
@@ -382,7 +382,7 @@ hysdn_procconf_init(void)
}
 
printk(KERN_NOTICE "HYSDN: procfs initialised\n");
-   return (0);
+   return 0;
 }  /* hysdn_procconf_init */
 
 
/*/
-- 
2.22.0

___
devel mailing list
de...@linuxdriverproject.org
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel


[PATCH] staging: pi433 line over 80 characters in multiple places

2019-08-04 Thread Giridhar Prasath R
Fix the following checkpatch warnings:

WARNING: line over 80 characters
FILE: drivers/staging/pi433/pi433_if.h

Signed-off-by: Giridhar Prasath R 
---
 drivers/staging/pi433/pi433_if.h | 23 ---
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/staging/pi433/pi433_if.h b/drivers/staging/pi433/pi433_if.h
index 9feb95c431cb..915bd96910c6 100644
--- a/drivers/staging/pi433/pi433_if.h
+++ b/drivers/staging/pi433/pi433_if.h
@@ -117,9 +117,14 @@ struct pi433_rx_cfg {
 
/* packet format */
enum option_on_off  enable_sync;
-   enum option_on_off  enable_length_byte;   /* should be used in 
combination with sync, only */
-   enum address_filtering  enable_address_filtering; /* operational with 
sync, only */
-   enum option_on_off  enable_crc;   /* only operational, 
if sync on and fixed length or length byte is used */
+   /* should be used in combination with sync, only */
+   enum option_on_off  enable_length_byte;
+   /* operational with sync, only */
+   enum address_filtering  enable_address_filtering;
+   /* only operational,
+* if sync on and fixed length or length byte is used
+*/
+   enum option_on_off  enable_crc;
 
__u8sync_length;
__u8fixed_message_length;
@@ -132,10 +137,14 @@ struct pi433_rx_cfg {
 
 #define PI433_IOC_MAGIC'r'
 
-#define PI433_IOC_RD_TX_CFG_IOR(PI433_IOC_MAGIC, PI433_TX_CFG_IOCTL_NR, 
char[sizeof(struct pi433_tx_cfg)])
-#define PI433_IOC_WR_TX_CFG_IOW(PI433_IOC_MAGIC, PI433_TX_CFG_IOCTL_NR, 
char[sizeof(struct pi433_tx_cfg)])
+#define PI433_IOC_RD_TX_CFG_IOR(PI433_IOC_MAGIC, PI433_TX_CFG_IOCTL_NR,\
+char[sizeof(struct pi433_tx_cfg)])
+#define PI433_IOC_WR_TX_CFG_IOW(PI433_IOC_MAGIC, PI433_TX_CFG_IOCTL_NR,\
+char[sizeof(struct pi433_tx_cfg)])
 
-#define PI433_IOC_RD_RX_CFG_IOR(PI433_IOC_MAGIC, PI433_RX_CFG_IOCTL_NR, 
char[sizeof(struct pi433_rx_cfg)])
-#define PI433_IOC_WR_RX_CFG_IOW(PI433_IOC_MAGIC, PI433_RX_CFG_IOCTL_NR, 
char[sizeof(struct pi433_rx_cfg)])
+#define PI433_IOC_RD_RX_CFG_IOR(PI433_IOC_MAGIC, PI433_RX_CFG_IOCTL_NR,\
+char[sizeof(struct pi433_rx_cfg)])
+#define PI433_IOC_WR_RX_CFG_IOW(PI433_IOC_MAGIC, PI433_RX_CFG_IOCTL_NR,\
+char[sizeof(struct pi433_rx_cfg)])
 
 #endif /* PI433_H */
-- 
2.22.0

___
devel mailing list
de...@linuxdriverproject.org
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel


[PATCH] Fix the following checkpatch warnings:

2019-08-04 Thread Giridhar Prasath R
WARNING: line over 80 characters
FILE: drivers/staging/pi433/pi433_if.h

Signed-off-by: Giridhar Prasath R 
---
 drivers/staging/pi433/pi433_if.h | 23 ---
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/staging/pi433/pi433_if.h b/drivers/staging/pi433/pi433_if.h
index 9feb95c431cb..915bd96910c6 100644
--- a/drivers/staging/pi433/pi433_if.h
+++ b/drivers/staging/pi433/pi433_if.h
@@ -117,9 +117,14 @@ struct pi433_rx_cfg {
 
/* packet format */
enum option_on_off  enable_sync;
-   enum option_on_off  enable_length_byte;   /* should be used in 
combination with sync, only */
-   enum address_filtering  enable_address_filtering; /* operational with 
sync, only */
-   enum option_on_off  enable_crc;   /* only operational, 
if sync on and fixed length or length byte is used */
+   /* should be used in combination with sync, only */
+   enum option_on_off  enable_length_byte;
+   /* operational with sync, only */
+   enum address_filtering  enable_address_filtering;
+   /* only operational,
+* if sync on and fixed length or length byte is used
+*/
+   enum option_on_off  enable_crc;
 
__u8sync_length;
__u8fixed_message_length;
@@ -132,10 +137,14 @@ struct pi433_rx_cfg {
 
 #define PI433_IOC_MAGIC'r'
 
-#define PI433_IOC_RD_TX_CFG_IOR(PI433_IOC_MAGIC, PI433_TX_CFG_IOCTL_NR, 
char[sizeof(struct pi433_tx_cfg)])
-#define PI433_IOC_WR_TX_CFG_IOW(PI433_IOC_MAGIC, PI433_TX_CFG_IOCTL_NR, 
char[sizeof(struct pi433_tx_cfg)])
+#define PI433_IOC_RD_TX_CFG_IOR(PI433_IOC_MAGIC, PI433_TX_CFG_IOCTL_NR,\
+char[sizeof(struct pi433_tx_cfg)])
+#define PI433_IOC_WR_TX_CFG_IOW(PI433_IOC_MAGIC, PI433_TX_CFG_IOCTL_NR,\
+char[sizeof(struct pi433_tx_cfg)])
 
-#define PI433_IOC_RD_RX_CFG_IOR(PI433_IOC_MAGIC, PI433_RX_CFG_IOCTL_NR, 
char[sizeof(struct pi433_rx_cfg)])
-#define PI433_IOC_WR_RX_CFG_IOW(PI433_IOC_MAGIC, PI433_RX_CFG_IOCTL_NR, 
char[sizeof(struct pi433_rx_cfg)])
+#define PI433_IOC_RD_RX_CFG_IOR(PI433_IOC_MAGIC, PI433_RX_CFG_IOCTL_NR,\
+char[sizeof(struct pi433_rx_cfg)])
+#define PI433_IOC_WR_RX_CFG_IOW(PI433_IOC_MAGIC, PI433_RX_CFG_IOCTL_NR,\
+char[sizeof(struct pi433_rx_cfg)])
 
 #endif /* PI433_H */
-- 
2.22.0

___
devel mailing list
de...@linuxdriverproject.org
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel


[BangPypers] [Jobs] Walk-In for Python/Django developers at CallHub, May 11th

2019-05-07 Thread Chetan Giridhar
Hi,


We are planning a scheduled walk-in for Python/Django developers (2-3
years) on May 11th, Saturday. If you fit the bill or know someone who does,
please pass on the message.


*Location* - 840, 17th Main, 3rd Sector, 1st Floor, above MS Shelters, HSR
Layout, Bengaluru, Karnataka 560102


*Application Process* - Kindly forward your updated resumes to
sanch...@callhub.io and we will take it from there. Below are the job role
details:


*About Us:*


At CallHub, we help advocacy groups in their campaigns and causes, using
our award-winning cloud-based telephony platform. Customers around the
globe use CallHub to reach people via phone calls and text messages. Our
customers include Uber, Democrat Party, major political parties in the US,
Canada, UK, France and Australia. Please visit www.callhub.io for more
details.


*What we’re looking for:*


   1. Around 2-3 years of experience in software development, understands
   process and focuses on quality and timely deliverables
   2. Customer focused mindset with a passion for solving technical issues
   3. Has worked on backend tools like Python/Django, Rest APIs, DBs
   (Postgres/MySQL)
   4. Proven experience in frontend with JavaScript, HTML and CSS
   5. Exceptional written and verbal communication skills
   6. Team player with strong interpersonal skills, willing to ask for help
   and offer support to the rest of the team
   7. Detail oriented. Ability to empathize with customers, Ability to pick
   up new technologies and assess situations quickly

Do send an email to sanch...@callhub.io if you have more questions.

-- 
Regards,
Chetan
___
BangPypers mailing list
BangPypers@python.org
https://mail.python.org/mailman/listinfo/bangpypers


Re: Interrupts during page fault exceptions

2019-05-07 Thread Shrikant Giridhar
>> interrupts being serviced while the page fault is in progress on x86 but
not on ARM or did I miss something in my reading of the code?

Turns out I did miss something in my reading of the code. Interrupts are
indeed disabled on x86 on entry into the page fault handler until we save
the CR2 register (which holds the faulting address). Local IRQs are enabled
in the fault handler after that. This prevents an interrupt (or maybe
another thread too?) from clobbering the CR2 value before the faulting address
is saved.

However, this still leaves me with my original "guesses":

>> I would have guessed that (non-threaded) interrupts be disabled during
page faults because of the possibility of a recursive lock acquire or stack
overflow if the interrupt handler itself page faults.

I assume this is not a problem because we don't actually "handle" page
faults that happen because of interrupts? Going off of this comment:

_If we're in an interrupt, have no user context or are running in a region
with pagefaults disabled then we must not take the fault_

in arch/x86/mm/fault.c, it does indeed look like that. I'd be glad if someone
can clarify this.


Shrikant

On Tue, May 7, 2019, 10:19 AM Shrikant Giridhar 
wrote:

> Hi,
>
> I was looking at arch code setting up page fault handling in the kernel
> and came away with a couple of questions.
>
> Can hardware interrupts (non-NMI) occur during page faults? On x86, I
> notice that the page fault handler is set up with an interrupt gate which
> should clear the IF (Interrupt Enable) bit - disabling maskable interrupts
> in the process. I also don't see interrupts being enabled later in the
> handler (arch/x86/mm/fault.c:do_page_fault).
>
> However, from a quick skim, it doesn't look like the same rule is followed
> on ARM (32-bit) where local IRQs are enabled after we enter the page fault
> handler (arch/arm/mm/fault.c:do_page_fault).
>
> Is there a general policy for interrupt handling during page faults? I
> would have guessed that (non-threaded) interrupts be disabled during page
> faults because of the possibility of a recursive lock acquire or stack
> overflow if the interrupt handler itself page faults.
>
> Is there an arch-specific factor involved which prevents (AFAICT)
> interrupts being serviced while the page fault is in progress on x86 but
> not on ARM or did I miss something in my reading of the code?
>
>
> Shrikant
>
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Interrupts during page fault exceptions

2019-05-07 Thread Shrikant Giridhar
Hi,

I was looking at arch code setting up page fault handling in the kernel and
came away with a couple of questions.

Can hardware interrupts (non-NMI) occur during page faults? On x86, I
notice that the page fault handler is set up with an interrupt gate which
should clear the IF (Interrupt Enable) bit - disabling maskable interrupts
in the process. I also don't see interrupts being enabled later in the
handler (arch/x86/mm/fault.c:do_page_fault).

However, from a quick skim, it doesn't look like the same rule is followed
on ARM (32-bit) where local IRQs are enabled after we enter the page fault
handler (arch/arm/mm/fault.c:do_page_fault).

Is there a general policy for interrupt handling during page faults? I
would have guessed that (non-threaded) interrupts be disabled during page
faults because of the possibility of a recursive lock acquire or stack
overflow if the interrupt handler itself page faults.

Is there an arch-specific factor involved which prevents (AFAICT)
interrupts being serviced while the page fault is in progress on x86 but
not on ARM or did I miss something in my reading of the code?


Shrikant
___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: [Gen-art] Gen-ART Last Call review of draft-ietf-payload-flexible-fec-scheme-16

2019-02-04 Thread Giridhar Mandyam
Hi Meral,

Slight revision follows.  I assume it is still OK.


“… an application should avoid sending/receiving FEC repair streams if it knows 
that sending/receiving those FEC repair streams would not help at all in 
recovering the missing packets.  Examples of where FEC would not be beneficial 
are:  (1) if the successful recovery rate as determined by RTCP feedback is low 
(see RFC 5725 and RFC 7509),  and (2) the application has a smaller latency 
requirement than the repair window adopted by the FEC configuration based on 
the expected burst loss duration and the target FEC overhead.  It is 
RECOMMENDED that the amount and type (row, column, or both) of FEC protection 
is adjusted  dynamically based on the packet loss rate and burst loss length 
observed by the applications."
-Giri

From: Varun Singh 
Sent: Monday, February 4, 2019 3:13 PM
To: Giridhar Mandyam 
Cc: Meral Shirazipour ; 
draft-ietf-payload-flexible-fec-scheme@ietf.org; gen-art@ietf.org
Subject: Re: Gen-ART Last Call review of 
draft-ietf-payload-flexible-fec-scheme-16


Hi Giri,

Both post-repair loss metrics should be referenced: one provides run-length
per-packet feedback and the other provides loss and repair counters.
Webrtc-stats reports on the latter and can already be used to toggle FEC in
webrtc applications.

On Mon, 4 Feb 2019 at 0.34, Giridhar Mandyam <mand...@qti.qualcomm.com> wrote:
>-Section 8, "an application should avoid sending/receiving FEC repair streams 
>if it knows that sending/receiving those FEC repair streams would not help at 
>all in recovering the missing packets. It is RECOMMENDED that the amount and 
>type (row, column, or both) of FEC protection is adjusted dynamically based on 
>the packet loss rate and burst loss length observed by the applications."

>How would the application know that sending/receiving those FEC repair streams 
>would not help at all? any rule of thumb to add here?

The editors propose that the above text be revised as follows:


“… an application should avoid sending/receiving FEC repair streams if it knows 
that sending/receiving those FEC repair streams would not help at all in 
recovering the missing packets.  Examples of where FEC would not be beneficial 
are:  (1) if the successful recovery rate as determined by RTCP feedback is low 
(see RFC 5725),  and (2) the application has a smaller latency requirement than 
the repair window adopted by the FEC configuration based on the expected burst 
loss duration and the target FEC overhead.  It is RECOMMENDED that the amount 
and type (row, column, or both) of FEC protection is adjusted  dynamically 
based on the packet loss rate and burst loss length observed by the 
applications."

Please let us know if this is acceptable.

Thanks,

-Giri Mandyam

From: Meral Shirazipour <meral.shirazip...@ericsson.com>
Sent: Thursday, January 31, 2019 11:50 PM
To: draft-ietf-payload-flexible-fec-scheme@ietf.org; gen-art@ietf.org
Subject: Gen-ART Last Call review of draft-ietf-payload-flexible-fec-scheme-16


I am the assigned Gen-ART reviewer for this draft. The General Area Review Team 
(Gen-ART) reviews all IETF documents being processed by the IESG for the IETF 
Chair.  Please treat these comments just like any other last call comments.

For more information, please see the FAQ at 
<https://trac.ietf.org/trac/gen/wiki/GenArtfaq>.

Document: draft-ietf-payload-flexible-fec-scheme-16

Reviewer: Meral Shirazipour
Review Date: 2019-01-31
IETF LC End Date: 2019-02-01
IESG Telechat date: NA


Summary: This draft is ready to be published as Standards Track RFC .
Major issues:

Minor issues:

Nits/editorial comments:
-Section 1.1.1 Title "One-Dimensionsal "--->"One-Dimensional"

-[Page 14] 3.2.  , "signficant"--->"significant"

-[Page 16], 4.2.1.  , "pakcets"--->"packets"

-[Page 35], 6.3.1.  , "reciever"--->"receiver"

-[Page 35], 6.3.1.1.  , "signficant"--->"significant"

-[Page 43], 7., "several Sesssion "--->"several Session "

-Section 8, "an application should avoid
   sending/receiving FEC repair streams if it knows that sending/
   receiving those FEC repair streams would not help at all in
   recovering the missing packets. It is RECOMMENDED that the amount
   and type (row, column, or both) of FEC protection is adjusted
   dynamically based on the packet loss rate and burst loss length
   observed by the applications."

How would the application know that sending/receiving those FEC repair streams 
would not help at all? any rule of thumb to add here?


Best Regards,
Meral
---
Meral 

Re: [Gen-art] Gen-ART Last Call review of draft-ietf-payload-flexible-fec-scheme-16

2019-02-03 Thread Giridhar Mandyam
>-Section 8, "an application should avoid sending/receiving FEC repair streams 
>if it knows that sending/receiving those FEC repair streams would not help at 
>all in recovering the missing packets. It is RECOMMENDED that the amount and 
>type (row, column, or both) of FEC protection is adjusted dynamically based on 
>the packet loss rate and burst loss length observed by the applications."

>How would the application know that sending/receiving those FEC repair streams 
>would not help at all? any rule of thumb to add here?

The editors propose that the above text be revised as follows:


"... an application should avoid sending/receiving FEC repair streams if it 
knows that sending/receiving those FEC repair streams would not help at all in 
recovering the missing packets.  Examples of where FEC would not be beneficial 
are:  (1) if the successful recovery rate as determined by RTCP feedback is low 
(see RFC 5725),  and (2) the application has a smaller latency requirement than 
the repair window adopted by the FEC configuration based on the expected burst 
loss duration and the target FEC overhead.  It is RECOMMENDED that the amount 
and type (row, column, or both) of FEC protection is adjusted  dynamically 
based on the packet loss rate and burst loss length observed by the 
applications."

Please let us know if this is acceptable.

Thanks,

-Giri Mandyam

From: Meral Shirazipour 
Sent: Thursday, January 31, 2019 11:50 PM
To: draft-ietf-payload-flexible-fec-scheme@ietf.org; gen-art@ietf.org
Subject: Gen-ART Last Call review of draft-ietf-payload-flexible-fec-scheme-16


I am the assigned Gen-ART reviewer for this draft. The General Area Review Team 
(Gen-ART) reviews all IETF documents being processed by the IESG for the IETF 
Chair.  Please treat these comments just like any other last call comments.

For more information, please see the FAQ at 
<https://trac.ietf.org/trac/gen/wiki/GenArtfaq>.

Document: draft-ietf-payload-flexible-fec-scheme-16

Reviewer: Meral Shirazipour
Review Date: 2019-01-31
IETF LC End Date: 2019-02-01
IESG Telechat date: NA


Summary: This draft is ready to be published as Standards Track RFC .
Major issues:

Minor issues:

Nits/editorial comments:
-Section 1.1.1 Title "One-Dimensionsal "--->"One-Dimensional"

-[Page 14] 3.2.  , "signficant"--->"significant"

-[Page 16], 4.2.1.  , "pakcets"--->"packets"

-[Page 35], 6.3.1.  , "reciever"--->"receiver"

-[Page 35], 6.3.1.1.  , "signficant"--->"significant"

-[Page 43], 7., "several Sesssion "--->"several Session "

-Section 8, "an application should avoid
   sending/receiving FEC repair streams if it knows that sending/
   receiving those FEC repair streams would not help at all in
   recovering the missing packets. It is RECOMMENDED that the amount
   and type (row, column, or both) of FEC protection is adjusted
   dynamically based on the packet loss rate and burst loss length
   observed by the applications."

How would the application know that sending/receiving those FEC repair streams 
would not help at all? any rule of thumb to add here?


Best Regards,
Meral
---
Meral Shirazipour
Ericsson
Research
www.ericsson.com
___
Gen-art mailing list
Gen-art@ietf.org
https://www.ietf.org/mailman/listinfo/gen-art


[Yahoo-eng-team] [Bug 1807297] [NEW] Add support to configure policy based routing

2018-12-06 Thread Giridhar Jayavelu
Public bug reported:

When there are multiple interfaces with their own default gateways, this
could result in asymmetric routing. In order to solve this, policy based
routing would have to be configured.

- There is already support to configure routes in cloud-init, but it lacks the
ability to specify a routing table.
- Need ability to create routing table entries in /etc/iproute2/rt_tables
- Configure routing table lookup for a given network.

Here is a sample configuration:
echo "100 eth0" >> /etc/iproute2/rt_tables
echo "101 eth1" >> /etc/iproute2/rt_tables

auto eth0
iface eth0 inet static
address 10.172.142.37/24
dns-nameservers 10.172.40.1
gateway 10.172.142.253
up ip rule add from 10.172.142.0/24 lookup eth0
up ip route add 10.172.142.0/24 dev eth0 table eth0
up ip route add default via 10.172.142.253 table eth0

auto eth1
iface eth1 inet static
address 10.172.144.56/24
dns-nameservers 10.172.40.1
gateway 10.172.144.253
up ip rule add from 10.172.144.0/24 lookup eth1
up ip route add 10.172.144.0/24 dev eth1 table eth1
up ip route add default via 10.172.144.253 table eth1

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1807297

Title:
  Add support to configure policy based routing

Status in cloud-init:
  New

Bug description:
  When there are multiple interfaces with their own default gateways,
  this could result in asymmetric routing. In order to solve this,
  policy based routing would have to be configured.

  - There is already support to configure routes in cloud-init, but it lacks the
ability to specify a routing table.
  - Need ability to create routing table entries in /etc/iproute2/rt_tables
  - Configure routing table lookup for a given network.

  Here is a sample configuration:
  echo "100 eth0" >> /etc/iproute2/rt_tables
  echo "101 eth1" >> /etc/iproute2/rt_tables

  auto eth0
  iface eth0 inet static
  address 10.172.142.37/24
  dns-nameservers 10.172.40.1
  gateway 10.172.142.253
  up ip rule add from 10.172.142.0/24 lookup eth0
  up ip route add 10.172.142.0/24 dev eth0 table eth0
  up ip route add default via 10.172.142.253 table eth0

  auto eth1
  iface eth1 inet static
  address 10.172.144.56/24
  dns-nameservers 10.172.40.1
  gateway 10.172.144.253
  up ip rule add from 10.172.144.0/24 lookup eth1
  up ip route add 10.172.144.0/24 dev eth1 table eth1
  up ip route add default via 10.172.144.253 table eth1

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1807297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


Re: [PATCH v4 0/5] qla2xxx: Add FC-NVMe Target support

2018-11-08 Thread Malavali, Giridhar
Thanks James. Please let us know. 

-- Giri

On 11/8/18, 9:18 AM, "linux-scsi-ow...@vger.kernel.org on behalf of James 
Smart"  
wrote:


Madhani,

I'll be looking through it over the weekend.

-- james


On 11/8/2018 8:58 AM, Madhani, Himanshu wrote:
> Hi James,
>
> Any more review comments?
>
>> On Oct 31, 2018, at 9:40 AM, Himanshu Madhani 
 wrote:
>>
>> Hi Martin,
>>
>> This series adds support for FC-NVMe Target.
>>
>> Patch #1 adds infrastructure to support FC-NVMeT Link Service processing.
>> Patch #2 adds new qla_nvmet.[ch] files for FC-NVMe Target support.
>> Patch #3 has the bulk of changes to add hooks into the common code infrastructure
and
>>  adds support for FC-NVMe Target LS4 processing via the Purex path.
>> Patch #4 adds SysFS hook to enable NVMe Target for the port.
>>
>> Please apply them to 4.21/scsi-queue at your earliest convenience.
>>
>> Changes from v3 -> v4
>> o Rebased Series on current 4.20/scsi-queue
>> o Removed NVMET_FCTGTFEAT_{CMD|OPDONE}_IN_ISR as per James Smart's 
review comment.
>>
>> Changes from v2 -> v3
>> o Reordered patches so that each patch compiles individually and is 
bisectable.
>>
>> Changes from v1 -> v2
>> o Addressed all comments from Bart.
>> o Consolidated Patch 1 and Patch 2 into single patch.
>> o Fixed smatch warning reported by kbuild autommation.
>> o NVMe Target mode is exclusive at the moment. Cavium driver does not 
support both
>>   FCP Target and NVMe Target at the same time. This will be fixed in 
later updates.
>>
>> Thanks,
>> Himanshu
>>
>> Anil Gurumurthy (4):
>>   qla2xxx_nvmet: Add FC-NVMe Target Link Service request handling
>>   qla2xxx_nvmet: Add files for FC-NVMe Target support
>>   qla2xxx_nvmet: Add FC-NVMe Target handling
>>   qla2xxx_nvmet: Add SysFS node for FC-NVMe Target
>>
>> Himanshu Madhani (1):
>>   qla2xxx: Update driver version to 11.00.00.00-k
>>
>> drivers/scsi/qla2xxx/Makefile  |   3 +-
>> drivers/scsi/qla2xxx/qla_attr.c|  33 ++
>> drivers/scsi/qla2xxx/qla_dbg.c |   1 +
>> drivers/scsi/qla2xxx/qla_dbg.h |   2 +
>> drivers/scsi/qla2xxx/qla_def.h |  35 +-
>> drivers/scsi/qla2xxx/qla_fw.h  | 263 ++
>> drivers/scsi/qla2xxx/qla_gbl.h |  24 +-
>> drivers/scsi/qla2xxx/qla_gs.c  |  16 +-
>> drivers/scsi/qla2xxx/qla_init.c|  49 +-
>> drivers/scsi/qla2xxx/qla_iocb.c|   8 +-
>> drivers/scsi/qla2xxx/qla_isr.c | 112 -
>> drivers/scsi/qla2xxx/qla_mbx.c | 101 +++-
>> drivers/scsi/qla2xxx/qla_nvme.h|  33 --
>> drivers/scsi/qla2xxx/qla_nvmet.c   | 831 +++
>> drivers/scsi/qla2xxx/qla_nvmet.h   | 129 +
>> drivers/scsi/qla2xxx/qla_os.c  |  75 ++-
>> drivers/scsi/qla2xxx/qla_target.c  | 977 
-
>> drivers/scsi/qla2xxx/qla_target.h  |  90 
>> drivers/scsi/qla2xxx/qla_version.h |   4 +-
>> 19 files changed, 2711 insertions(+), 75 deletions(-)
>> create mode 100644 drivers/scsi/qla2xxx/qla_nvmet.c
>> create mode 100644 drivers/scsi/qla2xxx/qla_nvmet.h
>>
>> --
>> 2.12.0
>>
> Hi Martin,
>
> if there are no more review comments. Can we merge this into 
4.21/scsi-queue.
>
> Thanks,
> - Himanshu
>






[KSHST-IT '5088'] Need maths blue print please

2018-11-03 Thread mahalakshmi giridhar
It's my humble request to send the maths blueprint as soon as possible.

-- 
You received this message because you are subscribed to the Google Groups 
"KSHST" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kshst+unsubscr...@googlegroups.com.
To post to this group, send email to kshst@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/kshst/3d5787c8-4958-4402-bfc3-1df6a1ac169f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Unit testing framework to test impala queries

2018-10-31 Thread Giridhar Yalavarthi
Hello Team,

Is there any framework to test Impala queries locally?
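
(A minimal sketch of the sort of thing we are after, as a plain JDBC smoke test;
the URL, port, database, and table are illustrative, and an Impala JDBC driver
is assumed to be on the classpath:)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical local check for an Impala query over JDBC.
public class ImpalaQueryTest {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:impala://localhost:21050/default");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT count(*) FROM my_table")) {
            while (rs.next()) {
                System.out.println("rows = " + rs.getLong(1));
            }
        }
    }
}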

Regards,
Giridhar


[openstack-dev] [helm] multiple nova compute nodes

2018-10-02 Thread Giridhar Jayavelu
Hi,
Currently, all nova components are packaged in the same helm chart, "nova". Are
there any plans to separate nova-compute from the rest of the services?
What should be the approach for deploying multiple nova compute nodes using
OpenStack helm charts?

Thanks,
Giri

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[jira] [Created] (IMPALA-7563) Hive runner runs hive in local.. Any similar thing related to this which runs impala in local

2018-09-12 Thread Naga Venkata Giridhar (JIRA)
Naga Venkata Giridhar created IMPALA-7563:
-

 Summary: Hive runner runs hive in local.. Any similar thing 
related to this which runs impala in local
 Key: IMPALA-7563
 URL: https://issues.apache.org/jira/browse/IMPALA-7563
 Project: IMPALA
  Issue Type: Question
  Components: Backend, Infrastructure
Reporter: Naga Venkata Giridhar


HiveRunner runs Hive locally. Is there anything similar to HiveRunner that
runs Impala locally?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-7529) UNIT TESTING FRAMEWORK FOR IMPALA

2018-09-05 Thread Naga Venkata Giridhar (JIRA)
Naga Venkata Giridhar created IMPALA-7529:
-

 Summary: UNIT TESTING FRAMEWORK FOR IMPALA
 Key: IMPALA-7529
 URL: https://issues.apache.org/jira/browse/IMPALA-7529
 Project: IMPALA
  Issue Type: New Feature
  Components: Infrastructure
Affects Versions: Impala 2.11.0
Reporter: Naga Venkata Giridhar
 Fix For: Not Applicable


We are looking for a unit testing framework to test queries written in Impala,
similar to HiveRunner. Please let us know if anything like this already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IMPALA-7196) Impala is not avoiding footers even after mentioning 'skip.footer.line.count' in ddl

2018-06-21 Thread Naga Venkata Giridhar (JIRA)
Naga Venkata Giridhar created IMPALA-7196:
-

 Summary: Impala is not avoiding footers even after mentioning 
'skip.footer.line.count' in ddl
 Key: IMPALA-7196
 URL: https://issues.apache.org/jira/browse/IMPALA-7196
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 2.11.0
Reporter: Naga Venkata Giridhar


We have noticed that skip.footer.line.count is not working in Impala, and we
are getting different counts in Impala and Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[Yahoo-eng-team] [Bug 1777491] [NEW] Avoid redundant compute node update

2018-06-18 Thread Giridhar Jayavelu
Public bug reported:

_update_available_resource() in nova/compute/resource_tracker.py invokes 
_init_compute_node() which internally calls _update() and once again _update() 
is invoked at the end of _update_available_resource().
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L762

This triggers update_provider_tree() or get_inventory() on the virt
driver, scanning all resources twice within same method.

** Affects: nova
 Importance: Undecided
 Assignee: Giridhar Jayavelu (gjayavelu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Giridhar Jayavelu (gjayavelu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777491

Title:
  Avoid redundant compute node update

Status in OpenStack Compute (nova):
  New

Bug description:
  _update_available_resource() in nova/compute/resource_tracker.py invokes 
_init_compute_node() which internally calls _update() and once again _update() 
is invoked at the end of _update_available_resource().
  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L762

  This triggers update_provider_tree() or get_inventory() on the virt
  driver, scanning all resources twice within the same method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[jira] [Resolved] (KAFKA-6645) Host Affinity to facilitate faster restarts of kafka streams applications

2018-04-02 Thread Giridhar Addepalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridhar Addepalli resolved KAFKA-6645.
---
Resolution: Information Provided

> Host Affinity to facilitate faster restarts of kafka streams applications
> -
>
> Key: KAFKA-6645
> URL: https://issues.apache.org/jira/browse/KAFKA-6645
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>    Reporter: Giridhar Addepalli
>Priority: Major
>
> Since Kafka Streams applications have a lot of state in their stores in general, 
> it would be good to remember the assignment of partitions to machines, so 
> that when the whole application is restarted for some reason, there is a way to 
> use the past assignment of partitions to machines and there won't be a need to 
> build up the whole state by reading off of the changelog kafka topic. This would 
> result in faster start-up.
> Samza has support for Host Affinity 
> ([https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html])
> KIP-54 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
>  , handles cases where some members of consumer group goes down / comes up, 
> and KIP-54 ensures there is minimal diff between assignments before and after 
> rebalance. 
> But to handle whole restart use case, we need to remember past assignment 
> somewhere, and use it after restart.
> Please let us know if this is already solved problem / some cleaner way of 
> achieving this objective



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6645) Host Affinity to facilitate faster restarts of kafka streams applications

2018-04-02 Thread Giridhar Addepalli (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422107#comment-16422107
 ] 

Giridhar Addepalli commented on KAFKA-6645:
---

[~guozhang]  & [~mjsax]

Thank you for the comments.

We noticed the same behavior as you explained.

It might be a good idea to include a line or two about the host affinity 
feature in the Kafka Streams documentation.

> Host Affinity to facilitate faster restarts of kafka streams applications
> -
>
> Key: KAFKA-6645
> URL: https://issues.apache.org/jira/browse/KAFKA-6645
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Giridhar Addepalli
>Priority: Major
>
> Since Kafka Streams applications have lot of state in the stores in general, 
> it would be good to remember the assignment of partitions to machines. So 
> that when whole application is restarted for some reason, there is a way to 
> use past assignment of partitions to machines and there won't be need to 
> build up whole state by reading off of changelog kafka topic. This would 
> result in faster start-up.
> Samza has support for Host Affinity 
> ([https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html])
> KIP-54 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
>  , handles cases where some members of consumer group goes down / comes up, 
> and KIP-54 ensures there is minimal diff between assignments before and after 
> rebalance. 
> But to handle whole restart use case, we need to remember past assignment 
> somewhere, and use it after restart.
> Please let us know if this is already solved problem / some cleaner way of 
> achieving this objective



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6645) Host Affinity to facilitate faster restarts of kafka streams applications

2018-03-14 Thread Giridhar Addepalli (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16398148#comment-16398148
 ] 

Giridhar Addepalli commented on KAFKA-6645:
---

Thank you for your reply [~mjsax]

Can you please provide us with a code pointer for this?

Is there some duration of time for which the leader of the consumer group 
waits for all the consumers to join the group?

Say a Kafka Streams application was running on 10 machines, and we have now 
stopped the application on all machines.

Now, say we are in the process of bringing the application up on those 
machines. During this window it should not be the case that Kafka Streams 
thinks the other machines are down and tries to assign partitions only among 
the machines that are currently up.

> Host Affinity to facilitate faster restarts of kafka streams applications
> -
>
> Key: KAFKA-6645
> URL: https://issues.apache.org/jira/browse/KAFKA-6645
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>    Reporter: Giridhar Addepalli
>Priority: Major
>
> Since Kafka Streams applications have lot of state in the stores in general, 
> it would be good to remember the assignment of partitions to machines. So 
> that when whole application is restarted for some reason, there is a way to 
> use past assignment of partitions to machines and there won't be need to 
> build up whole state by reading off of changelog kafka topic. This would 
> result in faster start-up.
> Samza has support for Host Affinity 
> ([https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html])
> KIP-54 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
>  , handles cases where some members of consumer group goes down / comes up, 
> and KIP-54 ensures there is minimal diff between assignments before and after 
> rebalance. 
> But to handle whole restart use case, we need to remember past assignment 
> somewhere, and use it after restart.
> Please let us know if this is already solved problem / some cleaner way of 
> achieving this objective



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6645) Host Affinity to facilitate faster restarts of kafka streams applications

2018-03-13 Thread Giridhar Addepalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridhar Addepalli updated KAFKA-6645:
--
Description: 
Since Kafka Streams applications have lot of state in the stores in general, it 
would be good to remember the assignment of partitions to machines. So that 
when whole application is restarted for some reason, there is a way to use past 
assignment of partitions to machines and there won't be need to build up whole 
state by reading off of changelog kafka topic. This would result in faster 
start-up.

Samza has support for Host Affinity 
([https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html])

KIP-54 
([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
 , handles cases where some members of consumer group goes down / comes up, and 
KIP-54 ensures there is minimal diff between assignments before and after 
rebalance. 

But to handle whole restart use case, we need to remember past assignment 
somewhere, and use it after restart.

Please let us know if this is already solved problem / some cleaner way of 
achieving this objective

  was:
Since Kafka Streams applications have lot of state in the stores in general, it 
would be good to remember the assignment of partitions to machines. So that 
when whole application is restarted for whatever reason, there is a way to use 
past assignment of partitions to machines and there won't be need to build up 
state by reading off of changelog kafka topic and would result in faster 
start-up.

Samza has support for Host Affinity 
([https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html])

KIP-54 
([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
 , handles cases where some members of consumer group goes down / comes up, and 
KIP-54 ensures there is minimal diff between assignments before and after 
rebalance. 

But to handle whole restart use case, we need to remember past assignment 
somewhere, and use it after restart.

Please let us know if this is already solved problem / some cleaner way of 
achieving this objective


> Host Affinity to facilitate faster restarts of kafka streams applications
> -
>
> Key: KAFKA-6645
> URL: https://issues.apache.org/jira/browse/KAFKA-6645
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>    Reporter: Giridhar Addepalli
>Priority: Major
>
> Since Kafka Streams applications have lot of state in the stores in general, 
> it would be good to remember the assignment of partitions to machines. So 
> that when whole application is restarted for some reason, there is a way to 
> use past assignment of partitions to machines and there won't be need to 
> build up whole state by reading off of changelog kafka topic. This would 
> result in faster start-up.
> Samza has support for Host Affinity 
> ([https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html])
> KIP-54 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
>  , handles cases where some members of consumer group goes down / comes up, 
> and KIP-54 ensures there is minimal diff between assignments before and after 
> rebalance. 
> But to handle whole restart use case, we need to remember past assignment 
> somewhere, and use it after restart.
> Please let us know if this is already solved problem / some cleaner way of 
> achieving this objective



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6645) Host Affinity to facilitate faster restarts of kafka streams applications

2018-03-13 Thread Giridhar Addepalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridhar Addepalli updated KAFKA-6645:
--
Summary: Host Affinity to facilitate faster restarts of kafka streams 
applications  (was: Sticky Partition Assignment to facilitate faster restarts 
of kafka streams applications)

> Host Affinity to facilitate faster restarts of kafka streams applications
> -
>
> Key: KAFKA-6645
> URL: https://issues.apache.org/jira/browse/KAFKA-6645
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>    Reporter: Giridhar Addepalli
>Priority: Major
>
> Since Kafka Streams applications have lot of state in the stores in general, 
> it would be good to remember the assignment of partitions to machines. So 
> that when whole application is restarted for whatever reason, there is a way 
> to use past assignment of partitions to machines and there won't be need to 
> build up state by reading off of changelog kafka topic and would result in 
> faster start-up.
> Samza has support for Host Affinity 
> ([https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html])
> KIP-54 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
>  , handles cases where some members of consumer group goes down / comes up, 
> and KIP-54 ensures there is minimal diff between assignments before and after 
> rebalance. 
> But to handle whole restart use case, we need to remember past assignment 
> somewhere, and use it after restart.
> Please let us know if this is already solved problem / some cleaner way of 
> achieving this objective



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6645) Sticky Partition Assignment to facilitate faster restarts of kafka streams applications

2018-03-13 Thread Giridhar Addepalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridhar Addepalli updated KAFKA-6645:
--
Description: 
Since Kafka Streams applications have lot of state in the stores in general, it 
would be good to remember the assignment of partitions to machines. So that 
when whole application is restarted for whatever reason, there is a way to use 
past assignment of partitions to machines and there won't be need to build up 
state by reading off of changelog kafka topic and would result in faster 
start-up.

Samza has support for Host Affinity 
([https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html])

KIP-54 
([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
 , handles cases where some members of consumer group goes down / comes up, and 
KIP-54 ensures there is minimal diff between assignments before and after 
rebalance. 

But to handle whole restart use case, we need to remember past assignment 
somewhere, and use it after restart.

Please let us know if this is already solved problem / some cleaner way of 
achieving this objective

  was:
Since Kafka Streams applications have lot of state in the stores in the 
general, it would be good to remember the assignment of partitions to machines. 
So that when whole application is restarted for whatever reason, there is a way 
to use past assignment of partitions to machines and there won't be need to 
build up state by reading off of changelog kafka topic and would result in 
faster start-up.

Samza has support for Host Affinity 
(https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html)

KIP-54 
([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
 , handles cases where some members of consumer group goes down / comes up, and 
KIP-54 ensures there is minimal diff between assignments before and after 
rebalance. 

But to handle whole restart use case, we need to remember past assignment 
somewhere, and use it after restart.

Please let us know if this is already solved problem / some cleaner way of 
achieving this objective


> Sticky Partition Assignment to facilitate faster restarts of kafka streams 
> applications
> ---
>
> Key: KAFKA-6645
> URL: https://issues.apache.org/jira/browse/KAFKA-6645
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Giridhar Addepalli
>Priority: Major
>
> Since Kafka Streams applications have lot of state in the stores in general, 
> it would be good to remember the assignment of partitions to machines. So 
> that when whole application is restarted for whatever reason, there is a way 
> to use past assignment of partitions to machines and there won't be need to 
> build up state by reading off of changelog kafka topic and would result in 
> faster start-up.
> Samza has support for Host Affinity 
> ([https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html])
> KIP-54 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
>  , handles cases where some members of consumer group goes down / comes up, 
> and KIP-54 ensures there is minimal diff between assignments before and after 
> rebalance. 
> But to handle whole restart use case, we need to remember past assignment 
> somewhere, and use it after restart.
> Please let us know if this is already solved problem / some cleaner way of 
> achieving this objective



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6645) Sticky Partition Assignment to facilitate faster restarts of kafka streams applications

2018-03-13 Thread Giridhar Addepalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridhar Addepalli updated KAFKA-6645:
--
Summary: Sticky Partition Assignment to facilitate faster restarts of kafka 
streams applications  (was: Sticky Partition Assignment across Kafka Streams 
application restarts)

> Sticky Partition Assignment to facilitate faster restarts of kafka streams 
> applications
> ---
>
> Key: KAFKA-6645
> URL: https://issues.apache.org/jira/browse/KAFKA-6645
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Giridhar Addepalli
>Priority: Major
>
> Since Kafka Streams applications have lot of state in the stores in the 
> general, it would be good to remember the assignment of partitions to 
> machines. So that when whole application is restarted for whatever reason, 
> there is a way to use past assignment of partitions to machines and there 
> won't be need to build up state by reading off of changelog kafka topic and 
> would result in faster start-up.
> Samza has support for Host Affinity 
> (https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html)
> KIP-54 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
>  , handles cases where some members of consumer group goes down / comes up, 
> and KIP-54 ensures there is minimal diff between assignments before and after 
> rebalance. 
> But to handle whole restart use case, we need to remember past assignment 
> somewhere, and use it after restart.
> Please let us know if this is already solved problem / some cleaner way of 
> achieving this objective



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6645) Sticky Partition Assignment across Kafka Streams application restarts

2018-03-13 Thread Giridhar Addepalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridhar Addepalli updated KAFKA-6645:
--
Issue Type: New Feature  (was: Bug)

> Sticky Partition Assignment across Kafka Streams application restarts
> -
>
> Key: KAFKA-6645
> URL: https://issues.apache.org/jira/browse/KAFKA-6645
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>    Reporter: Giridhar Addepalli
>Priority: Major
>
> Since Kafka Streams applications have lot of state in the stores in the 
> general, it would be good to remember the assignment of partitions to 
> machines. So that when whole application is restarted for whatever reason, 
> there is a way to use past assignment of partitions to machines and there 
> won't be need to build up state by reading off of changelog kafka topic and 
> would result in faster start-up.
> Samza has support for Host Affinity 
> (https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html)
> KIP-54 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
>  , handles cases where some members of consumer group goes down / comes up, 
> and KIP-54 ensures there is minimal diff between assignments before and after 
> rebalance. 
> But to handle whole restart use case, we need to remember past assignment 
> somewhere, and use it after restart.
> Please let us know if this is already solved problem / some cleaner way of 
> achieving this objective



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6645) Sticky Partition Assignment across Kafka Streams application restarts

2018-03-13 Thread Giridhar Addepalli (JIRA)
Giridhar Addepalli created KAFKA-6645:
-

 Summary: Sticky Partition Assignment across Kafka Streams 
application restarts
 Key: KAFKA-6645
 URL: https://issues.apache.org/jira/browse/KAFKA-6645
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Giridhar Addepalli


Since Kafka Streams applications have lot of state in the stores in the 
general, it would be good to remember the assignment of partitions to machines. 
So that when whole application is restarted for whatever reason, there is a way 
to use past assignment of partitions to machines and there won't be need to 
build up state by reading off of changelog kafka topic and would result in 
faster start-up.

Samza has support for Host Affinity 
(https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html)

KIP-54 
([https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy)]
 , handles cases where some members of consumer group goes down / comes up, and 
KIP-54 ensures there is minimal diff between assignments before and after 
rebalance. 

But to handle whole restart use case, we need to remember past assignment 
somewhere, and use it after restart.

Please let us know if this is already solved problem / some cleaner way of 
achieving this objective



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Kafka streams - regarding WordCountInteractiveQueriesExample

2017-11-29 Thread Giridhar Addepalli
Hi,

I am a newbie to Kafka Streams.

I tried the example below:

https://github.com/confluentinc/kafka-streams-examples/blob/4.0.x/src/main/java/io/confluent/examples/streams/interactivequeries/WordCountInteractiveQueriesExample.java

http://localhost:7070/state/instances

[
{
"host": "localhost",
"port": 7070,
"storeNames": [
"windowed-word-count",
"word-count"
]
},
{
"host": "localhost",
"port": 7071,
"storeNames": [
"windowed-word-count",
"word-count"
]
}
]



I was able to query for the count of a given word:

http://localhost:7070/state/keyvalue/word-count/hello

{
"key": "hello",
"value": 444
}



http://localhost:7071/state/keyvalue/word-count/world

{
"key": "world",
"value": 906
}


But I am not able to replicate the following part (highlighted) of the
advertised behavior in the example:

* 5) Use your browser to hit the REST endpoint of the app instance you
started in step 3 to query
* the state managed by this application.  Note: If you are running
multiple app instances, you can
* query them arbitrarily -- if an app instance cannot satisfy a query
itself, it will fetch the
* results from the other instances.


For example, the following gave a 404 error:
http://localhost:7070/state/keyvalue/word-count/world

HTTP ERROR: 404

Problem accessing /state/keyvalue/word-count/world. Reason:

Not Found
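
As a debugging workaround, the metadata from /state/instances (shown above)
can be used to find the instance that does host the key. A minimal Python
sketch, using only the two REST endpoints demonstrated in this message (the
built-in proxying is supposed to happen via KafkaStreams#metadataForKey on
the Java side):

import requests

def lookup(word, bootstrap="http://localhost:7070"):
    # ask any instance for the full list of instances and their stores
    instances = requests.get(bootstrap + "/state/instances").json()
    for inst in instances:
        if "word-count" not in inst["storeNames"]:
            continue
        # try each instance until one returns the key
        r = requests.get("http://%s:%d/state/keyvalue/word-count/%s"
                         % (inst["host"], inst["port"], word))
        if r.status_code == 200:
            return r.json()  # e.g. {"key": "world", "value": 906}
    return None

print(lookup("world"))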

Please let me know where my expectations are going wrong.
Please note that I have tried both the 3.3.0-post and 4.0.0-post branches.

Thanks,
Giridhar.


Comparison between Samza and Kafka Streams

2017-11-23 Thread Giridhar Addepalli
Hi,

Thank you for providing a comparison between Samza and Spark Streaming,
Mupd8, and Storm.
It looks like there is a new player in the field: Kafka Streams
(https://docs.confluent.io/current/streams/index.html).

It would be good to have a comparison between Samza and Kafka Streams as well.

From a high level, it looks like "Samza when used as a library" is similar to
"Kafka Streams".

Thanks,
Giridhar.


Samza Standalone from multiple machines

2017-10-18 Thread Giridhar Addepalli
Hi,

I am new to Samza.
We are evaluating using Samza in standalone mode.

We were able to run "Hello Samza" using the ZooKeeper deployment model on a
single machine:
http://samza.apache.org/learn/tutorials/latest/hello-samza-high-level-zk.html

We are wondering how to run a Samza job using the ZooKeeper deployment model
across multiple machines.
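
(For reference, the ZooKeeper-based coordination in that tutorial is driven
by job properties; the snippet below assumes the documented
ZkJobCoordinatorFactory settings, with a placeholder ZooKeeper address.
Launching the same application with the same config on each machine should
let the instances coordinate through ZooKeeper.)

# standalone coordination via ZooKeeper; zk-host:2181 is a placeholder
job.coordinator.factory=org.apache.samza.zk.ZkJobCoordinatorFactory
job.coordinator.zk.connect=zk-host:2181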

Please point us to relevant documentation or share any suggestions.

Thanks,
Giridhar.


Re: [PATCH] qla2xxx: Protect access to qpair members with qpair->qp_lock

2017-06-23 Thread Malavali, Giridhar
Yes, it's needed. We will post a patch.

On 6/23/17, 9:34 AM, "Julia Lawall"  wrote:

>Please check on whether an unlock is needed before line 1965.
>
>julia
>
>-- Forwarded message --
>Date: Fri, 23 Jun 2017 15:23:00 +0800
>From: kbuild test robot 
>To: kbu...@01.org
>Cc: Julia Lawall 
>Subject: Re: [PATCH] qla2xxx: Protect access to qpair members with
>qpair->qp_lock
>
>CC: kbuild-...@01.org
>In-Reply-To: <20170622134325.26931-1-jthumsh...@suse.de>
>
>Hi Johannes,
>
>[auto build test WARNING on scsi/for-next]
>[also build test WARNING on v4.12-rc6 next-20170622]
>[if your patch is applied to the wrong git tree, please drop us a note to
>help improve the system]
>
>url:
>https://github.com/0day-ci/linux/commits/Johannes-Thumshirn/qla2xxx-Protec
>t-access-to-qpair-members-with-qpair-qp_lock/20170623-123844
>base:   https://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi.git
>for-next
>:: branch date: 3 hours ago
>:: commit date: 3 hours ago
>
>>> drivers/scsi/qla2xxx/qla_iocb.c:1965:3-9: preceding lock on line 1952
>
>git remote add linux-review https://github.com/0day-ci/linux
>git remote update linux-review
>git checkout 4a35d720268dbe9ac016957a3c4fc644398d68ba
>vim +1965 drivers/scsi/qla2xxx/qla_iocb.c
>
>d74595278 Michael Hernandez  2016-12-12  1946  /* Only process
>protection or >16 cdb in this routine */
>d74595278 Michael Hernandez  2016-12-12  1947  if 
>(scsi_get_prot_op(cmd)
>== SCSI_PROT_NORMAL) {
>d74595278 Michael Hernandez  2016-12-12  1948  if 
>(cmd->cmd_len <= 16)
>d74595278 Michael Hernandez  2016-12-12  1949  return
>qla2xxx_start_scsi_mq(sp);
>d74595278 Michael Hernandez  2016-12-12  1950  }
>d74595278 Michael Hernandez  2016-12-12  1951
>4a35d7202 Johannes Thumshirn 2017-06-22 @1952
>   spin_lock_irqsave(>qp_lock, flags);
>4a35d7202 Johannes Thumshirn 2017-06-22  1953
>d74595278 Michael Hernandez  2016-12-12  1954  /* Setup qpair pointers 
>*/
>d74595278 Michael Hernandez  2016-12-12  1955  rsp = qpair->rsp;
>d74595278 Michael Hernandez  2016-12-12  1956  req = qpair->req;
>d74595278 Michael Hernandez  2016-12-12  1957
>d74595278 Michael Hernandez  2016-12-12  1958  /* So we know we haven't
>pci_map'ed anything yet */
>d74595278 Michael Hernandez  2016-12-12  1959  tot_dsds = 0;
>d74595278 Michael Hernandez  2016-12-12  1960
>d74595278 Michael Hernandez  2016-12-12  1961  /* Send marker if
>required */
>d74595278 Michael Hernandez  2016-12-12  1962  if (vha->marker_needed 
>!=
>0) {
>d74595278 Michael Hernandez  2016-12-12  1963  if 
>(qla2x00_marker(vha,
>req, rsp, 0, 0, MK_SYNC_ALL) !=
>d74595278 Michael Hernandez  2016-12-12  1964  QLA_SUCCESS)
>d74595278 Michael Hernandez  2016-12-12 @1965  return
>QLA_FUNCTION_FAILED;
>d74595278 Michael Hernandez  2016-12-12  1966  
>vha->marker_needed = 0;
>d74595278 Michael Hernandez  2016-12-12  1967  }
>d74595278 Michael Hernandez  2016-12-12  1968
>
>:: The code at line 1965 was first introduced by commit
>:: d74595278f4ab192af66d9e60a9087464638beee scsi: qla2xxx: Add
>multiple queue pair functionality.
>
>:: TO: Michael Hernandez 
>:: CC: Martin K. Petersen 
>
>---
>0-DAY kernel test infrastructureOpen Source Technology
>Center
>https://lists.01.org/pipermail/kbuild-all   Intel
>Corporation



Re: VFS: mount_bdev and fill_super

2017-06-18 Thread Shrikant Giridhar
On Fri, 16 Jun 2017, Rohan Puri wrote:

> If s_root is set it means the superblock is already filled up so call
> fill_super() only if s_root is NULL meaning superblock is not filled yet.

I tried looking at the d_name of the s_root dentry when mounting a device and
it's always '/'. I presume this is due to it being the root dentry of the
subtree we're attaching to the main file hierarchy.

Can the dentry of a superblock be something other than '/' on mounting it?

Apologies if my question is a little vague. Any documentation about this would
help too.

---
Shrikant

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: VFS: mount_bdev and fill_super

2017-06-16 Thread Shrikant Giridhar
> I just had a quick look at mount_bdev() code for Linux v3.15, mount_bdev()
> can return only root dentry or error.

I'm sorry, I should have been clearer. I was referring to the documentation
of mount(), where it says that the call can return something other than the
root dentry.

More specifically, I don't understand the purpose of the if condition
checking for s_root in mount_bdev(). Apologies if this is a trivial
question. Tracking the calls all the way down didn't seem to make it clearer,
and I couldn't find specific documentation about the interface.

> Also please include the kernel version to which you are referring.

I have the 4.12-rc5 kernel.

---
Shrikant

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: [PATCH] qla2xxx: don't disable a not previously enabled PCI device

2017-05-24 Thread Malavali, Giridhar


On 5/24/17, 8:47 AM, "Bart Van Assche" <bart.vanass...@sandisk.com> wrote:

>On Tue, 2017-05-23 at 16:50 +0200, Johannes Thumshirn wrote:
>> When pci_enable_device() or pci_enable_device_mem() fail in
>> qla2x00_probe_one() we bail out but do a call to
>> pci_disable_device(). This causes the dev_WARN_ON() in
>> pci_disable_device() to trigger, as the device wasn't enabled
>> previously.
>> 
>> So instead of taking the 'probe_out' error path we can directly return
>> *iff* one of the pci_enable_device() calls fails.
>> 
>> Additionally rename the 'probe_out' goto label's name to the more
>> descriptive 'disable_device'.
>> 
>> Signed-off-by: Johannes Thumshirn <jthumsh...@suse.de>
>> Fixes: e315cd28b9ef ("[SCSI] qla2xxx: Code changes for qla data
>>structure refactoring")
>
>Hello Johannes,
>
>Please consider adding a Cc: stable tag to this patch. Since otherwise
>this
>patch looks fine to me:
>
>Reviewed-by: Bart Van Assche <bart.vanass...@sandisk.com>

Looks good to me. 

Reviewed-by: Giridhar Malavali <giridhar.malav...@cavium.com>



[BangPypers] [Job] CallHub is hiring Python developers for multiple positions

2017-05-24 Thread Chetan Giridhar
Hello,

CallHub[1] is a next-generation SaaS-based marketing platform leveraging
cloud telephony. We have customers around the globe who love the product and
constantly push us to do more. We are scaling up the product and looking
for core members to join our engineering team in Bangalore.

Senior Engineer (Python/Django):
https://hasjob.co/callhub.io/zsppm

Lead Engineer (Python/Django, Scalability):
https://hasjob.co/callhub.io/qbdpf

How to Apply:
Send your resumes to j...@callhub.io or me. Please do not reply all to this
thread.

[1]: https://callhub.io

-- 
Regards,
Chetan
___
BangPypers mailing list
BangPypers@python.org
https://mail.python.org/mailman/listinfo/bangpypers


Re: [PATCH v3 00/14] qla2xxx: Bug Fixes and updates for target.

2017-03-09 Thread Malavali, Giridhar


On 3/8/17, 7:42 PM, "Nicholas A. Bellinger" <n...@linux-iscsi.org> wrote:

>Hi Giri,
>
>On Wed, 2017-03-08 at 18:30 +, Malavali, Giridhar wrote:
>> 
>> On 3/8/17, 7:20 AM, "Bart Van Assche" <bart.vanass...@sandisk.com>
>>wrote:
>> 
>> >On Tue, 2017-03-07 at 23:34 -0800, Nicholas A. Bellinger wrote:
>> >> Btw, the regression reported here in v2:
>> >> 
>> >> http://www.spinics.net/lists/target-devel/msg14348.html
>> >> 
>> >> is completely different from what you've reported here.
>> >
>> >The call traces differ but the root cause is probably the same.
>> >
>> >> It would be useful to explain how you reproduced this, instead of
>>just
>> >> posting backtrace with zero context..?
>> >> 
>> >> Can we at least identify which patch in this series is causing
>>this..?
>> >> 
>> >> Also, I assume you are running this on stock v4.11-rc1 with only this
>> >> qla2xxx series applied, and not all of your other stuff, right..?
>> >
>> >The test I ran against v4.11-rc1 + this patch series is to start LIO
>>on a
>> >system equipped with two back-to-back connected QLogic FC HBAs (no
>>switch
>> >inbetween), to load the tcm_qla2xxx driver, to configure LUNs and to
>>wait
>> >until the SCSI stack reports that these LUNs have appeared. What I see
>>in
>> >the lsscsi output with both v2 and v3 of this patch series is that
>>these
>> >LUNs appear briefly and then disappear and that a little bit later the
>> >kernel reports that a hang occurred. Without this patch series the LUNs
>> >are
>> >detected and do not disappear automatically and no hang is reported. I
>> >think the next step is that Cavium verifies whether they can reproduce
>> >this
>> >behavior and if they can reproduce it to run a bisect. BTW, since there
>> >are
>> >login-related patches in this series I wouldn't be surprised if one of
>> >these
>> >patches introduced the regression.
>> 
>> We generally go through switch. We will try to reproduce with back to
>>back
>> and bisect the patches.
>> 
>
>Just a heads up given this series mixes bug-fixes with a few other
>miscellaneous improvements, I'd like to get it pushed to Linus with the
>next round of patches no later than -rc3.
>
>If it's beyond -rc3, then the bug-fixes will need to be split out for
>v4.11-rc separate from the other improvements.

Thanks for the heads-up. We are working on the issue Bart reported. We are
able to recreate this internally.
We should be able to wrap it up in the next few days and send the final
patches. I don't think anything else is outstanding.

-- Giri

>



Re: [PATCH v3 00/14] qla2xxx: Bug Fixes and updates for target.

2017-03-08 Thread Malavali, Giridhar


On 3/8/17, 7:20 AM, "Bart Van Assche"  wrote:

>On Tue, 2017-03-07 at 23:34 -0800, Nicholas A. Bellinger wrote:
>> Btw, the regression reported here in v2:
>> 
>> http://www.spinics.net/lists/target-devel/msg14348.html
>> 
>> is completely different from what you've reported here.
>
>The call traces differ but the root cause is probably the same.
>
>> It would be useful to explain how you reproduced this, instead of just
>> posting backtrace with zero context..?
>> 
>> Can we at least identify which patch in this series is causing this..?
>> 
>> Also, I assume you are running this on stock v4.11-rc1 with only this
>> qla2xxx series applied, and not all of your other stuff, right..?
>
>The test I ran against v4.11-rc1 + this patch series is to start LIO on a
>system equipped with two back-to-back connected QLogic FC HBAs (no switch
>inbetween), to load the tcm_qla2xxx driver, to configure LUNs and to wait
>until the SCSI stack reports that these LUNs have appeared. What I see in
>the lsscsi output with both v2 and v3 of this patch series is that these
>LUNs appear briefly and then disappear and that a little bit later the
>kernel reports that a hang occurred. Without this patch series the LUNs
>are
>detected and do not disappear automatically and no hang is reported. I
>think the next step is that Cavium verifies whether they can reproduce
>this
>behavior and if they can reproduce it to run a bisect. BTW, since there
>are
>login-related patches in this series I wouldn't be surprised if one of
>these
>patches introduced the regression.

We generally go through a switch. We will try to reproduce with back-to-back
connections and bisect the patches.

-- Giri


>
>Bart.



[influxdb] Re: Use Kapacitor with ElasticSearch

2017-02-12 Thread Ramey Giridhar
Thanks Nathaniel, I will probably try to write a translation layer to
convert InfluxDB queries to ES queries. I will get back to you when I am done.
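
(For the first approach Nathaniel describes below, bulk-querying ES and
writing InfluxDB line protocol to Kapacitor, a rough Python sketch; the index
name, document fields, and hosts are placeholders, and it assumes the
elasticsearch client library and Kapacitor's InfluxDB-compatible /write
endpoint:)

import requests
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

def es_to_line_protocol(index, measurement):
    # bulk-read documents from ES and convert each hit into a line of
    # InfluxDB line protocol: measurement,tags fields timestamp(ns)
    hits = es.search(index=index, size=1000)["hits"]["hits"]
    lines = []
    for h in hits:
        src = h["_source"]
        ts_ns = int(src["timestamp_ms"]) * 1000000  # placeholder field name
        lines.append("%s,host=%s value=%s %d"
                     % (measurement, src["host"], src["value"], ts_ns))
    return "\n".join(lines)

def write_to_kapacitor(lines, db="mydb", rp="autogen"):
    # Kapacitor accepts InfluxDB-style writes on its HTTP API (port 9092)
    requests.post("http://localhost:9092/write",
                  params={"db": db, "rp": rp},
                  data=lines.encode("utf-8"))

write_to_kapacitor(es_to_line_protocol("metrics-2017.02", "cpu"))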

On Fri, Feb 10, 2017 at 10:57 PM,  wrote:

> Hey, thanks for posting the question here.
>
> My initial suggestion would be to explore writing a translation layer that
> would bulk query ES and then convert the data to something Kapacitor can
> consume( best option here is InfluxDB line protocol). Then bulk write that
> data to Kapacitor. This is obviously not ideal since it means that you
> don't get the benefit of Kapacitor batch task scheduling etc. But it might
> get you started for now.
>
> Another option is to write a translation layer the other way, meaning take
> an InfluxDB query and convert it into an ES query. This way you get the
> scheduling features etc of Kapacitor and basically hide the ES cluster
> behind the InfluxDB query API. This likely represents more work but would
> provide a better integration.
>
> Unfortunately a quick search didn't reveal any existing work for either of
> these approaches.
>
> This does bring up an interesting design question. Currently Kapacitor
> integrates with a handful of other technologies for accepting writes but
> only integrates with InfluxDB for batch work. Seems like it might be a nice
> feature to extend the batch task  capabilities so that it can query other
> systems as well. I'll start an internal discussion about this and see where
> it goes.
>
>
> Let me know if I can clarify more about how those translation layers would
> work
>
> Bests,
> Nathaniel Cook
>
>
> On Thursday, February 9, 2017 at 9:58:18 PM UTC-7, ra...@exotel.in wrote:
>>
>> Hello,
>>
>> We have all our Data in our ElasticSeacrh and we want some alerting
>> mechanism on top of it. We tried Kapacitor and loved it, but the limitation
>> is it doesn't work with ElasticSearch. We would like to move to Influxdb,
>> but currently it is not possible because that would require a lot of
>> changes to our code base. So how can we use Kapacitor to do bulk queries in
>> ElasticSearch and have alerting on top of it??
>>
>> Thanks
>>
>>

-- 
Remember to include the version number!
--- 
You received this message because you are subscribed to the Google Groups 
"InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to influxdb+unsubscr...@googlegroups.com.
To post to this group, send email to influxdb@googlegroups.com.
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/influxdb/CAOff4rGcd6YsH8cWnQt8n-vErrkCTQ%2BMFXOea11Vhz7P1w9QOA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet Users] facing issues while using fact hashes

2016-11-25 Thread giridhar kazama
Hello,

Thanks for your valuable suggestions, but they didn't help me much.
Could you please help me with some brief documentation or existing examples
which I can use to create some other hashes in a similar fashion?
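
(For reference, the stringify_facts setting mentioned below goes into
puppet.conf on the master and agents; with string facts disabled, structured
facts are exposed as real hashes and arrays:)

# /etc/puppet/puppet.conf (Puppet 3.8)
[main]
stringify_facts = false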

Thanks
Giridhar

On 24 November 2016 at 23:12, R.I.Pienaar <r...@devco.net> wrote:

>
>
> - Original Message -----
> > From: "giridhar kazama" <giridhar0...@gmail.com>
> > To: "puppet-users" <puppet-users@googlegroups.com>
> > Sent: Thursday, 24 November, 2016 18:34:39
> > Subject: [Puppet Users] facing issues while using fact hashes
>
> > Hello,
> >
> >
> > We are trying to use fact hashes in satellite 6.2(puppet 3.8) and getting
> > some weird errors like Error: facts is not a hash or array when accessing
> > it with distro at /etc/puppet/modules/kazama/manifests/init.pp
> >
> > we are completely stuck and unable to proceed further.
> >
> > Could someone please help us with some examples of hashes or else
> redirect
> > us to a site where we can find more about the fact hashes.
>
> did you configure puppet for it? https://docs.puppet.com/
> puppet/3.8/reference/lang_facts_and_builtin_vars.html#
> the-factsfactname-hash
>
> probably also need to set stringify_facts false
>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "Puppet Users" group.
> To unsubscribe from this topic, visit https://groups.google.com/d/
> topic/puppet-users/IInp5t1p-10/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> puppet-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/puppet-users/27712352.576929.1480009326714.JavaMail.
> zimbra%40devco.net.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Thanks & Regards

Kazama Giridhar



Call : 9160002737
 9059876488
Email : giridhar0...@gmail.com
   giridhar_...@yahoo.com

--
DISCLAIMER:
This e-mail and any files transmitted with it are for the sole use of the
intended recipient(s) and may contain confidential and privileged
information.If you are not the intended recipient, please contact the
sender by reply e-mail and destroy all copies of the original message.
Any unauthorized review, use, disclosure, dissemination, forwarding,
printing or copying of this email or any action taken in reliance on
this e-mail is strictly prohibited and may be unlawful.
---

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/CADh-vjw%3DnZg07Ey6mKagxGPc3%2BrV_-FSW-HH7w_-X5z3oq1Uiw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet Users] facing issues while using fact hashes

2016-11-24 Thread giridhar kazama
Hello,


We are trying to use fact hashes in Satellite 6.2 (Puppet 3.8) and are getting 
some weird errors like "Error: facts is not a hash or array" when accessing 
it with distro at /etc/puppet/modules/kazama/manifests/init.pp.

We are completely stuck and unable to proceed further.

Could someone please help us with some examples of hashes, or else redirect 
us to a site where we can find out more about fact hashes?

Thanks in advance
Giridhar

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/0f7d20cf-4215-41f9-bcad-be9ec803554c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Yahoo-eng-team] [Bug 1623738] [NEW] Disabled state of host is not updated when reason is not provided.

2016-09-14 Thread Giridhar Jayavelu
Public bug reported:

When _set_host_enabled() in virt/libvirt/driver.py
is called to disable the service status of a host without
providing a disabled_reason, a "TypeError: cannot concatenate 'str' and
'NoneType' objects" is raised. This prevents the disabled state from getting
updated.

Before concatenating disable_reason with DISABLE_PREFIX, 
disabled_reason should be checked to see whether it is defined.
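
A minimal sketch of that guard in Python (illustrative only, not the merged
patch; the fallback string is an assumption):

DISABLE_PREFIX = 'AUTO: '

def format_disabled_reason(disable_reason):
    # str + None raises TypeError, which is the failure described above,
    # so only concatenate when a reason was actually supplied
    if disable_reason:
        return DISABLE_PREFIX + disable_reason
    return 'disabled: no reason given'  # assumed fallback label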

** Affects: nova
 Importance: Undecided
 Assignee: Giridhar Jayavelu (gjayavelu)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Giridhar Jayavelu (gjayavelu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623738

Title:
  Disabled state of host is not updated when reason is not provided.

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When _set_host_enabled() in virt/libvirt/driver.py
  is called to disable the service status of a host without
  providing a disabled_reason, a "TypeError: cannot concatenate 'str' and
  'NoneType' objects" is raised. This prevents the disabled state from
getting updated.

  Before concatenating disable_reason with DISABLE_PREFIX, 
  disabled_reason should be checked to see whether it is defined.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


Re: [openstack-dev] [nova] VMware NSX CI - status?

2016-09-12 Thread Giridhar Jayavelu
Matt,
There was some cleanup done around 2:30pm (PST), and that's why the logs are 
missing.
The previous failures were due to a low value set for FIXED_NETWORK_SIZE in the 
devstack configuration file. It has been increased to 126 now.

Thanks,
Giri

> On Sep 12, 2016, at 4:29 PM, Matt Riedemann  
> wrote:
> 
> I'd like to know the latest status on the VMware NSX CI. It's not 
> automatically reporting on nova changes, I had to manually check it several 
> times on this change:
> 
> https://review.openstack.org/#/c/281134/
> 
> And then when it does report it fails every time, and the logs are not 
> available.
> 
> Is someone from the VMware team investigating this?
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: Reason for keys parameter to reduce function instead of key

2016-09-01 Thread Giridhar Addepalli
Thank you so much Stefan.
Your reply is very helpful.


On Thu, Sep 1, 2016 at 4:09 PM, Stefan Klein <st.fankl...@gmail.com> wrote:

> Hi,
>
> 2016-09-01 12:22 GMT+02:00 Giridhar Addepalli <giridhar1...@gmail.com>:
>
> > Hi,
> >
> > Function declaration for reduce functions look like
> >
> > function(keys, values, rereduce) {
> > }
> >
> > My doubt is regarding the 'keys' parameter.
> > Can multiple keys be passed to single invocation of custom reduce
> function
> > ?
> >
>
> Yes. Each key corresponds to the value of the same index.
> The keys could be same but could also differ if the view was queried with a
> grouplevel.
>
> e.g. you have 3 documents, the view emits the keys:
> [1,2,3]
> [1,2,4]
> [1,3,5]
>
> if you call the view with "group_level" = 1, reduce will be called once¹
> and keys² will be:
> [
> [[1,2,3], documentid],
> [[1,2,4], documentid],
> [[1,3,5], documentid]
> ]
>
> the result is assigned to "[1]"
>
> if you call the view with "group_level" = 2, reduce will be called twice¹
> and keys² will be:
> [
> [[1,2,3], documentid],
> [[1,2,4], documentid]
> ]
>
> this result is assigned to "[1,2]"
>
> and
> [
> [[1,3,5], documentid]
> ]
>
> this result is assigned to "[1,3]"
>
>
> Hope this helps,
> Stefan
>
>
> [1]: ignoring rereduce
> [2]: CouchDB 1.6.1 here
>


Reason for keys parameter to reduce function instead of key

2016-09-01 Thread Giridhar Addepalli
Hi,

The function declaration for reduce functions looks like

function(keys, values, rereduce) {
}

My question is regarding the 'keys' parameter.
Can multiple keys be passed to a single invocation of a custom reduce function?
In the Hadoop world, only one key is passed to the reduce function, along with
all the values for that key.

From what I have read on the internet so far, it seems that multiple keys can
be sent to the reduce function.
If this is the case, then against which key is the result of the reduce
function assigned?

Thanks,
Giridhar.


Bug#836176: RFA: axel -- light command line download accelerator

2016-08-31 Thread Y Giridhar Appaji Nag
Package: wnpp
Severity: normal

I request an adopter for the axel package.

The package description is:
 Axel tries to accelerate the downloading process by using multiple
 connections for one file, similar to DownThemAll and other famous
 programs. It can also use multiple mirrors for one download.
 .
 Using Axel, you will get files faster from Internet. So, Axel can
 speed up a download up to 60% (approximately, according to some tests).
 .
 Axel tries to be as light as possible, so it might be useful as a
 wget clone (and other console based programs) on byte-critical systems.




Re: [Openstack] Issue with assignment of Intel’s QAT Card to VM (PCI-passthrough) using openstack-mitaka release on Cent OS 7.2 host

2016-06-07 Thread Giridhar Jayavelu
The alias has device_type as "type-PCI"
pci_alias = {"name": "QuickAssist", "product_id": "0435", "vendor_id": "8086", 
"device_type": "type-PCI"}

But from the maria DB output, you can see that the devices are "type-PF".
Please change the device_type in the alias if you intend to attach physical 
function to the instance.
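
For example, keeping the vendor and product IDs from the alias above, the
corrected entry would look like this (illustrative):

pci_alias = {"name": "QuickAssist", "product_id": "0435", "vendor_id": "8086",
"device_type": "type-PF"}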



> On Jun 6, 2016, at 11:22 PM, Chinmaya Dwibedy  wrote:
> 
> pci_alias = {"name": "QuickAssist", "product_id": "0435", "vendor_id": 
> "8086", "device_type": "type-PCI"}


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[jira] [Commented] (CASSANDRA-11602) Materialized View Doest Not Have Static Columns

2016-04-19 Thread giridhar muralibabu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248086#comment-15248086
 ] 

giridhar muralibabu commented on CASSANDRA-11602:
-

Adding static columns in MVs will help developers make maximum use of 
materialized views.

> Materialized View Doest Not Have Static Columns
> ---
>
> Key: CASSANDRA-11602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11602
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ravishankar Rajendran
>Assignee: Carl Yeksigian
>
> {quote}
> CREATE TABLE "team" (teamname text, manager text, location text static, 
> PRIMARY KEY ((teamname), manager));
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull1', 
> 'Ricciardo11', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo12', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo13', 'Australian');
> select * From team;
> CREATE MATERIALIZED VIEW IF NOT EXISTS "teamMV" AS SELECT "teamname", 
> "manager", "location" FROM "team" WHERE "teamname" IS NOT NULL AND "manager" 
> is NOT NULL AND "location" is NOT NULL PRIMARY KEY("manager", "teamname");  
> select * from "teamMV";
> {quote}
> The teamMV does not have "location" column. Static columns are not getting 
> created in MV.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [SCSI] qla2xxx: Enhancements to support ISPFx00.

2016-04-15 Thread Giridhar Malavali


On 4/13/16, 5:39 AM, "Dan Carpenter" <dan.carpen...@oracle.com> wrote:

>Hello Giridhar Malavali,
>
>The patch 8ae6d9c7eb10: "[SCSI] qla2xxx: Enhancements to support
>ISPFx00." from Mar 28, 2013, leads to the following static checker
>warning:
>
>   drivers/scsi/qla2xxx/qla_mr.c:2264 qlafx00_ioctl_iosb_entry()
>   error: uninitialized symbol 'res'.

Dan,

We will rework the patch and post it. Thanks for bringing this to our
attention. 

-- Giri


>
>drivers/scsi/qla2xxx/qla_mr.c
>  2210  struct srb_iocb *iocb_job;
>  2211  int res;
>^^^
>  2212  struct qla_mt_iocb_rsp_fx00 fstatus;
>  2213  uint8_t *fw_sts_ptr;
>  2214  
>  2215  sp = qla2x00_get_sp_from_handle(vha, func, req, pkt);
>  2216  if (!sp)
>  2217  return;
>  2218  
>  2219  if (sp->type == SRB_FXIOCB_DCMD) {
>  2220  iocb_job = >u.iocb_cmd;
>  2221  iocb_job->u.fxiocb.seq_number = pkt->seq_no;
>    iocb_job->u.fxiocb.fw_flags = pkt->fw_iotcl_flags;
>  2223  iocb_job->u.fxiocb.result = pkt->status;
>  2224  if (iocb_job->u.fxiocb.flags &
>SRB_FXDISC_RSP_DWRD_VALID)
>  2225  iocb_job->u.fxiocb.req_data =
>  2226  pkt->dataword_r;
>
>res isn't set on this path.
>
>  2227  } else {
>  2228  bsg_job = sp->u.bsg_job;
>  2229  
>  2230  memset(, 0, sizeof(struct
>qla_mt_iocb_rsp_fx00));
>  2231  
>  2232  fstatus.reserved_1 = pkt->reserved_0;
>  2233  fstatus.func_type = pkt->comp_func_num;
>  2234  fstatus.ioctl_flags = pkt->fw_iotcl_flags;
>  2235  fstatus.ioctl_data = pkt->dataword_r;
>  2236  fstatus.adapid = pkt->adapid;
>  2237  fstatus.reserved_2 = pkt->dataword_r_extra;
>  2238  fstatus.res_count = pkt->residuallen;
>  2239  fstatus.status = pkt->status;
>  2240  fstatus.seq_number = pkt->seq_no;
>  2241  memcpy(fstatus.reserved_3,
>  2242  pkt->reserved_2, 20 * sizeof(uint8_t));
>  2243  
>  2244  fw_sts_ptr = ((uint8_t *)bsg_job->req->sense) +
>  2245  sizeof(struct fc_bsg_reply);
>  2246  
>  2247  memcpy(fw_sts_ptr, (uint8_t *),
>  2248  sizeof(struct qla_mt_iocb_rsp_fx00));
>  2249  bsg_job->reply_len = sizeof(struct fc_bsg_reply) +
>  2250  sizeof(struct qla_mt_iocb_rsp_fx00) +
>sizeof(uint8_t);
>  2251  
>  2252  ql_dump_buffer(ql_dbg_user + ql_dbg_verbose,
>  2253  sp->fcport->vha, 0x5080,
>  2254  (uint8_t *)pkt, sizeof(struct
>ioctl_iocb_entry_fx00));
>  2255  
>  2256  ql_dump_buffer(ql_dbg_user + ql_dbg_verbose,
>  2257  sp->fcport->vha, 0x5074,
>  2258  (uint8_t *)fw_sts_ptr, sizeof(struct
>qla_mt_iocb_rsp_fx00));
>  2259  
>  2260  res = bsg_job->reply->result = DID_OK << 16;
>  2261  bsg_job->reply->reply_payload_rcv_len =
>  2262  bsg_job->reply_payload.payload_len;
>  2263  }
>  2264  sp->done(vha, sp, res);
>  ^^^
>Uninitialized.
>
>  2265  }
>
>regards,
>dan carpenter

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 3/4] qla2xxx: Add DebugFS node for target sess list.

2016-02-05 Thread Giridhar Malavali


On 2/5/16, 12:00 PM, "Bart Van Assche"  wrote:

>On 02/05/2016 09:26 AM, Himanshu Madhani wrote:
>> On 2/4/16, 10:16 AM, "Bart Van Assche" 
>>wrote:
>>> On 02/04/2016 08:45 AM, Himanshu Madhani wrote:
 From: Quinn Tran 

#cat  /sys/kernel/debug/qla2xxx/qla2xxx_31/tgt_sess
qla2xxx_31
Port ID   Port NameHandle
ff:fc:01  21:fd:00:05:33:c7:ec:16  0
01:0e:00  21:00:00:24:ff:7b:8a:e4  1
01:0f:00  21:00:00:24:ff:7b:8a:e5  2

>>>
>>> Hello Quinn and Himanshu,
>>>
>>> The above information is not only useful to people who are debugging
>>>the
>>> QLogic target driver but also to end users who want to check which
>>> initiator ports have already logged in to a target port. Hence my
>>> proposal to move this information from debugfs to another location
>>>(e.g.
>>> configfs or sysfs). Users of other target drivers are probably also
>>> interested in seeing which sessions are active. How about adding the
>>> functionality for reporting session information per target port in the
>>> LIO core in such a way that some attributes are available for all
>>>target
>>> drivers (e.g. initiator port name, SCSI command statistics) and such
>>> that target drivers can define additional attributes to exported for
>>> each session (e.g. port ID, handle, ...).
>>
>> We had initially implemented this as a sysfs hook, but knowing that
>>sysfs
>> is not encouraged, we decided to put this information via debugfs.
>>Would it
>> make more sense if we send a sysfs patch?
>
>Hello Himanshu,
>
>Let's try to reach agreement about the approach first before starting to
>rework this patch.
>
>Five years ago I explained in a message that I posted on the linux-scsi
>mailing list why LIO should use sysfs to export information that changes
>dynamically (see also http://thread.gmane.org/gmane.linux.scsi/65615/).
>I think this patch shows that there is a real need to have detailed
>session information from LIO target drivers in user space. We need one
>directory per session instead of exporting all session information
>through a single file. sysfs is the right filesystem to export such
>information because configfs directories should be created by the user
>and not from inside the kernel. If no agreement can be reached about
>this over e-mail my proposal is to discuss this further at the 2016
>LSF/MM summit.

Bart, 

I see a lot of target customers requesting such information, and having
something that can be created inside kernel space will be helpful. Let us
discuss further at the 2016 LSF/MM summit.

-- Giri



>
>Thanks,
>
>Bart.


[Yahoo-eng-team] [Bug 1517643] [NEW] VMware: metadata definition for QoS resource allocation

2015-11-18 Thread Giridhar Jayavelu
Public bug reported:

Nova VCDriver supports QoS resource allocation for memory, disk and vif in 
addition to cpu [1]. 
Resource allocation can be expressed using shares, limit, and reservation. 
Administrators can configure these properties using flavor extra specs or image 
metadata.

This lite-spec is for adding metadata definition for both flavor and
images in Glance.

The list of metadata properties to be added:
disk_io_reservation, disk_io_limit, disk_io_shares_share,
disk_io_shares_level, vif_reservation, vif_limit, vif_shares_share,
vif_shares_level, memory_reservation, memory_limit, memory_shares_share,
memory_shares_level, cpu_reservation, cpu_limit, cpu_shares_share,
cpu_shares_level

[1] http://specs.openstack.org/openstack/nova-
specs/specs/mitaka/approved/vmware-limits.html
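
As an illustration of how such properties surface to operators (the flavor
name and values here are hypothetical, and this assumes the quota: extra-spec
namespace described in [1]):

nova flavor-key m1.small set quota:cpu_reservation=1000 quota:cpu_shares_level=normal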

** Affects: glance
 Importance: Undecided
 Assignee: Giridhar Jayavelu (gjayavelu)
 Status: New


** Tags: spec-lite wishlist

** Changed in: glance
 Assignee: (unassigned) => Giridhar Jayavelu (gjayavelu)

** Summary changed:

- VMware: metadata definition for VMware QoS 
+ VMware: metadata definition for QoS resource allocation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1517643

Title:
  VMware: metadata definition for QoS resource allocation

Status in Glance:
  New

Bug description:
  Nova VCDriver supports QoS resource allocation for memory, disk and vif in 
addition to cpu [1]. 
  Resource allocation can be expressed using shares, limit, and reservation. 
Administrators can configure these properties using flavor extra specs or image 
metadata.

  This lite-spec is for adding metadata definition for both flavor and
  images in Glance.

  The list of metadata properties to be added:
  disk_io_reservation, disk_io_limit, disk_io_shares_share,
  disk_io_shares_level, vif_reservation, vif_limit, vif_shares_share,
  vif_shares_level, memory_reservation, memory_limit, memory_shares_share,
  memory_shares_level, cpu_reservation, cpu_limit, cpu_shares_share,
  cpu_shares_level

  [1] http://specs.openstack.org/openstack/nova-
  specs/specs/mitaka/approved/vmware-limits.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1517643/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Qgis-user] QGIS supported server OS and System requirement

2015-11-03 Thread Sushim Giridhar Bhiwapurkar
Hi Everyone,

We are setting up a server where we would like to install QGIS 2.8. We would 
like to know about the server operating system that QGIS supports and the 
system requirements (server hardware) needed to manage 10 concurrent users.

Your inputs will be helpful for me to finalize the server and OS. We are open 
to using any OS like Windows / Linux / Mac, etc.

Please also suggest some links if there is any information available online 
related to my query.


Regards,
SUSHIM





Disclaimer:  This message and the information contained herein is proprietary 
and confidential and subject to the Tech Mahindra policy statement, you may 
review the policy at http://www.techmahindra.com/Disclaimer.html externally 
http://tim.techmahindra.com/tim/disclaimer.html internally within TechMahindra.


___
Qgis-user mailing list
Qgis-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/qgis-user

API to know MTU size

2015-11-03 Thread AR, Giridhar
Do we have an API to know the MTU size of an interface of both client and 
server?

Thanks
Giridhar

--
___
Net-snmp-coders mailing list
Net-snmp-coders@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/net-snmp-coders


Re: SPARK SQL Error

2015-10-15 Thread Giridhar Maddukuri
Hi Dilip,

I also tried this option: spark-submit --master yarn --class
org.spark.apache.CsvDataSource /home/cloudera/Desktop/TestMain.jar --files
hdfs://quickstart.cloudera:8020/people_csv, and I am getting a similar error:

Exception in thread "main" org.apache.spark.SparkException: Could not parse
Master URL: '--files'
 at
org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2244)
 at org.apache.spark.SparkContext.(SparkContext.scala:361)
 at org.apache.spark.SparkContext.(SparkContext.scala:154)
 at org.spark.apache.CsvDataSource$.main(CsvDataSource.scala:10)
 at org.spark.apache.CsvDataSource.main(CsvDataSource.scala)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Thanks & Regards,
Giri.


On Thu, Oct 15, 2015 at 5:43 PM, Dilip Biswal  wrote:

> Hi Giri,
>
> You are perhaps  missing the "--files" option before the supplied hdfs
> file name ?
>
> spark-submit --master yarn --class org.spark.apache.CsvDataSource
> /home/cloudera/Desktop/TestMain.jar  *--files*
> hdfs://quickstart.cloudera:8020/people_csv
>
> Please refer to Ritchard's comments on why the --files option may be
> redundant in
> your case.
>
> Regards,
> Dilip Biswal
> Tel: 408-463-4980
> dbis...@us.ibm.com
>
>
>
> From:Giri 
> To:user@spark.apache.org
> Date:10/15/2015 02:44 AM
> Subject:Re: SPARK SQL Error
> --
>
>
>
> Hi Ritchard,
>
> Thank you so much  again for your input.This time I ran the command in the
> below way
> spark-submit --master yarn --class org.spark.apache.CsvDataSource
> /home/cloudera/Desktop/TestMain.jar
> hdfs://quickstart.cloudera:8020/people_csv
> But I am facing the new error "Could not parse Master URL:
> 'hdfs://quickstart.cloudera:8020/people_csv'"
> file path is correct
>
> hadoop fs -ls hdfs://quickstart.cloudera:8020/people_csv
> -rw-r--r--   1 cloudera supergroup 29 2015-10-10 00:02
> hdfs://quickstart.cloudera:8020/people_csv
>
> Can you help me to fix this new error
>
> 15/10/15 02:24:39 INFO spark.SparkContext: Added JAR
> file:/home/cloudera/Desktop/TestMain.jar at
> http://10.0.2.15:40084/jars/TestMain.jarwith timestamp 1444901079484
> Exception in thread "main" org.apache.spark.SparkException: Could not parse
> Master URL: 'hdfs://quickstart.cloudera:8020/people_csv'
> at
>
> org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2244)
> at
> org.apache.spark.SparkContext.(SparkContext.scala:361)
> at
> org.apache.spark.SparkContext.(SparkContext.scala:154)
> at
> org.spark.apache.CsvDataSource$.main(CsvDataSource.scala:10)
> at org.spark.apache.CsvDataSource.main(CsvDataSource.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
>
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
> at
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
> at
> org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
> at
> org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
> at
> org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
>
> Thanks & Regards,
> Giri.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/SPARK-SQL-Error-tp25050p25075.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
>
>
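
Both stack traces above fail inside SparkContext's constructor at
CsvDataSource.scala line 10, and in each case the string Spark cannot parse
as a master URL ('--files', then the HDFS path) is simply the first argument
after the jar. That suggests the application passes its first command-line
argument to SparkContext as the master. A minimal sketch of the usual
pattern, shown in PySpark for illustration (the real application is Scala;
the file name below is assumed):

# csv_data_source.py -- illustrative only, not the original application
import sys
from pyspark import SparkConf, SparkContext

# Let spark-submit supply the master (e.g. --master yarn); do not
# hard-code it or take it from argv.
conf = SparkConf().setAppName("CsvDataSource")
sc = SparkContext(conf=conf)

# argv[1] is the data path, e.g. hdfs://quickstart.cloudera:8020/people_csv
print(sc.textFile(sys.argv[1]).count())

Invoked as: spark-submit --master yarn csv_data_source.py
hdfs://quickstart.cloudera:8020/people_csv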


Re: [PATCH for-next] scsi: qla2xxx: Hide unavailable firmware

2015-10-08 Thread Giridhar Malavali


On 10/8/15 4:10 PM, "Julian Calaby"  wrote:

>Hi James,
>
>On Fri, Oct 9, 2015 at 3:17 AM, James Bottomley
> wrote:
>> On Thu, 2015-10-08 at 15:46 +, Himanshu Madhani wrote:
>>>
>>> On 10/7/15, 4:41 PM, "Julian Calaby"  wrote:
>>>
>>> >Hi Xose,
>>> >
>>> >On Thu, Oct 8, 2015 at 2:13 AM, Xose Vazquez Perez
>>> > wrote:
>>> >> On Fri, May 22, 2015 at 10:00 AM, Julian Calaby
>>> >> wrote:
>>> >>
>>> >>> Some qla2xxx devices have firmware stored in flash on the device,
>>> >>> however for debugging and triage purposes, Qlogic staff like to
>>> >>> be able to load known-good versions of these firmwares through
>>> >>> request_firmware().
>>> >>>
>>> >>> These firmware files were never distributed and are unlikely to
>>>ever
>>> >>> be released publically, so to hide these missing firmware files
>>>from
>>> >>> scripts which check such things, (e.g. Debian's initramfs-tools)
>>>put
>>> >>> them behind a new EXPERT Kconfig option.
>>> >>
>>> >>
>>> >> What is state of this patch ?
>>> >
>>> >Apparently nobody cared, either from qLogic or linux-scsi.
>>> >
>>> >I'm not overly fussed whether it goes in or not, it was more a point
>>> >in the discussion that proceeded it, however it does solve the
>>> >problems in the discussion that preceded it.
>>>
>>> This patch Looks good.
>>>
>>> Acked-By: Himanshu Madhani 
>>
>> Actually, this isn't helpful.  You now add another option over which the
>> distributions have to make a choice.  Is this interface necessary and
>> useful?  If yes, then it should be compiled in and if not, just remove
>> it ... don't do death by 1000 Kconfig options.
>
>The original issue here was that the qla2xxx driver specifies two
>firmware files which are not publicly available.
>
>They aren't publicly available because the hardware that uses them
>always stores it's firmware in flash.
>
>The firmware files are specified in the driver because qLogic support
>likes to be able to load a known-good firmware file from disk for
>debugging faulty hardware.
>
>Some distributions (Debian is a good example) will complain vocally
>about missing firmware files when installing kernels or producing
>initramfs images.
>
>This patch was intended as an ugly compromise between all of this.

Julian, James, 

We will send a patch to remove such references in our driver.

-- Giri


>
>Thanks,
>
>-- 
>Julian Calaby
>
>Email: julian.cal...@gmail.com
>Profile: http://www.google.com/profiles/julian.calaby/


RE: guidelines to increase the MTU size

2015-10-01 Thread AR, Giridhar
Hi Lee ,

Sorry I was occupied with something.
I do not see the following messages in Wireshark, which you pointed out:


Fragmented IP protocol (proto=UDP 17, off=0, ID=0251) [Reassembled in ...
Fragmented IP protocol (proto=UDP 17, off=1480, ID=0251) [Reassembled in ...
Fragmented IP protocol (proto=UDP 17, off=2960, ID=0251) [Reassembled in ...
Fragmented IP protocol (proto=UDP 17, off=4440, ID=0251) [Reassembled in ...
Fragmented IP protocol (proto=UDP 17, off=5920, ID=0251) [Reassembled in ...


Any update on the question: what is the relation between MTU and
max_repetitions?
Is there an API to verify the destination interface MTU and the source
interface MTU?


Thanks
Giridhar
-Original Message-
From: Lee [mailto:ler...@gmail.com] 
Sent: Friday, September 18, 2015 5:47 PM
To: Soumit
Cc: net-snmp-coders@lists.sourceforge.net; AR, Giridhar
Subject: Re: guidelines to increase the MTU size

On 9/17/15, Soumit <soumit.s...@gmail.com> wrote:
> Have you got packet captures for SNMP requests. Even if jumbo frames 
> enabled does the SNMP datagrams are really that big (>1500)

The problem with enabling jumbo frames (ie. MTU > 1500) is that _every_ device 
on the subnet, including routers & switches, has to be configured with the same 
MTU size or else you will drop some traffic.

On the other hand, as long as you don't have stupid firewall rules like 'drop 
non-initial fragments', large packets work just fine.  If you can get access to 
a Cisco switch, see for yourself:

   on the switch:
conf term
snmp-server community foo ro
snmp-server packetsize 9000
end

   on the workstation, start capturing packets & do:
snmpbulkwalk -c foo -Cr100 ipAddressOfSwitch .1.3.6

If you're using wireshark you'll see lots of instances that look like

Fragmented IP protocol (proto=UDP 17, off=0, ID=0251) [Reassembled in ...
Fragmented IP protocol (proto=UDP 17, off=1480, ID=0251) [Reassembled in ...
Fragmented IP protocol (proto=UDP 17, off=2960, ID=0251) [Reassembled in ...
Fragmented IP protocol (proto=UDP 17, off=4440, ID=0251) [Reassembled in ...
Fragmented IP protocol (proto=UDP 17, off=5920, ID=0251) [Reassembled in ...
get-response

Regards,
Lee


> On 16 Sep 2015 20:38, "Lee" <ler...@gmail.com> wrote:
>
>> On 9/16/15, AR, Giridhar <giridhar.raja...@netapp.com> wrote:
>> > Hi Lee,
>> >
>> > Thanks for the response.
>> > No I didn't change the MTU on every router and switches between 
>> > client
>> and
>> > server.
>> >
>> >>What is the relation between MTU and max_repetitions.?
>> >> If we change the MTU to jumbo frame like 9000, what should be 
>> >>these  macros (SNMP_MAX_MSG_SIZE  , SNMP_MAX_LEN ) should be set ?
>> >> Is there any other changes in need to perform?
>> >
>> > Do you have any inputs on the above questions?
>>
>> Don't change the MTU.  If you do, and don't change the MTU on your 
>> routers & switches, anything you send > 1500 bytes will be dropped.
>>
>> I'm not exactly sure what you're trying to accomplish, but if I was 
>> trying to change how much info could be returned I'd start with 
>> changing SNMP_MAX_MSG_SIZE and see if that worked.
>>
>> Regards,
>> Lee
>> >
>> > Thanks
>> > Giridhar
>> >
>> > -Original Message-
>> > From: Lee [mailto:ler...@gmail.com]
>> > Sent: Wednesday, September 16, 2015 5:13 PM
>> > To: AR, Giridhar
>> > Cc: net-snmp-coders@lists.sourceforge.net
>> > Subject: Re: guidelines to increase the MTU size
>> >
>> > On 9/16/15, AR, Giridhar <giridhar.raja...@netapp.com> wrote:
>> >> Hi
>> >>
>> >> Need guidelines to increase the MTU size (example from 1500 to 
>> >> 9000) Our application is using snmpv2. When the MTU size (9000) is 
>> >> changed on interface of client and server, Snmp queries are timed out.
>> >
>> > Did you also change the MTU on every router & switch between the 
>> > client
>> and
>> > server to be 9000?  If no, that's probably why queries are timing 
>> > out
>> >
>> > Regards,
>> > Lee
>> >
>> >
>> >> We collected the packet traces on both client and server.  Client 
>> >> is receiving the response from server, but client is dropping the packet.
>> >> Following are the  test cases tried
>> >>
>> >> case   client MTU  server MTU
>> >> result
>> >>
>> >> 1  1500   1500
>> >>

[Yahoo-eng-team] [Bug 1498645] [NEW] [VMware] Creating snapshot stuck at "Image uploading" state

2015-09-22 Thread Giridhar Jayavelu
ERROR glance.api.v1.upload_utils BadStatusLine: ''
2015-09-22 13:23:33.233 21257 ERROR glance.api.v1.upload_utils


git commits used

devstack
-
commit c00e39901be810deb4044904734cc68af42aad8e
Merge: 7d4485c be65c6f
Author: Jenkins <jenk...@review.openstack.org>
Date:   Tue Sep 15 03:17:15 2015 +

Merge "Fix typos in stackrc and unstack.sh"

glance
--
commit 925794c63e38263198345f92124f370211a9d39e
Merge: 84437b9 5c3a3bd
Author: Jenkins <jenk...@review.openstack.org>
Date:   Tue Sep 15 02:00:35 2015 +

Merge "Make task_time_to_live work"

nova
---
commit fe1d118c0c37cd94fe05271946ded8c2b11f9482
Merge: b14bc0a 812d75e
Author: Jenkins <jenk...@review.openstack.org>
Date:   Wed Sep 16 05:13:44 2015 +

Merge "Fix cells use of legacy bdms during local instance delete
operations"

** Affects: glance
 Importance: Undecided
 Assignee: Giridhar Jayavelu (gjayavelu)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1498645

Title:
  [VMware] Creating snapshot stuck at "Image uploading" state

Status in Glance:
  New

Bug description:
  - Started devstack from master branch
  - Created an instance which is backed by iscsi datastore on vsphere
  Creating snapshot got stuck at "Image Uploading" state.

  Noticed "Operation timed out" error while executing "Export OVF
  template" task.

  vpxd.log
  
  2015-09-22T20:25:49.681Z info vpxd[7F5A52B6A700] [Originator@6876 sub=vpxLro 
opID=257026c4-1697-4696-b644-8e8533a3ef91-12119-ngc-8] [VpxLRO] -- BEGIN 
task-internal-209954 -- datastore-60 -- vim.Datastore.GetBrowser -- 
52a48e8a-5d02-e8d3-f168-22a925987f0b(52100a84-20a1-b773-bec2-1e9f6ab7c5a4)
  2015-09-22T20:25:49.682Z info vpxd[7F5A52B6A700] [Originator@6876 sub=vpxLro 
opID=257026c4-1697-4696-b644-8e8533a3ef91-12119-ngc-8] [VpxLRO] -- FINISH 
task-internal-209954
  2015-09-22T20:25:49.686Z info vpxd[7F5A52C6C700] [Originator@6876 sub=vpxLro 
opID=257026c4-1697-4696-b644-8e8533a3ef91-12120-ngc-b4] [VpxLRO] -- BEGIN 
session[52a48e8a-5d02-e8d3-f168-22a925987f0b]52ff9143-1a53-1738-6c4c-4226b0667c04
 -- datastoreBrowser-datastore-60 -- vim.host.DatastoreBrowser.search -- 
52a48e8a-5d02-e8d3-f168-22a925987f0b(52100a84-20a1-b773-bec2-1e9f6ab7c5a4)
  2015-09-22T20:25:49.806Z info vpxd[7F5A52C6C700] [Originator@6876 sub=vpxLro 
opID=257026c4-1697-4696-b644-8e8533a3ef91-12120-ngc-b4] [VpxLRO] -- FINISH 
session[52a48e8a-5d02-e8d3-f168-22a925987f0b]52ff9143-1a53-1738-6c4c-4226b0667c04
  2015-09-22T20:25:50.643Z info vpxd[7F5A5AA03700] [Originator@6876 
sub=VAppExport opID=5d311e85-e] [ExportTaskMo] Timed out
  2015-09-22T20:25:50.643Z info vpxd[7F5A5AA03700] [Originator@6876 
sub=VAppExport opID=5d311e85-e] [ExportTaskMo] Task timed out
  2015-09-22T20:25:50.643Z info vpxd[7F5A5AA03700] [Originator@6876 
sub=MoHttpNfcLease opID=5d311e85-e] [HttpNfcLeaseMo] Releasing HTTP-NFC ticket
  2015-09-22T20:25:50.649Z warning vpxd[7F5A5AA03700] [Originator@6876 
sub=VpxProfiler opID=5d311e85-e] VpxLro::LroMain [TotalTime] took 300023 ms
  2015-09-22T20:25:50.649Z info vpxd[7F5A5AA03700] [Originator@6876 sub=vpxLro 
opID=5d311e85-e] [VpxLRO] -- FINISH task-872
  2015-09-22T20:25:50.649Z info vpxd[7F5A5AA03700] [Originator@6876 sub=Default 
opID=5d311e85-e] [VpxLRO] -- ERROR task-872 -- vm-176 -- 
VirtualMachine.ExportVmLRO: vim.fault.Timedout:
  --> Result:
  --> (vim.fault.Timedout) {
  -->faultCause = (vmodl.MethodFault) null,
  -->msg = ""
  --> }
  --> Args:
  -->

  
  g-api.log
  -
  2015-09-22 13:23:33.232 21257 ERROR glance_store._drivers.vmware_datastore 
[req-d91e76c5-746d-4b0f-8942-cd5adf866b91 86ba66edc4b24e639c37e4ce992d9384 
3d5d5d98dde249f08298210cb2e45866 - - -] Communication error sending http PUT 
requestto the url 
/folder/openstack_glance/89c4faf4-79d7-4593-9b6d-4ea4827e3f7c%3FdcPath%3Ddatacenter-1%26dsName%3Dshared.
  Got IOError [Errno 32] Broken pipe
  2015-09-22 13:23:33.233 21257 ERROR glance.api.v1.upload_utils 
[req-d91e76c5-746d-4b0f-8942-cd5adf866b91 86ba66edc4b24e639c37e4ce992d9384 
3d5d5d98dde249f08298210cb2e45866 - - -] Failed to upload image 
89c4faf4-79d7-4593-9b6d-4ea4827e3f7c
  2015-09-22 13:23:33.233 21257 ERROR glance.api.v1.upload_utils Traceback 
(most recent call last):
  2015-09-22 13:23:33.233 21257 ERROR glance.api.v1.upload_utils   File 
"/opt/stack/glance/glance/api/v1/upload_utils.py", line 113, in 
upload_data_to_store
  2015-09-22 13:23:33.233 21257 ERROR glance.api.v1.upload_utils 
context=req.context)
  2015-09-22 13:23:33.233 21257 ERROR glance.api.v1.upload_utils   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/backend.py", line 340, in 
store_add_to_backend
  2015-09-22 13:23:33.233 21257 ERROR glance.api.v1.upload_ut

Re: [ceph-users] radosgw + civetweb latency issue on Hammer

2015-09-22 Thread Giridhar Yasa
I encountered the same issue in my setup (with an AWS SDK client) and
on further investigation found that 'rgw print continue' was set to
false for a civetweb driven rgw.

Updated http://tracker.ceph.com/issues/12640

Giridhar
--
Giridhar Yasa | Flipkart Engineering | http://www.flipkart.com/


On Thu, Aug 6, 2015 at 5:31 AM, Srikanth Madugundi
<srikanth.madugu...@gmail.com> wrote:
> Hi,
>
> After upgrading to Hammer and moving from apache to civetweb. We
> started seeing high PUT latency in the order of 2 sec for every PUT
> request. The GET request lo
>
> Attaching the radosgw logs for a single request. The ceph.conf has the
> following configuration for civetweb.
>
> [client.radosgw.gateway]
> rgw frontends = civetweb port=5632
>
>
> Further investigation revealed the call to get_data() at
> https://github.com/ceph/ceph/blob/hammer/src/rgw/rgw_op.cc#L1786 is
> taking 2 sec to respond. The cluster is running Hammer 94.2 release
>
> Did any one face this issue before? Is there some configuration I am missing?
>
> Regards
> Srikanth
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



[Yahoo-eng-team] [Bug 1498233] [NEW] No error thrown while importing an image which does not have read permission

2015-09-21 Thread Giridhar Jayavelu
Public bug reported:

Started devstack from master branch on git.

I tried to create an image on horizon by  choosing "Image location" option for 
Image source and passed an URL to ova file.
"Copy data" option was selected. There were no errors thrown after submitting 
the request.
Also, after refreshing the page, I could not find the new image as well.

Found the following exception on g-api.log. It turned out to be wrong
file permission for the ova being imported. It did not have read
permission (set to 600).

2015-09-21 17:23:34.191 18326 DEBUG glance.common.client 
[req-6edbf424-2bc9-472c-8cef-e9d12762a55e 86ba66edc4b24e639c37e4ce992d9384 
3d5d5d98dde249f08298210cb2e45866 - - -] Constructed URL: 
http://10.161.71.96:9191/images/detail?sort_key=created_at_public=None=21_dir=desc
 _construct_url /opt/stack/glance/glance/common/client.py:402
2015-09-21 17:23:34.216 18327 DEBUG glance.registry.client.v1.client [-] 
Registry request PUT /images/d13562fb-ffd7-40e9-9910-bb99fe751332 HTTP 200 
request id req-dfa0c604-8cd9-4f3f-9837-4c03192bdb9a do_request 
/opt/stack/glance/glance/registry/client/v1/client.py:128
2015-09-21 17:23:34.219 18326 DEBUG glance.registry.client.v1.client 
[req-6edbf424-2bc9-472c-8cef-e9d12762a55e 86ba66edc4b24e639c37e4ce992d9384 
3d5d5d98dde249f08298210cb2e45866 - - -] Registry request GET /images/detail 
HTTP 200 request id req-6edbf424-2bc9-472c-8cef-e9d12762a55e do_request 
/opt/stack/glance/glance/registry/client/v1/client.py:128
2015-09-21 17:23:34.221 18326 INFO eventlet.wsgi.server 
[req-6edbf424-2bc9-472c-8cef-e9d12762a55e 86ba66edc4b24e639c37e4ce992d9384 
3d5d5d98dde249f08298210cb2e45866 - - -] 10.161.71.96 - - [21/Sep/2015 17:23:34] 
"GET 
/v1/images/detail?sort_key=created_at_dir=desc=21_public=None 
HTTP/1.1" 200 805 0.035794
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images [-] Copy from external 
source 'vsphere' failed for image: d13562fb-ffd7-40e9-9910-bb99fe751332
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images Traceback (most recent 
call last):
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images   File 
"/opt/stack/glance/glance/api/v1/images.py", line 619, in _upload
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images dest=store)
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images   File 
"/opt/stack/glance/glance/api/v1/images.py", line 471, in _get_from_store
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images image_data, 
image_size = src_store.get(loc, context=context)
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/capabilities.py", line 
226, in op_checker
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images return 
store_op_fun(store, *args, **kwargs)
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/_drivers/http.py", line 
130, in get
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images conn, resp, 
content_length = self._query(location, 'GET')
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/_drivers/http.py", line 
196, in _query
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images raise 
exceptions.BadStoreUri(message=reason)
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images BadStoreUri: HTTP URL 
/gjayavelu/ovf/dsl-4-4-10.ova returned a 403 status code.
2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images
2015-09-21 17:23:34.230 18327 INFO glance.api.v1.images [-] Uploaded data of 
image d13562fb-ffd7-40e9-9910-bb99fe751332 from request payload successfully.

It would be good to catch this exception and surface a clear error to the user.
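
A minimal sketch of the kind of handling meant here (illustrative only, not
glance's actual code; src_store.get() and BadStoreUri are taken from the
traceback above):

from glance_store import exceptions
import webob.exc

def get_from_store_checked(src_store, loc, context):
    try:
        return src_store.get(loc, context=context)
    except exceptions.BadStoreUri as e:
        # surface the remote 403 as a client-visible error instead of
        # letting the request appear to succeed with no image created
        raise webob.exc.HTTPBadRequest(explanation=str(e))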

Attached g-api.log

** Affects: glance
 Importance: Undecided
 Status: New

** Attachment added: "g-api.log"
   https://bugs.launchpad.net/bugs/1498233/+attachment/4470786/+files/g-api.log

** Description changed:

  Started devstack from master branch on git.
  
  I tried to create an image on horizon by  choosing "Image location" option 
for Image source and passed an URL to ova file.
- "Copy data" option was selected. There were no errors thrown after submitting 
the request. 
+ "Copy data" option was selected. There were no errors thrown after submitting 
the request.
  Also, after refreshing the page, I could not find the new image as well.
  
- 
- Found, the following exception on g-api.log. It turned out to be wrong file 
permission for the ova being imported. It did not have read permission (set to 
600).
+ Found the following exception on g-api.log. It turned out to be wrong
+ file permission for the ova being imported. It did not have read
+ permission (set to 600).
  
  2015-09-21 17:23:34.191 18326 DEBUG glance.common.client 
[req-6edbf424-2bc9-472c-8cef-e9d12762a55e 86ba66edc4b24e639c37e4ce992d9384 
3d5d5d98dde249f08298210cb2e45866 - - -] Constructed URL: 

guidelines to increase the MTU size

2015-09-16 Thread AR, Giridhar
Hi

We need guidelines for increasing the MTU size (for example, from 1500 to 9000).
Our application is using SNMPv2. When the MTU size (9000) is changed on the
interfaces of the client and server, SNMP queries time out.
We collected packet traces on both client and server. The client is receiving
the response from the server, but the client is dropping the packet. Following
are the test cases tried:

case   client MTU   server MTU   result
1      1500         1500         passed
2      1500         9000         failed
3      9000         1500         passed
4      9000         9000         failed


Looking at the client code, it uses the net-snmp code:



#define SNMP_MAX_MSG_SIZE  1472
#define SNMP_MAX_LEN 1500

By default, max_repetitions is set to 20. If we set it to 1, SNMP queries are
successful.

What is the relation between MTU and max_repetitions?
If we change the MTU to a jumbo-frame size like 9000, what should these macros
(SNMP_MAX_MSG_SIZE, SNMP_MAX_LEN) be set to?
Are there any other changes we need to perform?
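
A rough back-of-the-envelope sketch of the first question (the sizes below
are assumed averages for illustration; real varbind sizes depend on the OIDs
and values):

AVG_VARBIND = 70  # assumed average encoded varbind size, in bytes
OVERHEAD = 100    # assumed SNMP + UDP/IP header overhead, in bytes

for reps in (1, 20):
    size = OVERHEAD + reps * AVG_VARBIND
    print(reps, size, "fits" if size <= 1472 else "exceeds SNMP_MAX_MSG_SIZE")
# 1  -> 170 bytes, fits in one 1500-byte frame
# 20 -> 1500 bytes, exceeds SNMP_MAX_MSG_SIZE (1472)

This is consistent with max_repetitions=1 succeeding where 20 fails: the
larger response no longer fits in a single 1500-byte frame, and whether it
then arrives depends on every device along the path agreeing on the MTU.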


Thanks
Giridhar





--
Monitor Your Dynamic Infrastructure at Any Scale With Datadog!
Get real-time metrics from all of your servers, apps and tools
in one place.
SourceForge users - Click here to start your Free Trial of Datadog now!
http://pubads.g.doubleclick.net/gampad/clk?id=241902991=/4140___
Net-snmp-coders mailing list
Net-snmp-coders@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/net-snmp-coders


RE: guidelines to increase the MTU size

2015-09-16 Thread AR, Giridhar
Hi Lee,

Thanks for the response.
No, I didn't change the MTU on every router and switch between the client and
server.

>What is the relation between MTU and max_repetitions.?
> If we change the MTU to jumbo frame like 9000, what should be these 
> macros (SNMP_MAX_MSG_SIZE  , SNMP_MAX_LEN ) should be set ?
> Is there any other changes in need to perform?

Do you have any inputs on the above questions?

Thanks
Giridhar

-Original Message-
From: Lee [mailto:ler...@gmail.com] 
Sent: Wednesday, September 16, 2015 5:13 PM
To: AR, Giridhar
Cc: net-snmp-coders@lists.sourceforge.net
Subject: Re: guidelines to increase the MTU size

On 9/16/15, AR, Giridhar <giridhar.raja...@netapp.com> wrote:
> Hi
>
> Need guidelines to increase the MTU size (example from 1500 to 9000) 
> Our application is using snmpv2. When the MTU size (9000) is changed 
> on interface of client and server, Snmp queries are timed out.

Did you also change the MTU on every router & switch between the client and 
server to be 9000?  If no, that's probably why queries are timing out

Regards,
Lee


> We collected the packet traces on both client and server.  Client is 
> receiving the response from server, but client is dropping the packet.
> Following are the  test cases tried
>
> case   client MTU   server MTU   result
> 1      1500         1500         passed
> 2      1500         9000         failed
> 3      9000         1500         passed
> 4      9000         9000         failed
>
>
> Looked at the client code, uses the net-snmp-code,
>
>
>
> #define SNMP_MAX_MSG_SIZE  1472
> #define SNMP_MAX_LEN 1500
>
> And by default max_repetitions is set 20. If we set to 1, snmp queries 
> are successful.
>
> What is the relation between MTU and max_repetitions.?
> If we change the MTU to jumbo frame like 9000, what should be these 
> macros (SNMP_MAX_MSG_SIZE  , SNMP_MAX_LEN ) should be set ?
> Is there any other changes in need to perform?
>
>
> Thanks
> Giridhar
>
>
>
>
>
>
--
Monitor Your Dynamic Infrastructure at Any Scale With Datadog!
Get real-time metrics from all of your servers, apps and tools
in one place.
SourceForge users - Click here to start your Free Trial of Datadog now!
http://pubads.g.doubleclick.net/gampad/clk?id=241902991=/4140
___
Net-snmp-coders mailing list
Net-snmp-coders@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/net-snmp-coders


[Yahoo-eng-team] [Bug 1495834] [NEW] [VMware] Launching an instance with large image size crashes nova-compute

2015-09-15 Thread Giridhar Jayavelu
Public bug reported:

Created an image of size ~6.5GB.
Launching a new instance from this image crashes nova-compute.

I'm observing nova-compute node running out of memory.

This could probably be due to reading the entire file stream into memory
without using a proper chunk size.
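
A minimal sketch of the difference meant here (illustrative only, not nova's
actual transfer code):

CHUNK_SIZE = 64 * 1024  # an assumed chunk size, 64 KiB

def copy_stream(src, dst, chunk_size=CHUNK_SIZE):
    # stream the image in bounded chunks; src.read() with no size would
    # buffer the whole ~6.5 GB file in memory at once
    while True:
        data = src.read(chunk_size)
        if not data:
            break
        dst.write(data)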

This is from git source and not any distribution.

git log -1
commit 1cf97bd096112b8d2e0eb95fd2a636a53cbf0bcc
Merge: a54c0d6 17fe88a
Author: Jenkins 
Date:   Mon Sep 14 02:34:18 2015 +

Merge "Fix typo in lib/keystone"

nova image-show fa376c74-4058-492b-9081-f31522f640f6
+--+--+
| Property | Value|
+--+--+
| OS-EXT-IMG-SIZE:size | 6997009920   |
| created  | 2015-09-14T10:53:09Z |
| id   | fa376c74-4058-492b-9081-f31522f640f6 |
| minDisk  | 0|
| minRam   | 0|
| name | win2k12-01   |
| progress | 100  |
| status   | ACTIVE   |
| updated  | 2015-09-14T10:59:33Z |
+--+--+

Attached n-cpu.log

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "n-cpu log"
   https://bugs.launchpad.net/bugs/1495834/+attachment/4464738/+files/n-cpu.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495834

Title:
  [VMware] Launching an instance with large image size crashes nova-
  compute

Status in OpenStack Compute (nova):
  New

Bug description:
  Created an image of size ~6.5GB.
  Launching a new instance from this image crashes nova-compute.

  I'm observing nova-compute node running out of memory.

  This could probably be due to reading the entire file stream into memory
  without using a proper chunk size.

  This is from git source and not any distribution.

  git log -1
  commit 1cf97bd096112b8d2e0eb95fd2a636a53cbf0bcc
  Merge: a54c0d6 17fe88a
  Author: Jenkins 
  Date:   Mon Sep 14 02:34:18 2015 +

  Merge "Fix typo in lib/keystone"

  nova image-show fa376c74-4058-492b-9081-f31522f640f6
  +--+--+
  | Property | Value|
  +--+--+
  | OS-EXT-IMG-SIZE:size | 6997009920   |
  | created  | 2015-09-14T10:53:09Z |
  | id   | fa376c74-4058-492b-9081-f31522f640f6 |
  | minDisk  | 0|
  | minRam   | 0|
  | name | win2k12-01   |
  | progress | 100  |
  | status   | ACTIVE   |
  | updated  | 2015-09-14T10:59:33Z |
  +--+--+

  Attached n-cpu.log

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1495834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Puppet Users] Re: Unable to setup Environments in 2015.2

2015-09-09 Thread giridhar kazama
Thanks a lot. That really did the trick. I realized it very late; I wasn't
refreshing the classes, so I didn't get it immediately. Your help is very much
appreciated.

Now I am having a hassle while picking up the hiera data. It's working when
I am using a static directory structure in the datadir, but it doesn't work if
I use variables.


static dir : /etc/puppetlabs/code/environments/production/hieradata
dynamic : /etc/puppetlabs/code/environments/%{environments}/hieradata

On Tuesday, September 8, 2015 at 10:00:52 AM UTC+5:30, giridhar kazama 
wrote:
>
> HI,
> I have set up a master with 400+ agents. My master has 2015.2. I have to 
> setup 4 environments and all are test environments. I've tried creating 
> directories named with the test environments at 
> /etc/puppetlabs/code/environments/test1 
> /etc/puppetlabs/code/environments/test2 
> /etc/puppetlabs/code/environments/test3 and 
> /etc/puppetlabs/code/environments/test4.
> I have also changed the environment field to test1 in one of the agents 
> but it doesn't seems to be happy. could anyone help me on this please.
>
>
> Regards
> Giridhar
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/29ed3d95-4b8b-4b1c-a4d7-925b13c50489%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet Users] Re: Unable to setup Environments in 2015.2

2015-09-09 Thread giridhar kazama
Sorry forgot to post the output:
[root@example1234 hieradata]# hiera example12345
nil
[root@example1234 hieradata]# hiera example12345 --debug
DEBUG: 2015-09-09 15:12:35 +0100: Hiera YAML backend starting
DEBUG: 2015-09-09 15:12:35 +0100: Looking up example12345 in YAML backend
DEBUG: 2015-09-09 15:12:35 +0100: Looking for data source defaults
DEBUG: 2015-09-09 15:12:35 +0100: Cannot find datafile 
/etc/puppetlabs/code/environments//hieradata/defaults.yaml, skipping
DEBUG: 2015-09-09 15:12:35 +0100: Looking for data source global
DEBUG: 2015-09-09 15:12:35 +0100: Cannot find datafile 
/etc/puppetlabs/code/environments//hieradata/global.yaml, skipping
nil
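
Note the double slash in environments//hieradata: the interpolation token is
expanding to nothing. The datadir in the earlier message uses %{environments},
while the built-in variable is environment (singular), and the hiera command
line only sets it when it is passed as a scope value. A sketch of the
corrected setup (paths taken from the messages above; the environment name is
an example):

:datadir: /etc/puppetlabs/code/environments/%{environment}/hieradata

[root@example1234 hieradata]# hiera example12345 environment=production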





On Tuesday, September 8, 2015 at 10:00:52 AM UTC+5:30, giridhar kazama 
wrote:
>
> HI,
> I have set up a master with 400+ agents. My master has 2015.2. I have to 
> setup 4 environments and all are test environments. I've tried creating 
> directories named with the test environments at 
> /etc/puppetlabs/code/environments/test1 
> /etc/puppetlabs/code/environments/test2 
> /etc/puppetlabs/code/environments/test3 and 
> /etc/puppetlabs/code/environments/test4.
> I have also changed the environment field to test1 on one of the agents, 
> but it doesn't seem to be happy. Could anyone help me with this, please?
>
>
> Regards
> Giridhar
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/09774dce-d797-43a2-bfcc-4b88744b5867%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet Users] Unable to setup Environments in 2015.2

2015-09-07 Thread giridhar kazama
HI,
I have set up a master with 400+ agents. My master has 2015.2. I have to 
setup 4 environments and all are test environments. I've tried creating 
directories named with the test environments at 
/etc/puppetlabs/code/environments/test1 
/etc/puppetlabs/code/environments/test2 
/etc/puppetlabs/code/environments/test3 and 
/etc/puppetlabs/code/environments/test4.
I have also changed the environment field to test1 on one of the agents, but
it doesn't seem to be happy. Could anyone help me with this, please?


Regards
Giridhar

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/5cf87f13-ba2f-45c3-9bdc-dd108a059e31%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [BangPypers] Online communication channel

2015-09-05 Thread Chetan Giridhar
Hi Krace, I'll be interested to help. Please loop me in.

On Sat, Sep 5, 2015 at 5:38 PM, Kiran Gangadharan  wrote:

> Hi,
>
> On Sat, Sep 5, 2015 at 4:15 PM, kracekumar ramaraju <
> kracethekingma...@gmail.com> wrote:
>
> > Slack for BangPypers [0] is set up. You need to visit [1] for the invite
> > and once you signup, you will receive email link for verification.
> >
> > We have three channels `general, random, meetup`. `general` channel is
> for
> > asking question about Python and other related questions.
> > `meetup` channel is for discussion about meetup and workshop/talk
> queries.
> >
> > We will come up guidelines and admins soon. Any volunteers for framing up
> > initial guidelines ?
> >
>
> I can help out perhaps. I need to be of some use as an admin :)
>
>
> >
> > [0]: https://bangpypers.slack.com
> > [1]: https://bangpypers.herokuapp.com/
> >
> >
> > On Mon, Aug 17, 2015 at 11:05 PM, Anand B Pillai <
> > anandpil...@letterboxes.org> wrote:
> >
> > > -BEGIN PGP SIGNED MESSAGE-
> > > Hash: SHA1
> > >
> > > On Sunday 16 August 2015 10:55 AM, kracekumar ramaraju wrote:
> > > >>
> > > >> [...] Um, is chat an absolute necessity for that? I mean, this
> > > >> mailing list already does that, right?
> > > >>
> > > >>
> > > > Well idea is not to replace ML. ML is has its place.
> > >
> > > We are discussing this on an ML which has been on this server since
> > > 2007, so I guess it has :)
> > >
> > >
> > > > Like any other project which has both IRC/ML. There are lot of ways
> > > > it will be useful, one example, Let's say there is an workshop in
> > > > meetup, tutor wants to help people who have problem with
> > > > installation, it can be simple to drop an email asking them to join
> > > > a channel and get answers for their issues.
> > >
> > > There is no end to digital communication. It is never a question of
> > > Why ? but more of Which ?
> > >
> > > And with lot of options to choose from now a days, no wonder it needs
> > > discussion on an ML :)
> > >
> > > > ___ BangPypers mailing
> > > > list BangPypers@python.org
> > > > https://mail.python.org/mailman/listinfo/bangpypers
> > > >
> > >
> > >
> > > - --
> > > Regards,
> > >
> > > - --Anand
> > >
> > > - 
> > > Software Architect/Consultant
> > > anandpil...@letterboxes.org
> > >
> > > Cell: +919880078014
> > > -BEGIN PGP SIGNATURE-
> > > Version: GnuPG v1
> > > Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
> > >
> > > iQEcBAEBAgAGBQJV0htOAAoJEHKU2n17CpvDVkUH/R9AqI6ltnGMYZNhJ35yS/9D
> > > rOxMmsQT392+TLr5/oSyqlsJ/vWTiGNY3F7jHaWM3EeyN14Ad5Hakx+1dITtxx12
> > > +ggMjpT1frv20gfCuPIQsUMqZ4fEpaSnUI65vElySb0mvYxwbRR0KeumQCOOMU25
> > > oNFf4mgEQ98b2V8mqHdGieZPA9W1dnnPsxqOqwhAeq8GHASZ8Lo6ob84qRImFCHZ
> > > hAPtowpWhACep8ddUVGyLUN4Baj5AMDxtBBgafafopKpAE06hi5VowL1Kc1MHeTt
> > > kSSOQFWIr/NzJDC2njlD+VFXuJ4yBKU5k04orV2wGh3PEtwtAQLNoYuH5zQZIEw=
> > > =J5LQ
> > > -END PGP SIGNATURE-
> > > ___
> > > BangPypers mailing list
> > > BangPypers@python.org
> > > https://mail.python.org/mailman/listinfo/bangpypers
> > >
> >
> >
> >
> > --
> >
> > *Thanks & Regardskracekumar"Talk is cheap, show me the code" -- Linus
> > Torvaldshttp://kracekumar.com *
> > ___
> > BangPypers mailing list
> > BangPypers@python.org
> > https://mail.python.org/mailman/listinfo/bangpypers
> >
>
>
>
> --
> Cheers,
> Kiran Gangadharan
> http://kirang.in
> ___
> BangPypers mailing list
> BangPypers@python.org
> https://mail.python.org/mailman/listinfo/bangpypers
>



-- 
Regards,
Chetan
___
BangPypers mailing list
BangPypers@python.org
https://mail.python.org/mailman/listinfo/bangpypers


[Qgis-user] The new own built exe is not working. Giving errors on launch while starting python

2015-07-09 Thread Sushim Giridhar Bhiwapurkar
Dears,



I am building a new own build of QGIS from source (Version 2.4.0).

The build was successful but while running the new exe of qgis (qgis.exe) I am 
getting some errors (crash dump).



Problem Statement: The new own built exe is not working. Giving errors on 
launch while starting python



Operating System and version - Windows 7
QGIS version - 2.4.0 (Source Code from GitHub), installed as own build.

It broke at the launch while starting Python.



I have followed the steps mentioned in the link below,



http://htmlpreview.github.io/?https://raw.github.com/qgis/QGIS/master/doc/INSTALL.html



1. Downloaded the QGIS source code from GitHub.

2. Configured CMake and generated the Visual Studio solution.

3. The own build of the QGIS source code was successfully created in the
C:/Program Files/qgis2.4.0 folder.

4. Copied the entire folder to 'C:/OSGeo4W/apps' and tried to launch
qgis.bat.

5. Further, all the dependent DLLs etc. were added to the respective paths.
(Not sure if all the required DLLs are added.)

6. While running qgis.exe / qgis.bat (attached with this email) we are
getting the following errors:

[screenshot of the error dialog omitted]

I am not sure if I am missing any reference while configuring in CMake,
while building the new exe, or while launching the new qgis.exe.

I have tried the same with the 2.6 and 2.8.2 versions of the QGIS source, but no luck.





Help from QGIS experts will be appreciated.



Thank you in advance.





Regards,

SUSHIM




Disclaimer:  This message and the information contained herein is proprietary 
and confidential and subject to the Tech Mahindra policy statement, you may 
review the policy at http://www.techmahindra.com/Disclaimer.html externally 
http://tim.techmahindra.com/tim/disclaimer.html internally within TechMahindra.


___
Qgis-user mailing list
Qgis-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/qgis-user

Need help in Netscape Directory SDK

2015-04-07 Thread Giridhar reddy
Hi Friends,

I need help in downloading the latest netscape LDAP libs for Java.

Could someone give me the download link?

Thank You,
Giridhar
___
dev-tech-ldap mailing list
dev-tech-ldap@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-ldap


[AngularJS] How to call the AngularJs factory Only once

2015-02-17 Thread Giridhar Bhageshpur
Hi all,


 I was working on a prototype in AngularJS, and the code is something like
the below:

.controller('HomePageController',['UserService','socket',function(UserService,socket)
 {
this.userservice = UserService;
socket.on('messagetoyou',function(data){
console.log('obtained data for only me yaaay: ',data);
});
 }])

*Controller.js*

 .factory('socket',function($rootScope){
var socket;
return{
getsocket:function()
{
  return socket;
},
createsocket:function()
{
  socket = io.connect('http://localhost:3000');
},
on:function(eventName,data){
socket.on(eventName,data);
},
emit:function(eventName,data){
socket.emit(eventName,data);
}
};

})

*Service.js*

My query is: every time I reload my page, the controller gets initialized
and my factory also gets initialized. Is there a way to create the factory
only once?

Thanks,
Giri 

-- 
You received this message because you are subscribed to the Google Groups 
AngularJS group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to angular+unsubscr...@googlegroups.com.
To post to this group, send email to angular@googlegroups.com.
Visit this group at http://groups.google.com/group/angular.
For more options, visit https://groups.google.com/d/optout.


Re: Delayed FC rport removal causes errors on other device

2014-12-18 Thread Giridhar Malavali
Tony, 

We will get back to you after looking at the logs.

-- Giri

On 12/18/14 8:37 AM, Tony Battersby to...@cybernetics.com wrote:

On 12/17/2014 06:17 PM, Giridhar Malavali wrote:
 Tony, 

 We will look into this further and get back to you.

 Do you have driver logs with extended error logging for this failure. If
 not, can you please capture one and send it across.



I have attached two logs - one log from the initiator during the
initiator test that I described, and one log from the target during the
target test that I described.  Please note that these are logs from two
different test procedures, so you cannot correlate events from the
initiator log with events from the target log.

Thanks,
Tony Battersby
Cybernetics




Re: Delayed FC rport removal causes errors on other device

2014-12-17 Thread Giridhar Malavali
Tony, 

We will look into this further and get back to you.

Do you have driver logs with extended error logging for this failure. If
not, can you please capture one and send it across.

Thanks,
Giridhar

On 12/17/14 2:41 PM, Tony Battersby to...@cybernetics.com wrote:

Initiator-mode problem summary:

FC initiator HBA cabled directly (no FC switch) to FC target device #1.
Unplug cable from FC target device #1 and quickly plug it into FC target
device #2, keeping the other end of the cable plugged into the same
initiator HBA port.  The new device shows up immediately; begin sending
commands to it.  The old device stays visible for 30 - 60 seconds after
the cable was moved, and then it disappears with the message
"rport-7:0-0: blocked FC remote port time out: removing rport".  When
the old device disappears, commands outstanding to the new device are
aborted or lock up.


Initiator-mode problem details:

vanilla kernel version 3.18.1

I have three types of FC HBAs:
QLogic QLE2672 16Gb FC HBA using qla2xxx
QLogic QLE2562  8Gb FC HBA using qla2xxx
LSI7204EP   4Gb FC HBA using mptfc

I have seen the problem with both qla2xxx and mptfc.  With qla2xxx,
commands active during the old rport removal are aborted with host
status 0x0E, but then it recovers and additional commands work fine.
With mptfc, active commands lock up.

---

The problem I have described above happens using just the initiator-mode
drivers in the mainline kernel, but my real interest is a related
problem that happens with the QLogic target-mode drivers from
git://git.qlogic.com/scst-qla2xxx.git.  I will describe that problem
with more details below.

---

Target-mode problem summary:

Two separate PCs, each with a QLogic QLE2672 16Gb FC HBA (firmware
7.04.01).
One PC uses the FC HBA in initiator mode; the other PC uses the FC HBA
in target mode.
The target-mode PC presents a disk drive to the initiator-mode PC over FC.
The FC HBAs are directly connected with a FC cable; no FC switch involved.

With the cable plugged in, the initiator PC sees the target-mode FC disk
at /dev/sg2.  When I unplug the cable from one port on the initiator and
quickly plug it into the other port on the initiator, a new disk shows
up for the new path at /dev/sg3.  The old disk at /dev/sg2 stays around
for about 30 seconds and then disappears.  That is all fine and
expected.  The problem is that when the old disk at /dev/sg2 disappears,
the new disk at /dev/sg3 stops responding to commands (but doesn't
disappear).

Note: in the initiator-mode test described at the beginning of this
message, the cable is moved from one target device to another target
device while keeping the same initiator port.  In contrast, in this
target-mode test, the cable is moved from one initiator port to another
initiator port while keeping the same target port.  That is what
distinguishes the two tests.  Whichever port remains the same (whether
initiator or target) is the port that causes the problem when the old
rport is removed.


Target-mode problem details:

vanilla kernel version 3.18.1

Before unplugging the cable, the target-mode PC creates a session with
portid 00:00:e8, loop_id 0, and the wwn of the first initiator port.
When the cable is unplugged, the session is scheduled for deletion.
When the cable is plugged into the other initiator port, the target-mode
PC creates another session with the same portid and loop_id as the first
session (which is now scheduled for deletion) but with a different wwn
corresponding to the second initiator port.

The new disk at /dev/sg3 stops responding to commands when the
target-mode PC calls isp_ops-fabric_logout() from
qla2x00_terminate_rport_io() in qla_attr.c.  If I disable that call to
fabric_logout() then the new disk at /dev/sg3 continues to work as
expected.  It looks like qla2x00_terminate_rport_io() is being called to
cleanup the old removed fcport, but ends up messing up the new
still-present fcport instead (I am guessing because the old and new
fcports share the same portid and loopid).  After fabric_logout() messes
up the new fcport, the target-mode HBA returns CTIO_PORT_LOGGED_OUT for
any new incoming commands from the initiator.

The problem can be avoided by waiting 30 seconds after unplugging the
cable before plugging it back in.  But that is not a good solution for
me since these HBAs are to be used in a product sold by my company, and
we want it to just work for our customers.

I am very familiar with SCSI but only a little bit with FC, so I am not
exactly sure of the correct fix.  So bear with me while I ask a few
questions: Is it correct for the old and new fcports to share the same
portid and loop_id?  When creating the new fcport, should qla2xxx have
detected that the lost-but-not-yet-dead fcport was using the same portid
and loop_id, and chosen to use different values for the new fcport
instead?  Or should it have invalidated the portid and/or loop_id of the
lost-but-not-yet-dead fcport somewhere (LOOP UP/LOOP DOWN/LIP
reset

Re: [PATCH RESEND 00/35] qla2xxx: Patches for 3.18 scsi misc branch.

2014-10-01 Thread Giridhar Malavali
Christoph, 

In order to synchronize with our internal testing and minimize regressions,
we are planning to fix those sparse errors in staggered phases.

We will update what can be fixed for 3.19.

Hope this approach works.

-- Giri

On 9/25/14 10:01 AM, Christoph Hellwig h...@lst.de wrote:

Thanks, I've applied both qla2xxx series to drivers-for-3.18.

I've noticed a lot of existing sparse endianess warnings
(from make C=2 CF=-D__CHECK_ENDIAN__ SUBDIRS=drivers/scsi/qla2xxx)
when compiling the driver, any chance you could try to fix those up
for 3.19?



Doubt Regarding QJM protocol - example 2.10.6 of Quorum-Journal Design document

2014-09-28 Thread Giridhar Addepalli
Hi All,

I am going through Quorum Journal Design document.

It is mentioned in Section 2.8 - In Accept Recovery RPC section

If the current on-disk log is missing, or a *different length* than the
proposed recovery, the JN downloads the log from the provided URI,
replacing any current copy of the log segment.


I can see that the code follows the above design:

Source :: Journal.java
 

  public synchronized void acceptRecovery(RequestInfo reqInfo,
      SegmentStateProto segment, URL fromUrl)
      throws IOException {
    ...
    if (currentSegment == null ||
        currentSegment.getEndTxId() != segment.getEndTxId()) {
      ...
    } else {
      LOG.info("Skipping download of log " +
          TextFormat.shortDebugString(segment) +
          ": already have up-to-date logs");
    }
    ...
  }


My question is: what if the on-disk log is present and is of the *same
length* as the proposed recovery?

If the JournalNode is skipping the download because the logs are of the same
length, then we could end up in a situation where finalized log segments
contain different data!

This could happen if we follow example 2.10.6.

As per that example, we write transactions (151-153) on JN1; then, when
recovery proceeds with only JN2 & JN3, let us assume that we write again
*different transactions* as (151-153). Then, after the crash, when we run
recovery, JN1 will skip downloading the correct segment from JN2/JN3, as it
thinks it already has the correct segment (per the code pasted above). This
will result in a situation where the finalized segment (edits_151-153) on
JN1 is different from the finalized segment (edits_151-153) on JN2/JN3.

Please let me know if I have gone wrong somewhere, or whether this situation
is taken care of.
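
A self-contained illustration of the concern (plain Java, not HDFS code):
two segments can have equal length while holding different bytes, so an
equal-length check alone cannot prove the on-disk copy matches the proposed
recovery:

    import java.util.Arrays;

    // Illustrative only: two same-length segments with divergent contents.
    public class SegmentCheck {
        public static void main(String[] args) {
            byte[] onDisk   = "txns 151-153 via JN1".getBytes(); // written before the crash
            byte[] proposed = "txns 151-153 via JN2".getBytes(); // written during recovery
            // Same length, so a length-based check would skip the download...
            System.out.println("same length:  " + (onDisk.length == proposed.length)); // true
            // ...even though the contents differ.
            System.out.println("same content: " + Arrays.equals(onDisk, proposed));    // false
        }
    }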

Thanks,
Giridhar.


Accessing my own object inside ev_handler

2014-09-16 Thread Giridhar Addepalli
Hi All,

I am a newbie to mongoose.
I am trying to have an embedded mongoose server in my C++ application.

My use case is to have an HTTP server which will look at the request and,
using an object of a class from my application (say 'myclass'), figure out
whether the request is valid and what file to send back to the client.
I am following the example at:
https://github.com/cesanta/mongoose/blob/master/examples/send_file/send_file.c

I want to be able to access an object of 'myclass' inside the ev_handler
function, so that I can call methods on this object of 'myclass'.

My doubt is how to set things up so that the object of 'myclass' is
accessible inside the ev_handler function.

Please help.
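
A sketch of one way to do this, based on the mongoose 5.x API that the
send_file example uses (mg_create_server, conn->server_param, MG_AUTH and
MG_REQUEST are from that era and may differ in other versions; the 'myclass'
logic below is a stand-in): pass a pointer to your object as the first
argument of mg_create_server(), then cast conn->server_param back inside
ev_handler:

    #include <string.h>
    #include "mongoose.h"   /* mongoose 5.x header, as used by send_file.c */

    /* Stand-in for the application's 'myclass'. */
    struct myclass {
        const char *hello_path;
    };

    /* Toy request validation/lookup: only "/hello" is valid in this sketch. */
    static const char *pick_file(struct myclass *self, const char *uri) {
        return strcmp(uri, "/hello") == 0 ? self->hello_path : NULL;
    }

    static int ev_handler(struct mg_connection *conn, enum mg_event ev) {
        /* server_param is the pointer passed to mg_create_server() in main() */
        struct myclass *self = (struct myclass *) conn->server_param;
        const char *path;
        switch (ev) {
            case MG_AUTH:
                return MG_TRUE;
            case MG_REQUEST:
                path = pick_file(self, conn->uri);
                if (path == NULL) return MG_FALSE;  /* let mongoose reply 404 */
                mg_send_file(conn, path);           /* as in the send_file example */
                return MG_MORE;                     /* file is streamed asynchronously */
            default:
                return MG_FALSE;
        }
    }

    int main(void) {
        struct myclass obj = { "hello.txt" };
        struct mg_server *server = mg_create_server(&obj, ev_handler); /* obj -> server_param */
        mg_set_option(server, "listening_port", "8080");
        for (;;) mg_poll_server(server, 1000);
    }

The same works from C++: the object can be a real class instance, as long as
ev_handler casts server_param back to the correct type.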

Thanks,
Giridhar.

-- 
You received this message because you are subscribed to the Google Groups 
mongoose-users group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to mongoose-users+unsubscr...@googlegroups.com.
To post to this group, send email to mongoose-users@googlegroups.com.
Visit this group at http://groups.google.com/group/mongoose-users.
For more options, visit https://groups.google.com/d/optout.


Re: Significance of PID files

2014-07-06 Thread Giridhar Addepalli
At the daemon level.

Thanks,
Giridhar.


On Fri, Jul 4, 2014 at 11:03 AM, Vijaya Narayana Reddy Bhoomi Reddy 
vijay.bhoomire...@gmail.com wrote:

 Vikas,

  Its main use is to keep one process at a time... like only one datanode at
  any host - Can you please elaborate in more detail?

  What is meant by one process at a time? At what level does a pid file come
  into the picture, i.e. is it at a daemon level, job level, task level etc?

 Thanks
 Vijay


 On 4 July 2014 10:36, Vikas srivastava vikas_srivas...@apple.com wrote:

  Pid files are used to store the pid of a particular process. Their main use
  is to keep one process at a time... like only one datanode at any host.
  On Jul 4, 2014 10:00 AM, Vijaya Narayana Reddy Bhoomi Reddy 
 vijay.bhoomire...@gmail.com wrote:

 Hi,

 Can anyone please explain the significance of the pid files in Hadoop
 i.e. purpose and usage etc?

  Thanks & Regards
 Vijay
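
To make the "one process at a time" point concrete, a generic illustration
(plain Java 11+, not Hadoop's actual mechanism, which lives in the
hadoop-daemon.sh shell script; the path below is hypothetical): the daemon
records its pid in the file at startup, and a second start attempt refuses
to run while that pid is still alive.

    import java.nio.file.Files;
    import java.nio.file.Path;

    // Generic pid-file sketch: enforce a single running instance per host.
    public class PidFileSketch {
        public static void main(String[] args) throws Exception {
            Path pidFile = Path.of("/tmp/hadoop-datanode.pid"); // hypothetical location
            if (Files.exists(pidFile)) {
                long pid = Long.parseLong(Files.readString(pidFile).trim());
                boolean alive = ProcessHandle.of(pid)
                        .map(ProcessHandle::isAlive).orElse(false);
                if (alive) {
                    System.err.println("datanode already running as pid " + pid);
                    return; // refuse to start a second instance
                }
            }
            // Stale or missing pid file: record our own pid and start up.
            Files.writeString(pidFile, Long.toString(ProcessHandle.current().pid()));
            System.out.println("started; pid recorded in " + pidFile);
        }
    }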





Regarding Quorum Journal protocol used in HDFS

2014-06-18 Thread Giridhar Addepalli
Hi,

We are trying to understand Quorum Journal Protocol (HDFS-3077)

We came across a scenario in which the active namenode was terminated and the
standby namenode took over as the new active namenode. But we could not
understand why the active namenode got terminated in the first place.

Scenario :

We have 3 nodes ( n1, n2, n3 )

n1 acts as Active NameNode, JournalNode
n2 acts as StandBy NameNode, JournalNode
n3 acts as JournalNode

JournalNode process on n3 is down when
segment edits_inprogress_005 is created.

JournalNode process is up on n1 & n2.
n1 and n2 have edits_inprogress_005; n3 doesn't have it.

Now before edit log roll over happened, we started JournalNode process on
n3 & stopped JournalNode process on n2.

Now when namenode on n1 tried to finalize inprogress log segment ( upon
instruction from standby namenode on n2 after edit log roll over time has
passed ), namenode process on n1 got terminated.
Standby Namenode on n2 took over as Active now.
After this, the following are the logs on n1, n2, n3 in the directory:
/var/lib/hadoop-hdfs/cache/hdfs/dfs/journal/sample-cluster/current

n1:

-rw-r--r-- 1 hdfs hdfs 1.0M Jun 18 21:07
edits_005-006

-rw-r--r-- 1 hdfs hdfs 1.0M Jun 18 21:07
edits_inprogress_007


n2:

-rw-r--r-- 1 hdfs hdfs 1.0M Jun 18 21:02
edits_inprogress_005


n3:

-rw-r--r-- 1 hdfs hdfs 1.0M Jun 18 21:07
edits_005-006

-rw-r--r-- 1 hdfs hdfs 1.0M Jun 18 21:07
edits_inprogress_007

Please help us understand why the NameNode process on n1 got terminated even
though 2 journal nodes (n1 & n3) were running when n1 tried to finalize
the log segment.

Even though in the above scenario we configured our 3 node cluster with
automatic failover, we are only planning for manual failover in our
production cluster.

Given this, the above scenario looks problematic because it requires manual
intervention in our case.

Is it recommended to have manual failover when using QJM?
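
For concreteness, here is a self-contained sketch (plain Java, not Hadoop's
actual QuorumCall code) of the arithmetic the writer must satisfy; one
plausible reading of the scenario above is that n3's JournalNode, although
running, never received edits_inprogress_5 and so could not finalize it,
leaving only one success out of three:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative only: models the majority rule, not Hadoop internals.
    public class QuorumSketch {
        public static void main(String[] args) {
            int total = 3;
            int majority = total / 2 + 1;   // 2 of 3 JournalNodes must succeed

            // Plausible outcomes of finalizeLogSegment in the scenario above:
            Map<String, Boolean> responses = new LinkedHashMap<>();
            responses.put("n1", true);    // JN up and has the in-progress segment
            responses.put("n2", false);   // JN process stopped
            responses.put("n3", false);   // JN up, but the segment was never written there

            long successes = responses.values().stream().filter(ok -> ok).count();
            System.out.println(successes >= majority
                    ? "quorum reached: segment finalized"
                    : "quorum not reached: writer NameNode aborts");
        }
    }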

Thanks,

Giridhar.


Re: Regarding Quorum Journal protocol used in HDFS

2014-06-18 Thread Giridhar Addepalli
Just wanted to be more clear.

Now when namenode on n1 tried to finalize inprogress log segment ( upon
instruction from standby namenode on n2 after edit log roll over time has
passed ), namenode process on n1 got terminated (*because it could not get a
quorum of responses*).

Thanks,
Giridhar.


On Wed, Jun 18, 2014 at 10:08 PM, Giridhar Addepalli giridhar1...@gmail.com
 wrote:

 Hi,

 We are trying to understand Quorum Journal Protocol (HDFS-3077)

 Came across a scenario in which active name node is terminated and standby
 namenode took  over as new active namenode. But we could not understand why
 active namenode got terminated in the first place.

 Scenario :

 We have 3 nodes ( n1, n2, n3 )

 n1 acts as Active NameNode, JournalNode
 n2 acts as StandBy NameNode, JournalNode
 n3 acts as JournalNode

 JournalNode process on n3 is down when
 segment edits_inprogress_005 is created.

 JournalNode process is up on n1 & n2
 n1 and n2 has edits_inprogress_005 & n3 doesn't have it

 Now before edit log roll over happened , we started JournalNode process on
 n3 & stopped JournalNode process on n2.

 Now when namenode on n1 tried to finalize inprogress log segment ( upon
 instruction from standby namenode on n2 after edit log roll over time has
 passed ), namenode process on n1 got terminated.
 Standby Namenode on n2 took over as Active now.
 After this following are the logs on n1, n2 , n3 in directory ::
 /var/lib/hadoop-hdfs/cache/hdfs/dfs/journal/sample-cluster/current

 n1:

 -rw-r--r-- 1 hdfs hdfs 1.0M Jun 18 21:07
 edits_005-006

 -rw-r--r-- 1 hdfs hdfs 1.0M Jun 18 21:07
 edits_inprogress_007


 n2:

 -rw-r--r-- 1 hdfs hdfs 1.0M Jun 18 21:02
 edits_inprogress_005


 n3:

 -rw-r--r-- 1 hdfs hdfs 1.0M Jun 18 21:07
 edits_005-006

 -rw-r--r-- 1 hdfs hdfs 1.0M Jun 18 21:07
 edits_inprogress_007

 Please help us understand why NameNode process on n1 got terminated even
 though 2 journal nodes ( n1 & n3 ) were running when n1 tried to finalize
 the log segment.

 Even though in the above scenario we configured our 3 node cluster with
 automatic failover, we are only planning for manual failover in our
 production cluster.

 Given this, above scenario looks problematic because it requires manual
 intervention in our case.

 Is it recommended to have manual failover when using QJM ?

 Thanks,

 Giridhar.



Bug#751511: xfslibs-dev: Incorrect licensing information in debian/copyright

2014-06-13 Thread Y Giridhar Appaji Nag
Source: xfslibs-dev
Severity: serious
Justification: Policy 4.5

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

It looks like the xfslibs-dev package source files are licensed under LGPL
v2.1, however the debian/copyright file indicates that only the libhandle
package is under LGPL and that the rest of the package is licensed under GPL.

Giridhar

- -- System Information:
Debian Release: jessie/sid
  APT prefers testing-updates
  APT policy: (500, 'testing-updates'), (500, 'testing')
Architecture: amd64 (x86_64)

Kernel: Linux 3.13-1-amd64 (SMP w/2 CPU cores)
Locale: LANG=en_IN, LC_CTYPE=en_IN (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash

-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBCAAGBQJTmwnSAAoJEKjSzROqVEqhm4wQAIVHEEsJ1+YkbxQOx8ubKHov
hW4HJK1h6PoBudiyK+RSBIekl8WV08qvwaUJHg6xPodH3N/WhfDw7vduQv97s5Hr
DJ92DWA5j06XUOKNPsRFGi6mGL2VVPgOkb+3xGPWgTLpAeUfbkukOStp7Gi8tOEI
6REM2z3uh8krCzuIsB1RS4j8SWXFVy3OE0ROKfL92JsWyVgUpRmDdU0nwEmhjrfn
7Bk4ecEaZA3nXeT5FY2yAEtk0hIat/rtCYe66lJw6k7FmogD5W46tb6U2bgtmqt1
I1JGDivQgbBd1pogxUdgNyK4lizSkTOdg24WuaGHMhI7sH6+DHMPOiC81INsf1qY
6q8OB7naez3x8BFGMH03BL6wpn1yVAd+m0nEXJLluu3niLlOHjSTLAbqiX8h+n2b
PyXFnTViaGI8oYGHbTnOC+t/o+mbnbIE7hnsfdDhVXqDhZo5dJtWl4VxNuOBTOV8
7ZiNH4y8kHwU8yT6dW/dGLCYxWvE+RvTGhgL/V0tO2ipkuVUmZaMyvvUuk6xSfER
wSf1AUWhhgJBJzGRZj4UglWJ99MaZMxYzLMJq20uQmTSzlnN//vLsmjMcfFcx61f
33AaniHi0fbOwPHV5lecxjyxKgjo5CSd0EqaCV7fenAB1Tr6mKXB3frrOJMxbhlr
c69uwhjzVNZRP8tudDtC
=yzqb
-END PGP SIGNATURE-


-- 
To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org






[AngularJS] Date Time is not displaying properly in angularjs bootstrap

2014-05-18 Thread giridhar mungi
<div data-datetime data-date-format="-mm-dd hh:mm:ss"
     data-link-field="dtp_input1">
    <input data-ng-model="ContactTime" name="contactTime" id="contactTime"
           class="form-control" size="16" type="text">
</div>

myApp.directive(datetime, function ($rootScope) {
return {
restrict: 'A',
link: function(scope, element, attrs) {
element.click(function(){
element.datetimepicker({
dateFormat:'-mm-dd HH:mm:ss',
autoclose: 1,
todayHighlight: true,
showMeridian:1
});
});
}
};
});





Hi friends, here is my div, which displays the date and time with the help
of an Angular directive. But the minutes and seconds are not displaying
properly. Did I do anything wrong in the datetime directive?

For example, when I select the time 17-May-2014 2:35, it displays as
17-May-2014 2:05. Minutes always display as 05; seconds also display
wrongly.

I think the format is wrong. Please help me.
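
If this is the smalot bootstrap-datetimepicker (an assumption, suggested by
the data-link-field attribute), the format tokens are the likely culprit:
that plugin's option is named 'format', its minute token is 'ii', and 'mm'
means month there, which would explain the minutes field sticking at a
month-like 05. A sketch of the directive under that assumption, also
initializing the picker once in the link function rather than on every
click:

    // Sketch assuming the smalot bootstrap-datetimepicker: the minute token
    // is 'ii' (not 'mm', which is month there); the option name is 'format'.
    myApp.directive('datetime', function () {
        return {
            restrict: 'A',
            link: function (scope, element) {
                // Initialize once, not inside a click handler, so the widget
                // is not re-created on every click.
                element.datetimepicker({
                    format: 'yyyy-mm-dd hh:ii:ss', // ii = minutes in this plugin
                    autoclose: true,
                    todayHighlight: true,
                    showMeridian: true
                });
            }
        };
    });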

-- 
You received this message because you are subscribed to the Google Groups 
AngularJS group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to angular+unsubscr...@googlegroups.com.
To post to this group, send email to angular@googlegroups.com.
Visit this group at http://groups.google.com/group/angular.
For more options, visit https://groups.google.com/d/optout.

