Need information for connecting to mariadb from remote machine
Hi,

I am connecting to MariaDB from a remote machine. I am executing these commands from bash:

@when 'relation_name.available'
function mariadb_install_check(){
    db_host=`relation-get host`
    db_user=`relation-get user`
    db_pass=`relation-get password`
    mysql -h db_host -u db_user -p db_pass
}

While connecting to MySQL I am getting this error:

ERROR 1045 (28000): Access denied for user

Please let me know what grant permission I can provide from MariaDB to the default user of the remote machine. The remote machine user I am getting is: ietohvoibaitaik

Rajith
IBM AIX Certified, OCP Certified
Cell: 9901966577
Email: rajith...@in.ibm.com

From: Rajith P Venkata/India/IBM
To: Daniel Bartholomew, juju
Date: 14-06-16 11:31 PM
Subject: Need information for performing housekeeping tasks on mariadb from a remote machine

Hi,

I have deployed mariadb in unit 1 and want to perform the following tasks from the RTM charm, which is in unit 2:

rm -f /var/lib/mysql/ib_logfile*
service mysql stop
rm -f /var/lib/mysql/ibdata*

Please let me know how I can perform these housekeeping tasks.

Rajith

--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
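For what it's worth, two things in that snippet look off, independent of any GRANT: the mysql invocation passes the literal words db_host/db_user/db_pass rather than the variables, and `-p` followed by a space makes mysql prompt for a password and treat the next word as a database name. A hedged sketch of the corrected invocation (the values below are placeholders, since `relation-get` only works inside a hook context):

```shell
# Placeholder values standing in for what `relation-get host` etc. would
# return inside a charm hook; relation-get itself is unavailable here.
db_host=10.0.3.46
db_user=ietohvoibaitaik
db_pass=secret

# Original: mysql -h db_host -u db_user -p db_pass
#   - db_host/db_user/db_pass were sent as literal words, and
#   - "-p db_pass" prompts for a password and treats db_pass as the DB name.
# Corrected: expand the variables and attach the password directly to -p.
cmd="mysql -h $db_host -u $db_user -p$db_pass"
echo "$cmd"
```

Inside the real handler this would read: mysql -h "$db_host" -u "$db_user" -p"$db_pass"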
Re: Need information for connecting to mariadb from remote machine
On Mon, Jun 20, 2016 at 5:51 AM, Rajith P Venkata wrote:
> I am connecting to MariaDB from a remote machine. I am executing these
> commands from bash:
>
> @when 'relation_name.available'
> function mariadb_install_check(){
>     db_host=`relation-get host`
>     db_user=`relation-get user`
>     db_pass=`relation-get password`
>     mysql -h db_host -u db_user -p db_pass
> }

Didn't know that is a proper bash function; learning something new every day.

> While connecting to MySQL I am getting this error: ERROR 1045 (28000):
> Access denied for user

Have you tried connecting manually as that user, first from the mariadb server itself and then from the remote host?

> Please let me know what grant permission I can provide from MariaDB to
> the default user of the remote machine.

I think that is a mysql (and derivatives) question, not a juju one. The error message you got might provide a hint.

> [snip]
Re: Need information for connecting to mariadb from remote machine
Hi,

To be more precise, I am getting the below error when I execute this:

ubuntu@charm-local-machine-5:~$ mysql -h 10.0.3.46 -u ietohvoibaitaik
ERROR 1045 (28000): Access denied for user 'ietohvoibaitaik'@'10.0.3.76'

Rajith

On 20-06-16 03:46 PM, Mauricio Tavares wrote:
> [snip]
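That error means the MariaDB server has no matching account for 'ietohvoibaitaik'@'10.0.3.76' (or the password is wrong). A hedged sketch of the server-side grant, to be run as an admin user on 10.0.3.46 — the database name `mydb` and the password are placeholders, not values from the relation:

```sql
-- 'mydb' and 'the-relation-password' are placeholders; restricting the
-- host part to '10.0.3.76' limits access to that one client.
-- On MariaDB of this era, GRANT ... IDENTIFIED BY also creates the user.
GRANT ALL PRIVILEGES ON mydb.* TO 'ietohvoibaitaik'@'10.0.3.76'
    IDENTIFIED BY 'the-relation-password';
FLUSH PRIVILEGES;
```

If the charm's relation already created the user, the existing grant may simply be scoped to a different host (e.g. 'localhost'); `SELECT user, host FROM mysql.user;` on the server would show which.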
Re: Need information for connecting to mariadb from remote machine
On Mon, Jun 20, 2016 at 6:23 AM, Rajith P Venkata wrote:
> To be more precise, I am getting the below error when I execute this:
>
> ubuntu@charm-local-machine-5:~$ mysql -h 10.0.3.46 -u ietohvoibaitaik
> ERROR 1045 (28000): Access denied for user 'ietohvoibaitaik'@'10.0.3.76'

1. Your command seems to be incomplete. How are you passing the password?
2. Have you tried, as I suggested, connecting from 10.0.3.46 instead of 10.0.3.76?

> [snip]
Re: Need information for connecting to mariadb from remote machine
Hi,

Please check this: I am connecting to mariadb from the remote machine 10.0.3.76. Mariadb is installed on 10.0.3.46.

Rajith

On 20-06-16 04:02 PM, Mauricio Tavares wrote:
> [snip]
nfs kernel server installation failed in lxd container
Hi,

I am using NFS for file sharing in my charm. To install nfs-kernel-server in an LXC container (Juju 1.25), I had done the below steps on my host machine:

apt-get install nfs-common
modprobe nfsd
mount -t nfsd nfsd /proc/fs/nfsd

Then I edited /etc/apparmor.d/lxc/lxc-default, added the following lines to it, and restarted apparmor:

mount fstype=nfs,
mount fstype=nfs4,
mount fstype=nfsd,
mount fstype=rpc_pipefs,

By doing this and the further steps of installing the NFS server and NFS client, I was able to share files between the LXC containers.

Now when I run the same charm on Juju 2.0 (LXD containers), my charm fails because the nfs-kernel-server installation fails. In the logs I see the below messages:

A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
invoke-rc.d: initscript nfs-kernel-server, action "start" failed.
dpkg: error processing package nfs-kernel-server (--configure):
 subprocess installed post-installation script returned error exit status 1

On doing journalctl -xe, I see lots of "Operation not permitted" messages and "Failed to mount NFSD configuration filesystem" error messages.

Can anyone please help me in resolving the above issue and configuring the NFS server on LXD containers?

Thanks and Regards,
Shilpa Kaul
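The apparmor edit that worked for LXC has an LXD-era counterpart via per-container config. A hedged sketch, not a verified recipe: `raw.apparmor` and `security.privileged` are standard LXD config keys, the container name below is a placeholder, and nfsd inside an unprivileged container may still be refused by the kernel regardless of the apparmor rules:

```shell
# Placeholder container name; find the real one with `lxc list`.
lxc config set juju-machine-2 raw.apparmor 'mount fstype=nfs*, mount fstype=rpc_pipefs,'
# The NFS kernel server generally still needs a privileged container:
lxc config set juju-machine-2 security.privileged true
lxc restart juju-machine-2
```

The "Failed to mount NFSD configuration filesystem" message matches the `mount -t nfsd` step being blocked, which is what these settings try to allow.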
Re: Charm push crashes when uploading big charms
Hello Merlijn,

I can replicate the problem, and I can work around it by using a faster internet connection. At some point, TCP connections have to time out; I can only replicate the issue when that timeout is reached. If you have the means to relocate to a faster internet connection temporarily for pushing to the charmstore, please do. You might also try recompressing any items in the charm using a higher compression level: xz -9 instead of gzip -3, or whatever things may be using now.

We are aware this is a poor long-term solution and are investigating better solutions for uploads. As you've mentioned, resources will also help the situation.

I am sorry that I do not have a better solution.

--
Jay

On Fri, Jun 17, 2016 at 4:29 PM, Rick Harding wrote:
> Merlijn, thanks. I'm going to bet there's an issue with http request sizes
> for the charmstore that the charm command talks to, as we've got some layers
> (Apache, Squid) in front of the actual application. The team is looking
> into it. Thanks for giving us the heads up.
>
> On Wed, Jun 15, 2016 at 10:28 AM Merlijn Sebrechts <merlijn.sebrec...@gmail.com> wrote:
>> Hi all
>>
>> I've hit a roadblock in setting up my CI pipeline. I have a charm with a
>> Java SDK blob of ~200MB. Pushing this charm to the store causes the tool
>> to crash. I put a bug report here:
>> https://bugs.launchpad.net/juju/+bug/1592822
>>
>> Juju resources would fix my problem, but then I'd need to move to Juju
>> 2.0 and I'm not ready to do that yet.
>>
>> Kind regards
>> Merlijn Sebrechts
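The recompression suggestion can be checked locally before repacking the charm. A small self-contained sketch (the payload is synthetic; real savings depend on how compressible the SDK blob is):

```shell
# Compare gzip -3 against xz -9 on a compressible stand-in payload;
# on redundant data xz -9 is usually markedly smaller.
set -e
workdir=$(mktemp -d)
# ~1MB of repetitive "payload" standing in for a charm's bundled blob:
yes 'some repetitive charm payload line' | head -n 30000 > "$workdir/blob"
gzip -3 -c "$workdir/blob" > "$workdir/blob.gz"
xz   -9 -c "$workdir/blob" > "$workdir/blob.xz"
gz_size=$(wc -c < "$workdir/blob.gz")
xz_size=$(wc -c < "$workdir/blob.xz")
echo "gzip -3: ${gz_size} bytes; xz -9: ${xz_size} bytes"
rm -rf "$workdir"
```

Running the same comparison on the actual 200MB SDK blob would show whether the upload can be shrunk enough to finish before the timeout.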
Re: Charm push crashes when uploading big charms
Thanks for looking into this! I'll try the compression and see if that works.

Just curious: why does file size affect the TCP connection timeout? Aren't the files broken up into a bunch of smaller TCP packets? File size should only affect the number of TCP packets, not their size, so getting a TCP timeout because of file size seems strange to me. Any idea what exactly goes wrong here?

Also, now that I think of it, the resource upload command might also be affected by this, if it uses the same library and similar backend infrastructure? I'll test this out.

On Monday 20 June 2016, Jay Wren wrote:
> [snip]
Re: Charm push crashes when uploading big charms
Yes, files are broken up into many TCP packets, and they are all transmitted over a single TCP connection. TCP is a complex protocol which is well documented, so I'll not repeat that here. If you want lots of details, Wikipedia is not bad:
https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Protocol_operation

In the abstract, when you connect to a server using TCP, the connection is identified by a 4-tuple of source address, source port, target address, and target port. These connections consume server resources, and indeed connection exhaustion is a popular denial-of-service attack.

You are getting a TCP timeout due to file size because the time it takes to send the entire content is longer than the TCP connection timeout.

Yes, the resource upload command to the charmstore will also be affected by this. Luckily, resources can also be uploaded directly to a model, which might have greater network data rates from the resource uploader.

--
Jay

On Mon, Jun 20, 2016 at 7:04 PM, Merlijn Sebrechts wrote:
> [snip]
Re: Charm push crashes when uploading big charms
On 20/06/16 17:41, Jay Wren wrote:
> You are getting a TCP timeout due to file size because the time it
> takes to send the entire content is longer than the TCP connection
> timeout.

Unlikely. A TCP timeout would be the time between packets, not the total time of a session ;)
Re: Charm push crashes when uploading big charms
Thanks for the explanation, Jay.

I did some further testing. Charm upload fails for a 270MB charm from my home, my work, and our datacenter.

- The datacenter is connected directly to Belnet (upload bandwidth ~300 Mbit/s).
- My upload bandwidth at home is ~3 Mbps (speedtest.net), although during upload the system monitor shows ~400 KiB/s.

This makes me think there is more at play here than large file + slow internet. Let me know if I can help to further debug this problem.

As an aside, I don't consider 270MB to be that large. Some examples:

- Kubernetes is ~1G
- The Ubuntu docker base image is ~200MB

I think this is stuff we should be able to handle...

2016-06-20 18:41 GMT+02:00 Jay Wren:
> [snip]
Re: Charm push crashes when uploading big charms
If I had to upload 270MB from my home I'd be waiting 3 weeks. What's the timeout set to? ;)

On 20 Jun 2016 19:30, "Merlijn Sebrechts" wrote:
> [snip]
Re: Charm push crashes when uploading big charms
From the bug report: "The timeout is apache setting `Timeout` which defaults to 300." https://bugs.launchpad.net/ubuntu/+source/charm/+bug/1592822

2016-06-20 20:32 GMT+02:00 Tom Barber:
> If I had to upload 270mb from my home I'd be waiting 3 weeks. What's the
> timeout set to? ;)
>
> On 20 Jun 2016 19:30, "Merlijn Sebrechts" wrote:
>> Thanks for the explanation, Jay.
>>
>> I did some further testing. Charm upload fails for a 270MB charm both
>> from my home, my work, and our datacenter.
>>
>>  - The datacenter is connected directly to Belnet (upload bandwidth
>>    ~300Mbit/s).
>>  - My upload bandwidth at home is ~3 Mbps (speedtest.net), although
>>    during upload, system monitor shows ~400KiB/s.
>>
>> This causes me to think there is more at play here than large file + slow
>> internet... Let me know if I can help to further debug this problem.
>>
>> As an aside, I don't consider 270MB to be that large. Some examples:
>>
>>  - Kubernetes is ~1G
>>  - The Ubuntu docker base image is ~200MB
>>
>> I think this is stuff we should be able to handle...
>>
>> 2016-06-20 18:41 GMT+02:00 Jay Wren:
>>> Yes, files are broken up into many TCP packets, and they are all
>>> transmitted over a single TCP connection. TCP is a complex protocol which
>>> is well documented, so I'll not repeat that here. If you want lots of
>>> details, Wikipedia is not bad:
>>> https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Protocol_operation
>>>
>>> In the abstract, when you connect to a server using TCP, the connection
>>> is identified by a 4-tuple of source address, source port, target address,
>>> and target port. These connections consume server resources, and indeed
>>> connection exhaustion is a popular denial-of-service attack.
>>>
>>> You are getting a TCP timeout because of file size: the time it takes to
>>> send the entire content is longer than the TCP connection timeout.
>>>
>>> Yes, the resource upload command to the charmstore will also be affected
>>> by this. Luckily, resources also have the ability to be uploaded
>>> specifically to a model, which might have greater network data rates from
>>> the resource uploader.
>>>
>>> --
>>> Jay
>>>
>>> On Mon, Jun 20, 2016 at 7:04 PM, Merlijn Sebrechts
>>> <merlijn.sebrec...@gmail.com> wrote:
>>>> Thanks for looking into this! I'll try the compression and see if that
>>>> works.
>>>>
>>>> Just curious; why does filesize affect the TCP connection timeout?
>>>> Aren't the files broken up into a bunch of smaller TCP packets? Filesize
>>>> should only affect the number of TCP packets, not their size, so getting
>>>> a TCP timeout because of filesize seems strange to me... Any idea what
>>>> exactly goes wrong here?
>>>>
>>>> Also, now that I think of it, the resource upload command might also be
>>>> affected by this, if it uses the same library and similar backend
>>>> infrastructure? I'll test this out.
>>>>
>>>> On Monday, 20 June 2016, Jay Wren wrote:
>>>>> Hello Merlijn,
>>>>>
>>>>> I can replicate the problem, and I can work around it by using a
>>>>> faster internet connection.
>>>>>
>>>>> At some point, TCP connections have to time out. I can only replicate
>>>>> the issue when that timeout is reached. If you have the means to
>>>>> relocate to a faster internet connection temporarily for pushing to the
>>>>> charmstore, please do. You might also try recompressing any items in
>>>>> the charm using a higher compression level: xz -9 instead of gzip -3,
>>>>> or whatever things may be using now.
>>>>>
>>>>> We are aware this is a poor long-term solution. We are investigating
>>>>> better solutions for uploads. As you've mentioned, resources will also
>>>>> help the situation.
>>>>>
>>>>> I am sorry that I do not have a better solution.
>>>>>
>>>>> --
>>>>> Jay
>>>>>
>>>>> On Fri, Jun 17, 2016 at 4:29 PM, Rick Harding
>>>>> <rick.hard...@canonical.com> wrote:
>>>>>> Merlijn, thanks. I'm going to bet there's an issue with HTTP request
>>>>>> sizes for the charmstore that the charm command talks to, as we've got
>>>>>> some layers (Apache, Squid) in front of the actual application. The
>>>>>> team is looking into it. Thanks for giving us the heads up.
>>>>>>
>>>>>> On Wed, Jun 15, 2016 at 10:28 AM Merlijn Sebrechts
>>>>>> <merlijn.sebrec...@gmail.com> wrote:
>>>>>>> Hi all
>>>>>>>
>>>>>>> I've hit a roadblock in setting up my CI pipeline. I have a charm
>>>>>>> with a Java SDK blob of ~200MB. Pushing this charm to the store
>>>>>>> causes the tool to crash. I put a bug report here:
>>>>>>> https://bugs.launchpad.net/juju/+bug/1592822
>>>>>>>
>>>>>>> Juju resources would fix my problem, but then I'd need to move to
>>>>>>> Juju 2.0 and I'm not ready to do that yet.
>>>>>>>
>>>>>>> Kind regards
>>>>>>> Merlijn Sebrechts
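For reference, the Apache `Timeout` directive quoted from the bug report defaults to 300 seconds, which a slow upload of a large charm can easily exceed. A hedged sketch of what raising it looks like, assuming a stock Apache 2.4 front end (the file location and value here are illustrative, not the actual production configuration):

```apache
# /etc/apache2/apache2.conf (illustrative location)
# Timeout covers, among other things, the time Apache waits between
# blocks of data on a request body. The default of 300 seconds can be
# exceeded by a multi-hundred-MB upload on a slow link, causing the
# connection to be dropped mid-transfer.
Timeout 3600
```

After changing the directive, the server would need a reload (e.g. `service apache2 reload`) for it to take effect.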
Re: Charm push crashes when uploading big charms
So if that's 300 days, that should give you plenty of time ;)

2016-06-20 20:38 GMT+02:00 Merlijn Sebrechts:
> From the bug report: "The timeout is apache setting `Timeout` which
> defaults to 300."
>
> https://bugs.launchpad.net/ubuntu/+source/charm/+bug/1592822
>
> [earlier quoted thread snipped]
Re: Charm push crashes when uploading big charms
Thanks for the further testing. Now I'm questioning how in the world I was able to see the same error. I will continue my testing to attempt to reproduce the error. Also, the Apache Timeout 300 should behave exactly as Mark said. I'm still trying to find what is causing this failure.

On Mon, Jun 20, 2016 at 9:49 PM, Merlijn Sebrechts
<merlijn.sebrec...@gmail.com> wrote:
> So if that's 300 days, that should give you plenty of time ;)
>
> [earlier quoted thread snipped]
DevOps Days SLC: Juju, MAAS, & Snaps!
Hey everyone,

Jorge Castro and I spent the last two days at DevOps Days Salt Lake City. We were "blue square" sponsors, which gave us table space and a one-minute pitch. This was the first stop of our "tour de force" for the Juju 2.0 launch, and we made a lot of refinements to the "pitch" and talking points. Overall this was a fantastic event, and the first year SLC has had a DevOps Days. The attendees and conversations were great. Jorge and I have highlighted a few of the conversations and counterpoints we got when engaging with the conference goers.

To start, we got a one-minute pitch to the attendees on the first day. Unlike other conferences, where all the sponsors line up at once and everyone leaves while we pitch, they interspersed the pitches throughout the day. This meant Jorge pitched to a really engaged audience, and he framed the problem space really well while also identifying us as a solution. During the setup for the pitch (audio problems), Jorge polled the room and found that, of the 250 attendees, roughly 80% raised their hands as using Ubuntu in one way or another.

Booth talk was surprising: a lot of people develop code on, or use, Ubuntu/Linux but deploy mostly, or exclusively, to Microsoft Server. Being able to highlight our native PowerShell support for charms and Windows deployments in MAAS very early on turned a lot of attendees from swag hunters into engaged listeners. Among the other people we talked to, we found a lot of the Ubuntu stack between IoT, Desktop, and Cloud applied in one way or another.

Here are a few highlights of people we met and their feedback or interests.

Blake from Vivint Security. Vivint Security is a local security company that produces everything from wall warts to cloud services for customers. They produce units in house along with the rest of the software suite. With a mixture of hardware and cloud, the Ubuntu Core/Snappy, MAAS, and Juju story was pretty appealing. We'll be following up with Blake later this week to continue the conversation.

Several SlingTV developers dropped by and let us know that they run exclusively on Ubuntu.

Health Catalyst has a lot of acquired software spanning different deployment types across Windows, CentOS, and Ubuntu. Most of this runs on VMware; the engineer who stopped by our booth was most interested in MAAS.

Booz Allen Hamilton: this was someone from the training department, once technical a long time ago. After we explained the idea behind Juju, he seemed really interested in the problem space, as it struck some notes with what Booz does as a consulting company.

Tyler Bird from Stark & Wayne: this is a Cloud Foundry/BOSH company that pretty much does Cloud Foundry deployments for customers. The conversation started and ended with the engineer still thinking BOSH was better; considering they're only doing CF, we couldn't convince them otherwise.

Aaron Mildenstein from Elastic: we showed off, and showered with compliments, the elastic stack bundle, which is elasticsearch, kibana, logstash, and beats. Aaron was really interested in snap packages, as he currently has to build 8+ different packages to distribute Elastic software. Jorge is working closely with him via email to get him in touch with the Snappy people, and we've worked to connect the dots that Elastic snapping everything means charms are easier to maintain. We're looking to see if this might help make inroads to finally getting an elastic CPP.

Finally, Brad Woodward from Applied Trust. This was someone who tried Juju in the past (1.20) and even filed bugs[0]. He confessed to loving the idea behind Juju but not liking the implementation at the time. We spent quite a lot of time walking through juju 2.0-beta8 and outlining the maturity of the project. While we did caveat heavily that it's still beta, he seemed really engaged and eager to try Juju again. We're working to set up a charm school with him and his colleagues; they've recently finished a terraform + packer + ansible project, and he likened all the work they did to something he'd rather just use Juju for.

Overall this DevOps Days was a success. We made inroads with the organizers for DevOps Days, helped support the first-ever DevOps Days SLC, and really narrowed in on the stories that excite people in this community. We're packing up and moving on to DevOps Days Amsterdam, June 29th-July 1st, as our next stop, where we hope to continue this trend of renewed interest, or new interest, in Juju as core rounds out 2.0.

[0]: https://bugs.launchpad.net/juju-core/+bug/1315497/comments/4

Thanks!

--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Breaking API change landing for 2.0-beta10
Hi folks,

This impacts all people calling the Juju API directly. If you use the juju client, you *should* be fine.

Taking advantage of our time in "beta", we are adding consistency to the wire protocol used by the juju client to talk to the juju apiserver. In general this means that all the parameters are lower-case, with words separated by dashes. Error returns should all be "omitempty", meaning that if there was no error, there is no "error" entry in the return. There is no API facade version bump with this change.

This allows us to start 2.0 with a clean and consistent slate.

The change can be found here: https://github.com/juju/juju/pull/5674

This change will be landing very soon, perhaps by the time you read this message.

Cheers,
Tim

--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju