Re: Nginx Zero Size Buffer Alerts !!

2016-02-26 Thread shahzaib shahzaib
Hi,

Alright then, thanks for the clarification.

Regards.
Shahzaib

On Fri, Feb 26, 2016 at 5:12 PM, Valentin V. Bartenev 
wrote:

> On Friday 26 February 2016 16:05:28 shahzaib shahzaib wrote:
> > Hi,
> >
> > We're seeing many zero-size-buffer alerts related to file_uploader.php in
> > the nginx logs:
> >
> >
> > 2016/02/26 12:56:02 [alert] 71067#0: *12457068 zero size buf in output
> t:0
> > r:0 f:1  -
> 000803E29F08
> > 0-0 while sending request to upstream, client: 182.178.58.123, server:
> > domain.com request: "OPTIONS /actions/file_uploader.php HTTP/1.1",
> > upstream: "fastcgi://unix:/var/run/www.socket:", host: "domain.com"
> >
> > -
> >
> > Here is our Nginx Version :
> >
> > [root@cw001 /usr/local/etc]# nginx -V
> > nginx version: nginx/1.8.0
> > built with OpenSSL 1.0.1j-freebsd 15 Oct 2014 (running with OpenSSL
> > 1.0.1l-freebsd 15 Jan 2015)
> > TLS SNI support enabled
> > configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I
> > /usr/local/include' --with-ld-opt='-L /usr/local/lib'
> > --conf-path=/usr/local/etc/nginx/nginx.conf
> > --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid
> > --error-log-path=/var/log/nginx-error.log --user=www --group=www
> > --with-file-aio --with-ipv6
> > --http-client-body-temp-path=/var/tmp/nginx/client_body_temp
> > --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp
> > --http-proxy-temp-path=/var/tmp/nginx/proxy_temp
> > --http-scgi-temp-path=/var/tmp/nginx/scgi_temp
> > --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp
> > --http-log-path=/var/log/nginx-access.log --with-http_flv_module
> > --with-http_geoip_module --with-http_mp4_module --with-http_realip_module
> > --with-http_secure_link_module --with-http_stub_status_module --with-pcre
> > --with-http_spdy_module --with-http_ssl_module
> >
> > ---
> >
> > We don't have any 3rd-party modules installed with nginx. Please let me
> > know if I am missing something important here that could be the cause of
> > these errors.
> >
> > Thanks in advance !!
> >
> [..]
>
> If you're using spdy, then that is a known issue.  You can ignore this alert.
>
>   wbr, Valentin V. Bartenev
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
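Editor's note: if the alert noise itself is a problem rather than something to ignore, one option (my suggestion, not something the thread prescribes) is to drop SPDY from the listen directive, trading SPDY support for clean logs:

```nginx
# Hypothetical vhost fragment: serve plain HTTPS instead of SPDY so the
# known spdy-related "zero size buf" alert no longer triggers.
server {
    listen 443 ssl;          # was: listen 443 spdy; with "ssl on;"
    server_name domain.com;
    # ... rest of the TLS configuration unchanged ...
}
```

Alternatively, leave SPDY enabled: per the reply above, the alert is harmless.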

Nginx Zero Size Buffer Alerts !!

2016-02-26 Thread shahzaib shahzaib
Hi,

We're seeing many zero-size-buffer alerts related to file_uploader.php in the
nginx logs:


2016/02/26 12:56:02 [alert] 71067#0: *12457068 zero size buf in output t:0
r:0 f:1  - 000803E29F08
0-0 while sending request to upstream, client: 182.178.58.123, server:
domain.com request: "OPTIONS /actions/file_uploader.php HTTP/1.1",
upstream: "fastcgi://unix:/var/run/www.socket:", host: "domain.com"

-

Here is our Nginx Version :

[root@cw001 /usr/local/etc]# nginx -V
nginx version: nginx/1.8.0
built with OpenSSL 1.0.1j-freebsd 15 Oct 2014 (running with OpenSSL
1.0.1l-freebsd 15 Jan 2015)
TLS SNI support enabled
configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I
/usr/local/include' --with-ld-opt='-L /usr/local/lib'
--conf-path=/usr/local/etc/nginx/nginx.conf
--sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid
--error-log-path=/var/log/nginx-error.log --user=www --group=www
--with-file-aio --with-ipv6
--http-client-body-temp-path=/var/tmp/nginx/client_body_temp
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp
--http-proxy-temp-path=/var/tmp/nginx/proxy_temp
--http-scgi-temp-path=/var/tmp/nginx/scgi_temp
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp
--http-log-path=/var/log/nginx-access.log --with-http_flv_module
--with-http_geoip_module --with-http_mp4_module --with-http_realip_module
--with-http_secure_link_module --with-http_stub_status_module --with-pcre
--with-http_spdy_module --with-http_ssl_module

---

We don't have any 3rd-party modules installed with nginx. Please let me know
if I am missing something important here that could be the cause of these
errors.

Thanks in advance !!

Regards.
Shahzaib

Re: Nginx Slow download over 1Gbps load !!

2016-01-31 Thread shahzaib shahzaib
The server is using ports 18 and 19, and those ports are configured with
speed 1000:



LH26876_SW2#sh run int g 0/18

!

interface GigabitEthernet 0/18

description LH28765_3

no ip address

speed 1000

!

port-channel-protocol LACP

  port-channel 3 mode active

no shutdown

LH26876_SW2#sh run int g 0/19

!

interface GigabitEthernet 0/19

description LH28765_3

no ip address

speed 1000

!

port-channel-protocol LACP

  port-channel 3 mode active

no shutdown

LH26876_SW2#


--

Is this alright?

Regards.

Shahzaib

On Sun, Jan 31, 2016 at 11:18 PM, shahzaib shahzaib 
wrote:

> Hi,
>
> Thanks a lot for the response. Now I suspect the issue is on the network
> layer, as I can see lots of retransmitted packets in the netstat -s output.
> Here is the server's status:
>
> http://prntscr.com/9xa6z2
>
> Following is a thread with the same issue:
>
>
> http://serverfault.com/questions/218101/freebsd-8-1-unstable-network-connection
>
> This is what he said in that thread:
>
> "I ran into a problem with Cisco Switchs forcing Negotiation of network
> speeds. This caused intermittent errors and retransmissions. The result was
> file transfers being really slow. May not be the cases, but you can turn of
> speed negotiation with miitools (if I recall correctly, been a long time).
> "
>
> >>Can you replicate using ftp, scp?
> Yes, we recently tried downloading a file over FTP and encountered the same
> slow transfer rate.
>
> >>What's the output of zpool iostat (and the overall zpool/zfs
> configuration)? Also do you have ZFS on top of hardware raid ? In general
> just 12 SATA disks won't have a lot of IOps (especially random read) unless
> it all hits ZFS Arc (can/should be monitored), even more if there is a
> hardware raid underneath (in your place would flash the HBA with IT
> firmware so you get plain jbods managed by ZFS).
>
> zpool iostat looks quite stable so far. We're using an LSI-9211 HBA, so it's
> not a hardware RAID controller; FreeBSD recommends an HBA so ZFS can
> directly access all drives for scrubbing and data-integrity purposes. Do you
> recommend hardware RAID? Following is a screenshot of the ARC status:
>
> http://prntscr.com/9xaf9p
>
> >>How is your switch configured? How are the links negotiated, make sure
> both sides of both links are full duplex 1gig. Look for crc or input errors
> on the interface side.
> On my side, I can see that both interfaces are full duplex. Regarding
> CRC/input errors, is there any command I can use to check that on FreeBSD?
>
> Regards.
> Shahzaib
>
>
> On Sun, Jan 31, 2016 at 11:04 PM, Payam Chychi  wrote:
>
>> Hi,
>>
>> Forget the application layer being the problem until you have
>> successfully replicated the problem in several different setups.
>>
>> Are you monitoring both links utilization levels? Really sounds like a
>> network layer problem or something with your ip stack.
>>
>> Can you replicate using ftp, scp?
>>
>> How is your switch configured? How are the links negotiated, make sure
>> both sides of both links are full duplex 1gig. Look for crc or input errors
>> on the interface side.
>>
>> How many packets are you pushing? Make sure the switch isn't activating
>> unicast limiting.
>>
>> Lots of things to check... It would help if you could tell us what tests
>> you've done to determine it's nginx.
>>
>> Thanks
>>
>> --
>> Payam Chychi
>> Network Engineer / Security Specialist
>>
>> On Sunday, January 31, 2016 at 9:33 AM, Reinis Rozitis wrote:
>>
>> This is a bit out of scope of nginx but ..
>>
>> could be a network issue or LACP issue but it doesn't look like it is
>>
>>
>> How did you determine this?
>> Can you generate more than 1 Gbps (without nginx)?
>>
>>
>> 12 x 3TB SATA Raid-10 (HBA LSI-9211)
>> ZFS FileSystem with 18TB usable space
>>
>>
>> Please i need guidance to handle with this problem, i am sure that some
>> value needs to tweak.
>>
>>
>> What's the output of zpool iostat (and the overall zpool/zfs
>> configuration)?
>>
>> Also do you have ZFS on top of hardware raid ?
>>
>> In general just 12 SATA disks won't have a lot of IOps (especially random
>> read) unless it all hits ZFS Arc (can/should be monitored), even more if
>> there is a hardware raid underneath (in your place would flash the HBA
>> with
>> IT firmware so you get plain jbods managed by ZFS).
>>
>> rr
>>
>
>

Re: Nginx Slow download over 1Gbps load !!

2016-01-31 Thread shahzaib shahzaib
Hi,

Thanks a lot for the response. Now I suspect the issue is on the network
layer, as I can see lots of retransmitted packets in the netstat -s output.
Here is the server's status:

http://prntscr.com/9xa6z2

Following is a thread with the same issue:

http://serverfault.com/questions/218101/freebsd-8-1-unstable-network-connection

This is what he said in that thread:

"I ran into a problem with Cisco Switchs forcing Negotiation of network
speeds. This caused intermittent errors and retransmissions. The result was
file transfers being really slow. May not be the cases, but you can turn of
speed negotiation with miitools (if I recall correctly, been a long time)."

>>Can you replicate using ftp, scp?
Yes, we recently tried downloading a file over FTP and encountered the same
slow transfer rate.

>>What's the output of zpool iostat (and the overall zpool/zfs
configuration)? Also do you have ZFS on top of hardware raid ? In general
just 12 SATA disks won't have a lot of IOps (especially random read) unless
it all hits ZFS Arc (can/should be monitored), even more if there is a
hardware raid underneath (in your place would flash the HBA with IT
firmware so you get plain jbods managed by ZFS).

zpool iostat looks quite stable so far. We're using an LSI-9211 HBA, so it's
not a hardware RAID controller; FreeBSD recommends an HBA so ZFS can directly
access all drives for scrubbing and data-integrity purposes. Do you recommend
hardware RAID? Following is a screenshot of the ARC status:

http://prntscr.com/9xaf9p

>>How is your switch configured? How are the links negotiated, make sure
both sides of both links are full duplex 1gig. Look for crc or input errors
on the interface side.
On my side, I can see that both interfaces are full duplex. Regarding
CRC/input errors, is there any command I can use to check that on FreeBSD?

Regards.
Shahzaib
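Editor's note, to partially answer the question above: on FreeBSD, `netstat -i` reports per-interface input/output error counters (Ierrs/Oerrs), and `netstat -s -p tcp` shows the retransmit counters. A small sketch; the column positions and counter wording are assumptions based on stock FreeBSD output, so verify them locally:

```shell
# Keep the header plus any interface showing input or output errors.
# (Assumes FreeBSD's default `netstat -i` layout:
#  Name Mtu Network Address Ipkts Ierrs Idrop Opkts Oerrs Coll)
show_iface_errors() {
  awk 'NR == 1 || $6 > 0 || $9 > 0'
}

# Rough TCP retransmit percentage from `netstat -s -p tcp`-style output.
retransmit_rate() {
  awk '/packets sent/  { sent = $1 }
       /retransmitted/ { retx = $1 }
       END { if (sent) printf "retransmit rate: %.2f%%\n", 100 * retx / sent }'
}

# Typical use on the server:
#   netstat -i | show_iface_errors
#   netstat -s -p tcp | retransmit_rate
```

A steadily climbing Ierrs/Oerrs count or a retransmit rate well above ~1% would point at the link/switch rather than nginx.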


On Sun, Jan 31, 2016 at 11:04 PM, Payam Chychi  wrote:

> Hi,
>
> Forget the application layer being the problem until you have successfully
> replicated the problem in several different setups.
>
> Are you monitoring both links utilization levels? Really sounds like a
> network layer problem or something with your ip stack.
>
> Can you replicate using ftp, scp?
>
> How is your switch configured? How are the links negotiated, make sure
> both sides of both links are full duplex 1gig. Look for crc or input errors
> on the interface side.
>
> How many packets are you pushing? Make sure the switch isn't activating
> unicast limiting.
>
> Lots of things to check... It would help if you could tell us what tests
> you've done to determine it's nginx.
>
> Thanks
>
> --
> Payam Chychi
> Network Engineer / Security Specialist
>
> On Sunday, January 31, 2016 at 9:33 AM, Reinis Rozitis wrote:
>
> This is a bit out of scope of nginx but ..
>
> could be a network issue or LACP issue but it doesn't look like it is
>
>
> How did you determine this?
> Can you generate more than 1 Gbps (without nginx)?
>
>
> 12 x 3TB SATA Raid-10 (HBA LSI-9211)
> ZFS FileSystem with 18TB usable space
>
>
> Please i need guidance to handle with this problem, i am sure that some
> value needs to tweak.
>
>
> What's the output of zpool iostat (and the overall zpool/zfs configuration)?
>
> Also do you have ZFS on top of hardware raid ?
>
> In general just 12 SATA disks won't have a lot of IOps (especially random
> read) unless it all hits ZFS Arc (can/should be monitored), even more if
> there is a hardware raid underneath (in your place would flash the HBA
> with
> IT firmware so you get plain jbods managed by ZFS).
>
> rr
>

Nginx Slow download over 1Gbps load !!

2016-01-30 Thread shahzaib shahzaib
Hi,

We've recently shifted to FreeBSD-10 due to its robust asynchronous I/O
performance for big storage based on .mp4 files. Here are the server specs:

2 x Intel Xeon X5690
96GB DDR3 Memory
12 x 3TB SATA Raid-10 (HBA LSI-9211)
ZFS FileSystem with 18TB usable space
2 x 1Gbps LACP (2Gbps Throughput)

Things are working quite well, with no high I/O thanks to the big RAM cache
and AIO, but once the network port goes above 1Gbps, performance begins to
lag: download speed gets stuck around 60-100KB/s on a 4Mbps connection (using
wget), whereas below 800Mbps it is quite efficient (450KB/s on 4Mbps). We
first thought it could be a network or LACP issue, but it doesn't look like
it is. We also checked whether requests were queuing, using the following
command, but the queue was '0':

[root@cw005 ~/scripts]# netstat -Lan
Current listen queue sizes (qlen/incqlen/maxqlen)
Proto Listen Local Address
tcp4  0/0/6000   *.80
tcp4  0/0/6000   *.443
tcp4  0/0/10 127.0.0.1.25
tcp4  0/0/128*.1880
tcp6  0/0/128*.1880
tcp4  0/0/5  *.5666
tcp6  0/0/5  *.5666
tcp4  0/0/128*.199
unix  0/0/6000   /var/run/www.socket
unix  0/0/4  /var/run/devd.pipe
unix  0/0/4  /var/run/devd.seqpacket.pipe

Here is the mbuf cluster usage:

119747/550133/669880/6127378 mbuf clusters in use (current/cache/total/max)
661065/1410183/2071248/6063689 4k (page size) jumbo clusters in use
(current/cache/total/max)

We also checked disk busy rates using gstat, which were quite stable as well.

So it looks like either some sysctl values need tweaking or the nginx
configuration is not optimized. Here is sysctl.conf:

kern.ipc.somaxconn=6000
# set to at least 16MB for 10GE hosts
kern.ipc.maxsockbuf=16777216
# socket buffers

net.inet.tcp.recvspace=4194304
net.inet.tcp.sendspace=4197152
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
# security
security.bsd.see_other_uids=0
security.bsd.see_other_gids=0
# drop UDP packets destined for closed sockets
net.inet.udp.blackhole=1
# drop TCP packets destined for closed sockets
net.inet.tcp.blackhole=2
# ipfw
net.inet.ip.fw.verbose_limit=3
# maximum incoming and outgoing IPv4 network queue sizes
net.inet.ip.intr_queue_maxlen=2048
net.route.netisr_maxqlen=2048

net.inet.icmp.icmplim=2048
net.inet.tcp.fast_finwait2_recycle=1
kern.random.sys.harvest.ethernet=0
net.inet.ip.portrange.randomized=0
net.link.lagg.0.use_flowid=0
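Editor's note, as a sanity check on the buffer sizes above: kern.ipc.maxsockbuf=16777216 (16 MB) comfortably covers the bandwidth-delay product of a single 1 Gbps flow at typical RTTs. A quick illustration (the 50 ms RTT is an assumed example, not a measured value):

```shell
# Bandwidth-delay product: bytes in flight = bandwidth * RTT.
awk 'BEGIN {
  gbps = 1; rtt_ms = 50                   # assumed link speed and RTT
  bytes = gbps * 1e9 / 8 * rtt_ms / 1000  # bits/s -> bytes, ms -> s
  printf "BDP ~ %.0f bytes (~%.1f MB)\n", bytes, bytes / 1048576
}'
```

So the 16 MB socket-buffer cap only becomes the limiting factor beyond roughly 130 ms RTT at 1 Gbps per flow; at lower RTTs these settings are not the bottleneck.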

Here is the bootloader.conf :

zpool_cache_load="YES"
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"
aio_load="YES"
zfs_load="YES"
ipmi_load="YES"

Here is the nginx.conf :

user  www www;
worker_processes 48;
worker_rlimit_nofile 90; #2 filehandlers for each connection
error_log  /var/log/nginx-error.log  error;
#pid        logs/nginx.pid;

events {
worker_connections  10240;
multi_accept on;

}
http {
 include   mime.types;
 default_type  application/octet-stream;
 add_header X-Content-Type-Options nosniff;
 add_header X-XSS-Protection "1; mode=block";
 client_max_body_size 4096M;
 client_body_buffer_size 800M;
 output_buffers 1 512k;
 sendfile_max_chunk 128k;


 fastcgi_connect_timeout 30;
 fastcgi_send_timeout 30;
 fastcgi_read_timeout 30;
 proxy_read_timeout 30;
 fastcgi_buffer_size 64k;
 fastcgi_buffers 16 64k;
 fastcgi_temp_file_write_size 256k;


 server_tokens off; #Conceals nginx version
 access_log off;
 sendfile off;
 tcp_nodelay on;
 aio on;
 client_header_timeout  30s;
 client_body_timeout 30s;
 send_timeout 30s;
 keepalive_timeout  15s;
 ssl_session_cache   shared:SSL:10m;
 ssl_session_timeout 10m;
gzip off;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
gzip_proxied any;
gzip_http_version 1.0;
gzip_min_length  1280;
gzip_comp_level  6;
gzip_buffers  16 8k;
gzip_types text/plain text/xml text/css application/x-javascript
image/png image/x-icon image/gif image/jpeg image/jpg application/xml
application/xml+rss text/javascript application/atom+xml;
 include /usr/local/etc/nginx/vhosts/*.conf;
}

Here is the vhost :

server {
 listen  80  sndbuf=16k;
  server_name  cw005.files.com cw005.domain.com
www.cw005.files.com www.cw005.domain.com cw005.domain.net
www.cw005.domain.net;
location / {
root   /files;
index index.html index.htm index.php;
   autoindex off;
}
   location ~ \.(jpg)$ {
        sendfile on;
        tcp_nopush on;
        aio off;
root /files;
try_files $uri /thumbs.php;
expires 1y;
 }

location ~* \.(js|css|png|gif|ico)$ {
root /files;
expires 1y;

Re: Nginx + Php-fpm Ownership issue !!

2015-12-30 Thread shahzaib shahzaib
Thanks for the reply. However, our developer just notified us that the
root-owned directory was created by a cron job running as root, which caused
the issue. I've also slightly modified the nginx user directive as follows:

Before:
user www ;

After:
user www www;

Regards.
Shahzaib
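Editor's note: since the root-owned directory came from a cron job running as root, a cleaner long-term fix (the schedule and script path below are illustrative, not from the thread) is to run that job as the www user via the user field in /etc/crontab:

```
# /etc/crontab -- field 6 names the user the command runs as, so files the
# job creates come out owned www:www (script path is hypothetical)
0  *  *  *  *  www  /usr/local/bin/rotate_logs.sh
```

Directories already created as root:www would still need a one-off ownership correction (e.g. chown -R www:www on the affected tree).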

On Wed, Dec 30, 2015 at 12:59 PM, wishmaster  wrote:

> Hi,
> >
> > Hi,
> >
> >
> > We've installed nginx + php-fpm on FreeBSD OS and both of them are
> listening on www user / group. Here is the config :
> >
> > NGINX :
> >
> > user www ;
> >
> >
> >
> > PHP-FPM :
> >
> >
> > [www]
> > listen = /var/run/www.socket
> > user = www
> > group = www
> > listen.owner = www
> > listen.group = www
> > listen.allowed_clients = 127.0.0.1
> >
>
> I think you have a mistake in the ownership settings.
> Below my config for nginx + php-fpm bundle..
>
>  MY pool for Joomla CMS ;;
>
> [joomla1]
>
> user = www-joomla1
> group = www-joomla1
> listen = /var/run/php-fpm-joomla1.sock
>
> listen.owner = www-joomla1
> listen.group = www
> listen.mode = 0660
>
>
>
>
>

Nginx + Php-fpm Ownership issue !!

2015-12-29 Thread shahzaib shahzaib
Hi,

We've installed nginx + php-fpm on FreeBSD OS and both of them are
listening on www user / group. Here is the config :

NGINX :

user  www ;

PHP-FPM :

[www]
listen = /var/run/www.socket
user = www
group = www
listen.owner = www
listen.group = www
listen.allowed_clients = 127.0.0.1



According to these configs, any file or directory nginx/php-fpm creates in
the webroot should now be owned www:www, but that's not happening: newly
uploaded files are being created with root:www ownership, due to which most
uploads fail. Here is the permission failure on
(/videos/files/logs/2015/12/30/full-145145901836a71.log):

2015/12/30 12:03:49 [error] 976#0: *1502344 FastCGI sent in stderr: "PHP
message: PHP Warning:
 file_put_contents(/videos/files/logs/2015/12/30/full-145145901836a71.log):
failed to open stream: Permission denied in /videos/functions.php on line
759" while reading response header from upstream, client: 5.254.102.94,
server: cw005.videos.com, request: "POST /actions/file_uploader.php
HTTP/1.1", upstream: "fastcgi://unix:/var/run/www.socket:", host: "
cw005.videos.com", referrer: "http://domain.com/upload";

---

Now if I check the ownership of the newly created
"/videos/files/logs/2015/12/30/" directory, it is as follows:

root:www /videos/files/logs/2015/12/30/

It should have been www:www. Where are we going wrong?

Please help.

Regards.
Shahzaib

Slow mp4 buffer over SSL !!

2015-12-27 Thread shahzaib shahzaib
Hi,

We've recently shifted our static content to SSL and found that mp4 streaming
is drastically slower over SSL (around 90KB/s on a 4Mbps connection), while
the same video over HTTP gives us the full 400+KB/s speed. Here is the SSL
config:

server {
listen  443 spdy;
ssl on;
server_name cw004.domain.net www.cw004.domain.net;
  ssl_certificate /etc/ssl/certs/domain/domain-combined.crt;
  ssl_certificate_key /etc/ssl/certs/domain/domain.key;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers
'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4';
  ssl_prefer_server_ciphers on;
location / {
root   /videos;
index index.html index.htm index.php;

}
location ~ \.(flv)$ {
flv;
root /videos;
expires 7d;
include hotlink.inc;
}


include thumbs.inc;

#location ~ \.(jpg)$ {
#root /videos;
#try_files $uri /files/thumbs/no_thumb.jpg;
#}


location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires 1y;
log_not_found off;
}

location ~ \.(mp4)$ {
mp4;
 mp4_buffer_size 4M;
mp4_max_buffer_size 10M;
expires 1y;
add_header Cache-Control "public";
root /videos;
include hotlink.inc;
}

 # pass the PHP scripts to FastCGI server listening on
unix:/var/run/www.socket
location ~ \.php$ {
root /videos;
fastcgi_pass   unix:/var/run/www.socket;
   fastcgi_index  index.php;
fastcgi_param  SCRIPT_FILENAME
 $document_root$fastcgi_script_name;
includefastcgi_params;
}

location ~ /\.ht {
deny  all;
}
}

-

Is there an optimization we're missing for SSL?

Thanks.
Shahzaib
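Editor's note: one server-side knob worth benchmarking here (my suggestion, not something from the thread) is ssl_buffer_size, which caps how much data nginx feeds into each TLS record write; it is also worth verifying that the negotiated cipher is hardware-accelerated (AES-NI) on this box. A sketch:

```nginx
server {
    listen 443 spdy;
    ssl on;
    # Default is 16k; a smaller TLS write buffer is sometimes friendlier to
    # long streaming transfers. The 8k value is illustrative -- benchmark it.
    ssl_buffer_size 8k;
    # ... existing ssl_certificate / ssl_ciphers directives unchanged ...
}
```

ssl_buffer_size has been available since nginx 1.5.9, so it applies to the versions discussed in these threads.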

SSL handshake issue !!

2015-12-03 Thread shahzaib shahzaib
Hi,

  We've been encountering this issue quite frequently. It looks like it is
the reason for our drop in traffic as well.

 2015/12/03 16:19:18 [crit] 26272#0: *176263213 SSL_do_handshake() failed
(SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate
fallback) while SSL handshaking, client: 43.245.8.217, server: 10.0.0.52:443

Any help on this, guys? So far I am unable to find a fix for it.

Regards.
Shahzaib

Re: 400 Error on % !!

2015-11-23 Thread shahzaib shahzaib
Hi,

  Thanks for the reply. Now we have tons of these URLs that are not properly
encoded. Can we redirect a '%' request to the same URL with the '%' stripped?
Such as:

http://domain.com/video/100%-working

to

http://domain.com/video/100-working

Regards.
Shahzaib
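Editor's note: as the 400 here suggests, nginx rejects the invalid escape while parsing the request line, before rewrites run, so the underlying fix is in URL generation: a literal '%' must itself be percent-encoded as '%25'. For example:

```shell
# The slug "100%-working" is only a valid URL path once the percent sign
# is escaped ("%%" is printf's escape for a literal "%"):
printf '100%%25-working\n'   # -> 100%25-working
```

A browser or client then decodes "%25" back to "%" when displaying the path.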

On Mon, Nov 23, 2015 at 1:20 PM, nanaya  wrote:

>
>
> On Mon, Nov 23, 2015, at 05:17 PM, shahzaib shahzaib wrote:
> > Hi,
> >
> > We've encountered a 400 Bad Request error on an nginx reverse proxy in
> > front of Apache. Here is the attached link:
> >
> > http://prntscr.com/95wlsl
> >
> > If we remove '%' from the URL, it works fine. What could be the issue ?
> >
>
> `%-` isn't a valid percent-encoding (and thus a bad request). Try
> encoding the url properly.
>

400 Error on % !!

2015-11-23 Thread shahzaib shahzaib
Hi,

   We've encountered a 400 Bad Request error on an nginx reverse proxy in
front of Apache. Here is the attached link:

http://prntscr.com/95wlsl

If we remove '%' from the URL, it works fine. What could be the issue ?

Regards.
Shahzaib



Re: Redirect request based on source $scheme !!

2015-11-11 Thread shahzaib shahzaib
>>If you want to catch HTTP requests separately from HTTPS requests, make 2
server blocks, as Avraham suggested. You can then redirect all requests
coming to the HTTP block to the same URI with the HTTPS scheme (301, 302,
303, 307... have it your way!). There is no constraint regarding HTTP pages
loading HTTPS content (whereas the reverse is restricted).

B.R., yes, we've decided to go with this strategy as well. I think that
should do the job; we just overlooked it and made things more complex than
necessary.

Thanks for the help and suggestions, guys; I'll update with the results. :)

Regards.
Shahzaib
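Editor's note: the unreliable Referer-based detection discussed in this thread could be sketched roughly like this. It is a heuristic only: the Referer header is client-controlled and often absent, as Francis points out later in the thread, and the names here are illustrative:

```nginx
# http-context sketch: guess the embedding page's scheme from the
# (untrusted) Referer header.
map $http_referer $from_https_page {
    default        0;
    "~^https://"   1;
}

server {
    listen      80;
    server_name ourdomain.com;

    # Send embeds that appear to come from HTTPS pages to the HTTPS
    # mirror of the same URI; plain HTTP embeds keep working as before.
    if ($from_https_page) {
        return 302 https://$host$request_uri;
    }
}
```

The map block must sit at http level; the redirect is a 302 so it can be retired once the embedded links are fixed at the source.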

On Wed, Nov 11, 2015 at 7:13 PM, B.R.  wrote:

> Maybe I am seeing things oversimplified here, but it seems to me the
> question does not require all the effort everyone is amazingly putting in.
>
> If you want to catch HTTP requests separately from HTTPS requests, make 2
> server blocks, as Avraham suggested.
> You can then redirect all requests coming to the HTTP block to the same
> URI with the HTTPS scheme (301, 302, 303, 307... have it your way!). There
> is no constraint regarding HTTP pages loading HTTPS content (whereas the
> reverse is restricted).
> I am making the assumption here that the request to an HTTP resource in the
> HTTPS page will be made. If the browser prevents it because it is seen as
> insecure, there is little to nothing you can do about it: a stupid webpage
> cannot be fixed on the remote side.
>
> Of course, the client browser embedding the content will need to be clever
> enough to follow redirects on included resources, which is I think the case
> of any standard use-case.
>
> ​Am I missing something there?​
> ---
> *B. R.*
>
> On Wed, Nov 11, 2015 at 2:32 PM, Francis Daly  wrote:
>
>> On Wed, Nov 11, 2015 at 03:25:11PM +0500, shahzaib shahzaib wrote:
>>
>> Hi there,
>>
> > Actually, we have a video sharing website from which people embed
> > http/https links on their websites. Now the problem is, some of the HTTPS
> > websites have embedded HTTP URL links from our website instead of HTTPS,
> > due to which the code is unable to execute on their HTTPS website because
> > it is making a call from https -> http, which is wrong.
>>
>> Before you put too much time into building the solution, can you do a
>> quick test to confirm that it can work?
>>
>> As in:
>>
>> * on a https site, include a link to http on your server to one particular
>> url that you control.
>> * in your config, redirect that one url to something https on your site
>> * for that https request, return the response that you want
>>
>> When you do that -- does it work?
>>
>> As in: do you know that the client (browser) that you care about, will
>> access your http url and accept the https redirection and then make use
>> of the code that you return over that https link?
>>
>> Because if that does not work, then it does not matter what else you do.
>>
> > So we're thinking of having some condition in place such that if the
> > request for an HTTP embedded link comes from an HTTPS domain, nginx will
> > detect that source $scheme and redirect the request to HTTPS.
>>
>> You cannot reliably detect where the link came from.
>>
>> If you are willing to accept unreliably detecting where the link came
>> from, knowing that some innocent cases and some malicious cases will be
>> handled wrongly, then you can just examine $http_referer.
>>
>> If it starts with "https://";, then probably the link was on a https site.
>>
>> If it starts with "http://";, then probably the link was on a http site.
>>
>> If it is blank, then probably the link was on a https site and it is
>> accessing your http site.
>>
>> Each "probably" is because the Referer header is set to whatever the
>> browser wants. Some browsers lie. Some browsers omit it always. Some
>> browsers set it to a non-default value because that's what the user
>> configured it to do. Other possibilities exist.
>>
>> f
>> --
>> Francis Daly        fran...@daoine.org
>>
>
>

Re: Redirect request based on source $scheme !!

2015-11-11 Thread shahzaib shahzaib
>>if you don't own the domain then you won't ever receive the request and
you can't do anything about it
We don't own ad.domain.com, but that domain sends http/https requests to our
domain 'ourdomain.com'. We just need to find out the $scheme they use to send
requests, such as:

Is the request coming from http://ad.domain.com, or is it coming from
https://ad.domain.com?

On Wed, Nov 11, 2015 at 4:02 PM, Avraham Serour  wrote:

> if you don't own the domain then you won't ever receive the request and
> you can't do anything about it
>
> On Wed, Nov 11, 2015 at 1:01 PM, shahzaib shahzaib 
> wrote:
>
>> One point worth mentioning: we don't own ad.domain.com; it's a 3rd-party
>> website. All we can control is ourdomain.com.
>>
>> On Wed, Nov 11, 2015 at 3:25 PM, shahzaib shahzaib > > wrote:
>>
>>> >>you can create separate server blocks for each domain
>>> I think the issue will still persist. Say https://ad.domain.com makes a
>>> static call to http://ourdomain.com , it'll end up with a conflicting
>>> scheme, i.e. https -> http. We can't force http to https either, because
>>> it'll break static calls from http -> http.
>>>
>>> Actually, we have a video sharing website from which people embed
>>> http/https links on their websites. Now the problem is, some of the HTTPS
>>> websites have embedded HTTP URL links from our website instead of HTTPS,
>>> due to which the code is unable to execute on their HTTPS website because
>>> it is making a call from https -> http, which is wrong. The number of
>>> these malformed links is huge, and there's no way those users can
>>> manually correct the embedded links by editing http to https and vice
>>> versa.
>>>
>>> So we're thinking of having some condition in place such that if the
>>> request for an HTTP embedded link comes from an HTTPS domain, nginx will
>>> detect that source $scheme and redirect the request to HTTPS.
>>>
>>> On Wed, Nov 11, 2015 at 3:10 PM, Avraham Serour 
>>> wrote:
>>>
>>>> you can create separate server blocks for each domain
>>>>
>>>> On Wed, Nov 11, 2015 at 11:36 AM, shahzaib shahzaib <
>>>> shahzaib...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>>Is there a way we can serve $scheme (HTTP/HTTPS) based on source
>>>>> request ? Such as :
>>>>>
>>>>> if https://ad.domain.com --> sends request to http://ourdomain.com
>>>>> (as it'll fail due to cross  $scheme conflict)
>>>>>
>>>>> So http://ourdomain.com will check that the request was invoked using
>>>>> the https $scheme, and it'll redirect http://ourdomain.com to
>>>>> https://ourdomain.com for that particular ad.domain.com.
>>>>>
>>>>> -
>>>>>
>>>>> Is that possible guys ?
>>>>>
>>>>> Thanks in Advance !
>>>>>
>>>>> Regards.
>>>>> Shahzaib
>>>>>
>>>>>
>>>>> Need to send me private email? I use Virtru
>>>>> <https://www.virtru.com/how-it-works/?utm_medium=Email&utm_source=Plugin%20Users&utm_campaign=Footer>.
>>>>>
>>>>>
>>>>> ___
>>>>> nginx mailing list
>>>>> nginx@nginx.org
>>>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>>>
>>>>
>>>>
>>>> ___
>>>> nginx mailing list
>>>> nginx@nginx.org
>>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>>
>>>
>>>
>>
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Redirect request based on source $scheme !!

2015-11-11 Thread shahzaib shahzaib
One point is worth mentioning: we don't own ad.domain.com, it's a
third-party website. All we can control is ourdomain.com.

On Wed, Nov 11, 2015 at 3:25 PM, shahzaib shahzaib 
wrote:


Re: Redirect request based on source $scheme !!

2015-11-11 Thread shahzaib shahzaib
>>you can create separate server blocks for each domain
I think the issue will still persist. Say https://ad.domain.com makes a
static call to http://ourdomain.com; it will end up with a conflicting
scheme, i.e. https -> http. We can't force http to https across the board
either, because that would break static calls from http -> http.

Actually, we have a video sharing website from which people embed HTTP/HTTPS
links into their own websites. The problem is that some of the HTTPS
websites have embedded HTTP links from our website instead of HTTPS, so the
code fails to execute on their HTTPS pages because it makes a call from
https -> http, which browsers block as mixed content. The number of these
malformed links is huge, and there's no way those users can manually correct
the embedded links by editing http to https (and vice versa).

So we're thinking of having some condition in place: if the request for an
HTTP embedded link comes from an HTTPS domain, nginx will detect the source
$scheme and redirect that request to HTTPS.
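
For what it's worth, the only signal nginx has about the embedding page's
scheme is the optional Referer header, so a sketch of this idea (server
names and paths are placeholders, and the header is often absent or
stripped) might look like:

```nginx
# http{} context. Hypothetical sketch: redirect plain-HTTP requests to
# HTTPS when the Referer suggests the embedding page was served over HTTPS.
# The Referer header is optional, so this is a best-effort heuristic only.
map $http_referer $embedded_from_https {
    default      0;
    "~^https://" 1;
}

server {
    listen 80;
    server_name ourdomain.com;

    location / {
        # Redirect only requests that appear to come from an HTTPS page.
        if ($embedded_from_https) {
            return 301 https://$host$request_uri;
        }
        root /var/www/ourdomain.com;
    }
}
```

Requests with no Referer (direct visits, privacy-stripped headers) fall
through and are served over plain HTTP as before.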

On Wed, Nov 11, 2015 at 3:10 PM, Avraham Serour  wrote:

> you can create separate server blocks for each domain

Redirect request based on source $scheme !!

2015-11-11 Thread shahzaib shahzaib
Hi,

   Is there a way we can serve $scheme (HTTP/HTTPS) based on the source of
the request? Such as:

if https://ad.domain.com --> sends a request to http://ourdomain.com (which
will fail due to the cross-$scheme conflict)

So http://ourdomain.com would check that the request was invoked from the
https $scheme and redirect http://ourdomain.com to https://ourdomain.com
for that particular ad.domain.com.

-

Is that possible, guys?

Thanks in advance!

Regards.
Shahzaib



Re: DocumentRoot should end up on specific file !

2015-08-31 Thread shahzaib shahzaib
Hi,

  Thanks, looking into it.

Regards.
Shahzaib

On Mon, Aug 31, 2015 at 7:33 PM, Dewangga Bachrul Alam <
dewangg...@xtremenitro.org> wrote:

> Hello!
>
> On 08/31/2015 09:29 PM, shahzaib shahzaib wrote:
> > Hi,
> >
> >   We want nginx vhost to access the file audio_portal.php without
> > specifying it, i.e. instead of using
> > URL http://domain.com/audio_portal.php , can we access it with
> > http://domain.com ? So it'll directly access audio_portal.php just like
> > index.php ?
>
> Yes you can use index on http, location, and server. See
> http://nginx.org/en/docs/http/ngx_http_index_module.html
>
> >
> > Regards.
> > Shahzaib
> >

DocumentRoot should end up on specific file !

2015-08-31 Thread shahzaib shahzaib
Hi,

  We want the nginx vhost to serve the file audio_portal.php without
specifying it, i.e. instead of using the URL
http://domain.com/audio_portal.php , can we access it with
http://domain.com ? So it will directly serve audio_portal.php just like
index.php ?
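
As Dewangga's reply in this thread points out, the index directive covers
this; a minimal sketch (root and FastCGI socket paths are hypothetical):

```nginx
server {
    listen 80;
    server_name domain.com;
    root /var/www/domain.com;

    # Requests for http://domain.com/ are served by audio_portal.php,
    # falling back to index.php if it is absent.
    index audio_portal.php index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;  # hypothetical socket
    }
}
```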

Regards.
Shahzaib

Re: Redirect on specific threshold !!

2015-08-29 Thread shahzaib shahzaib
Hi,

  Sorry for getting back to this thread after a long time. First of all,
thanks to all for the suggestions. I have also checked the rate-limit
(limit_req) module; should that work as well, or does it have to be
limit_conn (parsing the error_log and constructing the redirect URL)?

P.S.: Actually, it looks like limit_conn would require recompiling nginx,
as it is not included in the default yum nginx package, so I tried the
rate-limit module, which is built in.

http://greenroom.com.my/blog/2014/10/rate_limit-with-nginx-on-ubuntu/
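
For reference, the limit_req/error_page combination suggested in the quoted
replies could be sketched roughly as below. Note that limit_req expresses a
rate rather than a historical hit count, so the 1K threshold maps only
loosely onto rate/burst; all names and numbers here are illustrative:

```nginx
http {
    # Track the request rate per URI; requests beyond the burst are rejected.
    limit_req_zone $uri zone=hotfiles:10m rate=10r/s;

    server {
        listen 80;
        server_name storage.domain.com;

        location ~ \.mp4$ {
            root /data/videos;
            limit_req zone=hotfiles burst=1000 nodelay;
            # limit_req rejects excess requests with 503 by default;
            # turn that into a redirect to the caching node.
            error_page 503 =302 http://cache.domain.com$request_uri;
        }
    }
}
```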

Regards.
Shahzaib

On Wed, Jun 17, 2015 at 2:30 AM, Francis Daly  wrote:

> On Mon, Jun 15, 2015 at 01:45:42PM +0300, Valentin V. Bartenev wrote:
> > On Sunday 14 June 2015 22:12:37 shahzaib shahzaib wrote:
>
> Hi there,
>
> > > If there are exceeding 1K requests for
> http://storage.domain.com/test.mp4 ,
> > > nginx should  construct a Redirect URL for rest of the requests
> related to
> > > test.mp4 i.e http://cache.domain.com/test.mp4 and entertain the rest
> of
> > > requests for test.mp4 from Caching Node while long tail would still be
> > > served from storage.
>
> > You can use limit_conn and limit_req modules to set limits:
> > http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
> > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
> >
> > and the error_page directive to construct the redirect.
>
> limit_conn and limit_req are the right answer if you care about concurrent
> requests.
>
> (For example: rate=1r/m with burst=1000 might do most of what you want,
> without too much work on your part.)
>
> I think you might care about historical requests, instead -- so if a
> url is ever accessed 1K times, then it is "popular" and future requests
> should be redirected.
>
> To do that, you probably will find it simpler to do it outside of nginx,
> at least initially.
>
> Have something read the recent-enough log files[*], and whenever there are
> more than 1K requests for the same resource, add a fragment like
>
>   location = /test.mp4 { return 301 http://cache.domain.com/test.mp4; }
>
> to nginx.conf (and remove similar fragments that are no longer currently
> popular-enough, if appropriate), and do a no-downtime config reload.
>
> You can probably come up with a module or a code config that does the
> same thing, but I think it would take me longer to do that.
>
>
> [*] or accesses the statistics by a method of your choice
>
> f
> --
> Francis Daly        fran...@daoine.org
>

Re: Free O’Reilly animal book about nginx

2015-07-30 Thread shahzaib shahzaib
Whoops,

http://prntscr.com/7yzccl

That's great btw :)

On Fri, Jul 31, 2015 at 1:40 AM, Sarah Novotny  wrote:

>
> On Jul 30, 2015, at 1:12 PM, Igal @ Lucee.org  wrote:
>
> this is awesome news!  congrats!!!
>
> the download link is broken though (404).
>
>
> It’s been fixed :)
>
> sarah
>
>

Re: Nginx support for WeedFS !!

2015-07-02 Thread shahzaib shahzaib
Guys, any advice?

On Thu, Jun 25, 2015 at 4:17 AM, shahzaib shahzaib 
wrote:


Re: Can thread pool improve performance for such scenario

2015-06-30 Thread shahzaib shahzaib
Hi,

  This is what they have to say about threads :

Offloading read operations to the thread pool is a technique applicable to
very specific tasks. It is most useful where the volume of frequently
requested content doesn’t fit into the operating system’s VM cache. This
might be the case with, for instance, a heavily loaded NGINX-based
streaming media server. This is the situation we’ve simulated in our
benchmark.

---

So if you have storage that doesn't fit into RAM, such as 2TB of storage
with 32GB of RAM, threads could be useful. Otherwise, nginx is already very
robust at serving concurrent requests for standard content such as jpeg,
css, html, and many more.

You can read more about threads in following link :

http://nginx.com/blog/thread-pools-boost-performance-9x/
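
Concretely, the setup described in that article boils down to a named
thread pool plus aio threads (pool sizes and paths below are illustrative):

```nginx
# Main (top-level) context: a pool of threads for blocking disk reads.
thread_pool iopool threads=32 max_queue=65536;

http {
    server {
        listen 80;

        location /videos/ {
            root /data;
            sendfile off;
            # Offload blocking reads to the pool so worker processes are
            # not stalled by cold-cache I/O on large files.
            aio threads=iopool;
            directio 8m;
        }
    }
}
```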

Regards.
Shahzaib

On Tue, Jun 30, 2015 at 11:02 AM, Jerry OELoo  wrote:

> Hi All:
> I am using Nginx as a reverse proxy which provide a web API (HTTP GET
> ) to client.
> and the backend application will get request from nginx and do some
> time-consuming processing (1-2 seconds) then response result to nginx,
> Nginx return result to client.
> I think this is synchronize operation.
>
> As I know, Nginx import thread pools feature, so is it useful for my
> scenario and improve performance?
>
> --
> Rejoice,I Desire!
>

Nginx-1.9.2 fatal code 2 !!

2015-06-29 Thread shahzaib shahzaib
Hi,

  We've just compiled the latest nginx-1.9.2 on Debian Wheezy 7 in order to
use the aio threads directive for our storage, but nginx started to crash
once we enabled aio threads. Following are the compile options and the log
of the crash:

root@archive3:/usr/local/nginx/conf/vhosts# nginx -V
nginx version: nginx/1.9.2
built by gcc 4.7.2 (Debian 4.7.2-5)
configure arguments: --sbin-path=/usr/local/sbin/nginx
--with-http_flv_module --with-http_mp4_module --with-threads --with-stream
--with-debug

error_log :

2015/06/30 04:14:07 [alert] 32076#32076: worker process 11097 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:03 [alert] 32079#32079: pthread_create() failed (11:
Resource temporarily unavailable)
2015/06/30 04:14:07 [alert] 32076#32076: worker process 17232 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 18584 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 595 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 32121 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 7557 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 16852 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 32083 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 5933 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 32079 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:03 [alert] 25360#25360: pthread_create() failed (11:
Resource temporarily unavailable)
2015/06/30 04:14:03 [alert] 18540#18540: pthread_create() failed (11:
Resource temporarily unavailable)
2015/06/30 04:14:03 [alert] 11093#11093: pthread_create() failed (11:
Resource temporarily unavailable)
2015/06/30 04:14:03 [alert] 23953#23953: pthread_create() failed (11:
Resource temporarily unavailable)
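
Those pthread_create() failures (EAGAIN) mean the OS refused to create more
threads, which can happen when many workers each spawn the default
32-thread pool. One mitigation, with illustrative numbers, is to size the
pool explicitly so worker_processes x threads stays well under the per-user
thread limit (ulimit -u):

```nginx
# Main context: shrink the default pool used by "aio threads;".
# With 48 workers, 8 threads each is 384 threads instead of 1536.
thread_pool default threads=8 max_queue=65536;

http {
    server {
        listen 80;
        location / {
            root /data;
            sendfile off;
            aio threads;   # uses the "default" pool sized above
        }
    }
}
```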

Thanks in advance.

Regards.
Shahzaib

Nginx support for WeedFS !!

2015-06-24 Thread shahzaib shahzaib
Hi,

   We're deploying the WeedFS distributed filesystem for thumbnail storage
and scalability. WeedFS is composed of two layers (master, volume). The
master server does all the metadata mapping to track the corresponding
volume server for a user-requested file, whereas the volume server is the
actual storage that serves those files back to the user via HTTP. Currently
the WeedFS default web server is used for HTTP, but it would be better to
have nginx on the volume servers for its low footprint, stability, and
robust response times for static .jpg files.

So we need to know whether we can use nginx with WeedFS. The following is
the GitHub project we found, but we need to confirm whether it will fulfill
our needs:

https://github.com/medcl/lua-resty-weedfs

Thanks in advance.

Regards.
Shahzaib

Re: Redirect on specific threshold !!

2015-06-15 Thread shahzaib shahzaib
Hi,

   Thanks for the help, guys. Regarding @ryd994's suggestion: the reason we
don't want to deploy this structure is that the caching node would have to
respond to every client request, and even if it only proxies most requests
(without caching them), high I/O would still be required to serve a big
proxied request (a 700MB mp4) to the user. That way the caching node would
eventually become the bottleneck between user and storage node, wouldn't it?

@steve, thanks for the tmpfs point, but we're using a caching node with
1TB+ of SSD storage and will prefer an SSD cache over RAM (RAM is faster
but not as big as SSD).

Using a redirect URL, we believe, would point only specific requests
towards the caching node, and then this node would fetch the requested file
using proxy_cache.

Regards.
Shahzaib.



On Mon, Jun 15, 2015 at 9:13 AM, ryd994  wrote:

> Does a nginx reverse proxy with cache fit you need?
>
> Client -> Caching server (with SSD and nginx proxy cache configured) ->
> Storage server(s) (Slow)
>
> You can add even more storage server by utilizing nginx upstream module.
>

Redirect on specific threshold !!

2015-06-14 Thread shahzaib shahzaib
Hi,

   We're using nginx to serve videos on one of our storage servers
(containing mp4 videos), and due to the high number of requests we're
planning to add a separate caching node based on fast SSD drives to serve
"hot" content, in order to reduce load on the storage. We're planning the
following method for caching:

If requests for http://storage.domain.com/test.mp4 exceed 1K, nginx should
construct a redirect URL for the remaining requests for test.mp4, i.e.
http://cache.domain.com/test.mp4, and serve the rest of the requests for
test.mp4 from the caching node, while the long tail would still be served
from storage.

So, can we achieve this approach with nginx, or with something else like
Varnish?

Thanks in advance.

Regards.
Shahzaib

Re: Static files bad loading time

2015-05-08 Thread shahzaib shahzaib
Right, thanks.

Btw, we used another official nginx doc for optimization, and the most
effective tweak was raising the backlog from the default 512 to 4096 on
the nginx listen directive.

http://nginx.com/blog/tuning-nginx/
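
For anyone searching later, that tweak is applied on the listen directive;
the kernel's net.core.somaxconn must be at least as large for the value to
take effect (server name and root are placeholders):

```nginx
server {
    # Raise the accept queue for this listening socket to 4096 entries.
    listen 80 backlog=4096;
    server_name domain.com;
    root /var/www/html;
}
```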

Regards.
Shahzaib

On Fri, May 8, 2015 at 6:18 PM, Valentin V. Bartenev 
wrote:

> On Friday 08 May 2015 18:05:51 shahzaib shahzaib wrote:
> > Well, reducing keepalive_timeout and increasing the values of
> > worker_connections resolved our issue. Following is the reference we used
> > to tweak nginx config :
> >
> >
> http://blog.martinfjordvald.com/2011/04/optimizing-nginx-for-high-traffic-loads/
> >
>
> This reference is quite inaccurate.  Don't trust arbitrary articles in the
> internet.
>
> See here for the detailed explanation:
> http://mailman.nginx.org/pipermail/nginx/2015-May/047460.html
>
>   wbr, Valentin V. Bartenev
>

Re: Nginx gets halt on 15K connections !!

2015-05-08 Thread shahzaib shahzaib
Hi Valentine,


>>What's really important is the connections that nginx cannot close.  The
active ones.
How long the connection is active depends on the request processing time.


  Thanks for pointing that out to me. Nginx is serving around 800Mb of mp4
files, but the problem is that we're unable to track the request processing
time. Could you please suggest a method/command to find out how long a
connection remains active while its request is being processed?

Though one thing is for sure: increasing worker_connections resolved our
problem. The current connection setting is quite high, but it is working
well to entertain a large number of connections with 900Mbps of outbound
traffic. Here are our worker and connection settings:


worker_processes 48;
worker_connections 102400;

Regards.
Shahzaib

On Fri, May 8, 2015 at 6:05 PM, Valentin V. Bartenev 
wrote:

> On Thursday 07 May 2015 18:38:23 shahzaib shahzaib wrote:
> > Hi,
> >
> >   It looks like we made the false calculation based on entertaining
> > concurrent connections per seconds and worker_connections limit was set
> to
> > be very low. I've increased this limit to 16000 and issue looks to be
> > fixed. Here's the mechanism i used to calculate concurrent
> connections/sec:
> >
> >  worker_processes * worker_connections / keepalive_timeout == concurrent
> > connections per second
> >
> > Concurrent connections on our server is around 15K . Based on this i used
> > the following values :
> >
> >  48 * 16000 / 15 == 51200/sec
> >
> > Can somebody point to me if the calculation method is false ?
> >
> [..]
>
> It's false.
>
> The keepalive_timeout has nothing to do with the concurrent connections
> per second.
> In fact, nginx can close an idle connection at any time when it reaches
> the limit
> of worker_connections.
>
> What's really important is the connections that nginx cannot close.  The
> active ones.
> How long the connection is active depends on the request processing time.
>
> The approximate calculation looks like this:
>
>  worker_processes * worker_connections * K / average $request_time
>
> where K is the average number of connections per request (for example, if
> you do proxy
> pass, then nginx needs additional connection to your backend).
>
>   wbr, Valentin V. Bartenev
>

Re: Static files bad loading time

2015-05-08 Thread shahzaib shahzaib
Well, reducing keepalive_timeout and increasing worker_connections resolved
our issue. The following is the reference we used to tweak the nginx
config:

http://blog.martinfjordvald.com/2011/04/optimizing-nginx-for-high-traffic-loads/

Thanks.
Shahzaib

On Fri, May 8, 2015 at 4:42 PM, Valentin V. Bartenev 
wrote:

> On Thursday 07 May 2015 23:27:44 shahzaib shahzaib wrote:
>
> The keepalive_timeout has nothing to do with the maximum number of
> concurrent connections per second.
>
>   wbr, Valentin V. Bartenev
>

Re: Static files bad loading time

2015-05-07 Thread shahzaib shahzaib
Hi,

   There are some tweaks required in your nginx configuration. If the same
image that usually takes a second to respond can take up to 10-20 seconds
to load, the obvious guess would be that concurrent connections are
exceeded at peak traffic. The worker_rlimit_nofile value is set much lower
than worker_connections. Nginx uses up to 2 file descriptors per
connection, so I would suggest increasing the worker_rlimit_nofile value to
124000.

Also, the default keepalive_timeout value is 65 sec, due to which your
current nginx configuration is not optimized to serve more than 2000
concurrent connections. Here's how:

(worker_processes) 4 * 32768 (worker_connections) / 65 (keepalive_timeout)
== 2016 connections per second.

So I would suggest decreasing keepalive_timeout to 5 sec and increasing
worker_connections to 6.

Also make sure to decrease timeout values.
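
Assembled into a config fragment, the suggestions above would look roughly
like this (the worker_connections value is illustrative, since the number
in the message is truncated, and the lower timeouts are my guesses at
reasonable values):

```nginx
worker_processes 4;
# Nginx can use up to 2 file descriptors per connection, so the rlimit
# should comfortably exceed 2 x worker_connections.
worker_rlimit_nofile 124000;

events {
    worker_connections 60000;
}

http {
    # Short keepalive so idle connections release their slots quickly.
    keepalive_timeout 5;
    # Lower the other timeouts as suggested (values illustrative).
    client_header_timeout 30s;
    client_body_timeout 30s;
    send_timeout 30s;
}
```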

Regards.
Shahzaib



On Thu, May 7, 2015 at 10:56 PM, grigory  wrote:

> Hi Francis,
>
> > Can you tell from nginx logs whether the slowness is due to
> > slow-read-from-disk, or slow-write-to-client, or something else?
>
> Could you please tell me how to check this out?
> My nginx logs do not contain this sort of information.
>
> > Can you find any pattern in the requests which respond more slowly than
> > you want? Certain browsers, certain times of day, anything like that?
>
> Unfortunately, I didn't find any pattern. It's just sometimes loads in 2
> seconds and in another time -- in 10-15 seconds. I mean same 300KB image
> within a couple of refreshes in a browser. I've tested the problem on
> different browsers and different times of day -- no luck.
>
> > If you make the request from the machine itself, so network issues should
> > be minor, does it still show sometimes being slow?
>
> When I make request from machine itself, the image loads pretty fast.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,258372,258730#msg-258730
>

Re: Nginx gets halt on 15K connections !!

2015-05-07 Thread shahzaib shahzaib
Hi,

  It looks like we made a false calculation for the concurrent connections
per second we need to entertain, and the worker_connections limit was set
very low. I've increased this limit to 16000 and the issue looks to be
fixed. Here's the formula I used to calculate concurrent connections/sec:

 worker_processes * worker_connections / keepalive_timeout == concurrent
connections per second

Concurrent connections on our server are around 15K. Based on this I used
the following values:

 48 * 16000 / 15 == 51200/sec

Can somebody point out to me if this calculation method is false?

Regards.
Shahzaib

On Sun, May 3, 2015 at 3:11 AM, shahzaib shahzaib 
wrote:

> Experts,
>
> Could you please do me a favor in order to solve this problem ?
>
> Regards.
> Shahzaib
>
> On Sat, May 2, 2015 at 3:32 PM, shahzaib shahzaib 
> wrote:
>
>> Hi,
>>
>> We've been running nginx-1.8 instance on one of our media server to
>> serve big static .mp4 files as well as small files such as .jpeg. Nginx is
>> serving well under 13K connections/sec with 800Mbps outgoing network load
>> but whenever requests exceed 15K connections, nginx gets halt and 'D'
>> status goes all over around the nginx workers, as well as network load
>> drops down to 400Mbps  due to which video streaming gets stuck and after
>> 5-10 minutes load starts dropping and nginx starts stabilizing again as
>> well as network load gets back to 800Mbps. We've been encountering this
>> fluctuating situation on each 15minutes gap (Probably).
>>
>>  We know that 'D' status is most likely due to high Disk I/O and to
>> ensure that the disk i/o could be the problem under 15K connections, we had
>> enabled apache on port 8080 for testing same video stream during high load
>> and buffered on apache, well the stream was fluctuating a bit but there was
>> no stuck for around 5-10 minutes. In the meantime the same video was worst
>> on nginx and stucked for 5minutes during buffer.
>>
>> We suspecting this to be related to something else than Disk I/O, reason
>> is the same video under high load buffers better on apache(on port 8080).
>> Also if it is related to high disk I/O, there must be no possibility that
>> video should should stuck for 5-10 minutes.
>>
>> It looks to us that nginx gets halt when concurrent connections exceed
>> 15K. We also tried optimizing backlog directive which slightly improved the
>> performance but there must be something more related to nginx optimization
>> which we must be missing. I have linked nginx.conf file, sysctl and vhost
>> file to get better understanding of our tweaks.
>>
>> user  nginx;
>> worker_processes 48;
>> worker_rlimit_nofile 60; #2 filehandlers for each connection
>> #error_log  logs/error.log;
>> #error_log  logs/error.log  notice;
>> error_log  /var/log/nginx/error.log  error;
>> #error_log /dev/null;
>> #pid        logs/nginx.pid;
>>
>>
>> events {
>> worker_connections  2048;
>> use epoll;
>> #   use kqueue;
>> }
>> http {
>>  include   mime.types;
>>  default_type  application/octet-stream;
>>  #client_max_body_size 800M;
>>  client_body_buffer_size 128K;
>>  output_buffers 1 512k;
>>  sendfile_max_chunk 128k;
>>  client_header_buffer_size 256k;
>>  large_client_header_buffers 4 256k;
>> #fastcgi_buffers 512 8k;
>> #proxy_buffers   512 8k;
>> #fastcgi_read_timeout 300s;
>>  server_tokens off; #Conceals nginx version
>>  access_log off;
>> #access_log /var/log/nginx/access.log;
>>  sendfile        off;
>> #sendfile ;
>>  tcp_nodelay on;
>>  aio on;
>>  directio 512;
>> #tcp_nopush on;
>>  client_header_timeout  120s;
>>  client_body_timeout 120s;
>>  send_timeout 120s;
>>  keepalive_timeout  15;
>>  gzip on;
>> gzip_vary on;
>> gzip_disable "MSIE [1-6]\.";
>> gzip_proxied any;
>> gzip_http_version 1.0;
>> gzip_min_length  1280;
>> gzip_comp_level  6;
>> gzip_buffers  16 8k;
>> gzip_types text/plain text/xml text/css application/x-javascript
>> image/png image/x-icon image/gif image/jpeg image/jpg application/xml
>> application/xml+rss text/javascript application/atom+xml;
>>  include /usr/local/nginx/conf/vhosts/*.conf;
>> # open_file_cache  max=2000 inactive=20s;
>> # open_file_cache_valid 60s;
>> # open_file_cache_min_uses 5;
>

Re: Nginx gets halt on 15K connections !!

2015-05-02 Thread shahzaib shahzaib
Experts,

Could you please help me out in solving this problem ?

Regards.
Shahzaib

On Sat, May 2, 2015 at 3:32 PM, shahzaib shahzaib 
wrote:

> Hi,
>
> We've been running nginx-1.8 instance on one of our media server to
> serve big static .mp4 files as well as small files such as .jpeg. Nginx is
> serving well under 13K connections/sec with 800Mbps outgoing network load
> but whenever requests exceed 15K connections, nginx gets halt and 'D'
> status goes all over around the nginx workers, as well as network load
> drops down to 400Mbps  due to which video streaming gets stuck and after
> 5-10 minutes load starts dropping and nginx starts stabilizing again as
> well as network load gets back to 800Mbps. We've been encountering this
> fluctuating situation on each 15minutes gap (Probably).
>
>  We know that 'D' status is most likely due to high Disk I/O and to ensure
> that the disk i/o could be the problem under 15K connections, we had
> enabled apache on port 8080 for testing same video stream during high load
> and buffered on apache, well the stream was fluctuating a bit but there was
> no stuck for around 5-10 minutes. In the meantime the same video was worst
> on nginx and stucked for 5minutes during buffer.
>
> We suspect this is related to something other than disk I/O, since the
> same video under high load buffers better on apache (on port 8080).
> Also, if it were related to high disk I/O, the video should not stay
> stuck for 5-10 minutes.
>
> It looks to us that nginx grinds to a halt when concurrent connections
> exceed 15K. We also tried tuning the backlog directive, which slightly
> improved performance, but there must be some further nginx optimization
> we are missing. I have linked the nginx.conf, sysctl and vhost files for
> a better understanding of our tweaks.
>
> user  nginx;
> worker_processes 48;
> worker_rlimit_nofile 60; #2 filehandlers for each connection
> #error_log  logs/error.log;
> #error_log  logs/error.log  notice;
> error_log  /var/log/nginx/error.log  error;
> #error_log /dev/null;
> #pid logs/nginx.pid;
>
>
> events {
> worker_connections  2048;
> use epoll;
> #   use kqueue;
> }
> http {
>  include   mime.types;
>  default_type  application/octet-stream;
>  #client_max_body_size 800M;
>  client_body_buffer_size 128K;
>  output_buffers 1 512k;
>  sendfile_max_chunk 128k;
>  client_header_buffer_size 256k;
>  large_client_header_buffers 4 256k;
> #fastcgi_buffers 512 8k;
> #proxy_buffers   512 8k;
> #fastcgi_read_timeout 300s;
>  server_tokens off; #Conceals nginx version
>  access_log off;
> #access_log /var/log/nginx/access.log;
>  sendfile off;
> #sendfile ;
>  tcp_nodelay on;
>  aio on;
>  directio 512;
> #tcp_nopush on;
>  client_header_timeout  120s;
>  client_body_timeout 120s;
>  send_timeout 120s;
>  keepalive_timeout  15;
>  gzip on;
> gzip_vary on;
> gzip_disable "MSIE [1-6]\.";
> gzip_proxied any;
> gzip_http_version 1.0;
> gzip_min_length  1280;
> gzip_comp_level  6;
> gzip_buffers  16 8k;
> gzip_types text/plain text/xml text/css application/x-javascript
> image/png image/x-icon image/gif image/jpeg image/jpg application/xml
> application/xml+rss text/javascript application/atom+xml;
>  include /usr/local/nginx/conf/vhosts/*.conf;
> # open_file_cache  max=2000 inactive=20s;
> # open_file_cache_valid 60s;
> # open_file_cache_min_uses 5;
> # open_file_cache_errors   off;
>
> }
>
> sysctl.conf main config :
>
> fs.file-max = 70
> net.core.wmem_max=6291456
> net.core.rmem_max=6291456
> net.ipv4.tcp_rmem= 10240 87380 6291456
> net.ipv4.tcp_wmem= 10240 87380 6291456
> net.ipv4.tcp_window_scaling = 1
> net.ipv4.tcp_timestamps = 1
> net.ipv4.tcp_sack = 1
> net.ipv4.tcp_no_metrics_save = 1
> net.core.netdev_max_backlog = 1
>
> net.ipv6.conf.all.disable_ipv6 = 1
> net.ipv6.conf.default.disable_ipv6 = 1
> net.ipv6.conf.lo.disable_ipv6 = 1
> net.ipv6.conf.eth0.disable_ipv6 = 1
> net.ipv6.conf.eth1.disable_ipv6 = 1
> net.ipv6.conf.ppp0.disable_ipv6 = 1
> net.ipv6.conf.tun0.disable_ipv6 = 1
> vm.dirty_background_ratio = 50
> vm.dirty_ratio = 80
> net.ipv4.tcp_fin_timeout = 30
> net.ipv4.ip_local_port_range=1024 65000
> net.ipv4.tcp_tw_reuse = 1
> net.netfilter.nf_conntrack_tcp_timeout_established = 54000
> net.ipv4.netfilter.ip_conntrack_generic_timeout = 120
> net.ipv4.tcp_syn_retries

Nginx gets halt on 15K connections !!

2015-05-02 Thread shahzaib shahzaib
Hi,

We've been running an nginx-1.8 instance on one of our media servers to
serve big static .mp4 files as well as small files such as .jpeg. Nginx
serves well at up to 13K connections/sec with 800Mbps of outgoing network
load, but whenever requests exceed 15K connections, nginx grinds to a halt:
'D' (uninterruptible sleep) status spreads across the nginx workers and
network load drops to 400Mbps, so video streaming gets stuck. After 5-10
minutes the load starts dropping, nginx stabilizes again, and network load
returns to 800Mbps. We've been encountering this fluctuation roughly every
15 minutes.

 We know that 'D' status is most likely due to high disk I/O. To check
whether disk I/O really is the problem at 15K connections, we enabled
apache on port 8080 and played the same video stream during high load; on
apache the stream fluctuated a bit, but it never stalled for 5-10 minutes.
Meanwhile, the same video was far worse on nginx and stalled for 5 minutes
while buffering.

We suspect this is related to something other than disk I/O, since the
same video under high load buffers better on apache (on port 8080). Also,
if it were related to high disk I/O, the video should not stay stuck for
5-10 minutes.

It looks to us that nginx grinds to a halt when concurrent connections
exceed 15K. We also tried tuning the backlog directive, which slightly
improved performance, but there must be some further nginx optimization we
are missing. I have linked the nginx.conf, sysctl and vhost files for a
better understanding of our tweaks.

user  nginx;
worker_processes 48;
worker_rlimit_nofile 60; #2 filehandlers for each connection
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
error_log  /var/log/nginx/error.log  error;
#error_log /dev/null;
#pid logs/nginx.pid;


events {
worker_connections  2048;
use epoll;
#   use kqueue;
}
http {
 include   mime.types;
 default_type  application/octet-stream;
 #client_max_body_size 800M;
 client_body_buffer_size 128K;
 output_buffers 1 512k;
 sendfile_max_chunk 128k;
 client_header_buffer_size 256k;
 large_client_header_buffers 4 256k;
#fastcgi_buffers 512 8k;
#proxy_buffers   512 8k;
#fastcgi_read_timeout 300s;
 server_tokens off; #Conceals nginx version
 access_log off;
#access_log /var/log/nginx/access.log;
 sendfile off;
#sendfile ;
 tcp_nodelay on;
 aio on;
 directio 512;
#tcp_nopush on;
 client_header_timeout  120s;
 client_body_timeout 120s;
 send_timeout 120s;
 keepalive_timeout  15;
 gzip on;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
gzip_proxied any;
gzip_http_version 1.0;
gzip_min_length  1280;
gzip_comp_level  6;
gzip_buffers  16 8k;
gzip_types text/plain text/xml text/css application/x-javascript
image/png image/x-icon image/gif image/jpeg image/jpg application/xml
application/xml+rss text/javascript application/atom+xml;
 include /usr/local/nginx/conf/vhosts/*.conf;
# open_file_cache  max=2000 inactive=20s;
# open_file_cache_valid 60s;
# open_file_cache_min_uses 5;
# open_file_cache_errors   off;

}
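[Editor's note: one thing worth noting about the config above is that `directio 512;` is a size threshold of 512 bytes, so effectively every file -- small thumbnails included -- bypasses the kernel page cache, and with `sendfile off;` each request becomes a real disk read. A sketch that scopes direct I/O to large videos only (paths are taken from the vhost in this thread; the 4m threshold is an assumption, not a tested value):]

```nginx
# Sketch only: keep direct I/O for big mp4s, let small files use the
# page cache. "aio on;" performs the reads asynchronously so workers
# are less likely to sit in 'D' state on slow disks.
location ~ \.(mp4)$ {
    mp4;
    root /content/archive;
    aio on;
    directio 4m;            # only reads past 4 MB bypass the page cache
    output_buffers 1 512k;
}

location /files/thumbs/ {
    root /data/nginx/archive;
    # no directio here: hot thumbnails stay cacheable in RAM
    expires max;
}
```

[Whether this helps depends on how much of the working set fits in RAM; it is a tuning direction, not a guaranteed fix for the 15K stall.]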

sysctl.conf main config :

fs.file-max = 70
net.core.wmem_max=6291456
net.core.rmem_max=6291456
net.ipv4.tcp_rmem= 10240 87380 6291456
net.ipv4.tcp_wmem= 10240 87380 6291456
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
net.core.netdev_max_backlog = 1

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
net.ipv6.conf.eth1.disable_ipv6 = 1
net.ipv6.conf.ppp0.disable_ipv6 = 1
net.ipv6.conf.tun0.disable_ipv6 = 1
vm.dirty_background_ratio = 50
vm.dirty_ratio = 80
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_local_port_range=1024 65000
net.ipv4.tcp_tw_reuse = 1
net.netfilter.nf_conntrack_tcp_timeout_established = 54000
net.ipv4.netfilter.ip_conntrack_generic_timeout = 120
net.ipv4.tcp_syn_retries=2
net.ipv4.tcp_synack_retries=2
net.ipv4.netfilter.ip_conntrack_max = 90536
net.core.somaxconn = 1

Vhost :

server {
listen  80 backlog=1;
server_name archive3.domain.com archive3.domain.com
www.archive3.domain.com www.archive3.domain.com;
access_log off;
location / {
root   /content/archive;
index index.html index.htm index.php;
   autoindex off;
}

location /files/thumbs/ {
root /data/nginx/archive;
add_header X-Cache SSD;
expires max;
}

location ~ \.(flv)$ {
flv;
root /content/archive;
#aio on;
#directio 512;
#output_buffers 1 2m;
expires 7d;
valid_referers none b

Re: open socket #84 left in connection

2015-04-21 Thread shahzaib shahzaib
Hi,

 The problem was with monit, which kept restarting nginx persistently.

Thanks !!

On Mon, Apr 20, 2015 at 7:23 PM, shahzaib shahzaib 
wrote:

> I have also enabled debug logging and found 'Resource temporarily
> unavailable' messages. Below is the reference sample :
>
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 event timer add: 18:
> 15000:1429537304304
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 post event 00A372D0
> 2015/04/20 18:41:29 [debug] 12917#0: posted event 00A372D0
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 delete posted event
> 00A372D0
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 http keepalive handler
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 malloc: 00ACF6C0:262144
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 recv: fd:18 -1 of 262144
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 recv() not ready (11: Resource
> temporarily unavailable)
> 2015/04/20 18:41:29 [debug] 12917#0: *2711 free: 00ACF6C0
> 2015/04/20 18:41:29 [debug] 12917#0: posted event 
> 2015/04/20 18:41:29 [debug] 12917#0: worker cycle
> 2015/04/20 18:41:29 [debug] 12917#0: accept mutex locked
> 2015/04/20 18:41:29 [debug] 12917#0: epoll timer: 15000
> 2015/04/20 18:41:29 [debug] 12913#0: epoll: fd:18 ev:0004
> d:7F58B7D89238
> 2015/04/20 18:41:29 [debug] 12913#0: *2698 http run request:
> "/files/videos/2015/04/15/14290705507373d-360.mp4?"
>
> Could that be the issue ?
>
> On Mon, Apr 20, 2015 at 6:18 PM, shahzaib shahzaib 
> wrote:
>
>> Hi,
>>
>>   We're using nginx to upload and serve video files of around 1GB via
>> HTTP. We've been receiving complaints from some customers that uploading
>> has issues and sometimes users are unable to upload videos successfully.
>> The server runs Nginx-1.4.7 + php-fpm, ffmpeg and MP4Box. On checking
>> the nginx logs, we didn't find anything but the following messages :
>>
>> 2015/04/16 16:49:53 [alert] 15077#0: open socket #81 left in connection 49
>> 2015/04/16 16:49:53 [alert] 15084#0: open socket #48 left in connection 19
>> 2015/04/16 16:49:53 [alert] 15077#0: open socket #84 left in connection 51
>> 2015/04/16 16:49:53 [alert] 15084#0: open socket #52 left in connection 21
>> 2015/04/16 16:49:53 [alert] 15077#0: open socket #87 left in connection 53
>> 2015/04/16 16:49:53 [alert] 15079#0: open socket #81 left in connection 46
>> 2015/04/16 16:49:53 [alert] 15084#0: open socket #53 left in connection 22
>>
>> Here is the nginx.conf
>>
>> user  nginx;
>> worker_processes  16;
>> worker_rlimit_nofile 30; #2 filehandlers for each connection
>> error_log /usr/local/nginx/logs/error.log crit;
>> #access_log logs/access.log;
>> #pid logs/nginx.pid;
>>
>>
>> events {
>> worker_connections  6000;
>> use epoll;
>> }
>> http {
>> include   mime.types;
>> default_type  application/octet-stream;
>> client_max_body_size 3000M;
>> client_body_buffer_size 2000M;
>> sendfile_max_chunk 128k;
>> client_header_buffer_size 256k;
>> large_client_header_buffers 4 256k;
>> output_buffers 1 512k;
>> server_tokens off; #Conceals nginx version
>>access_log  /usr/local/nginx/logs/access.log ;
>>access_log off;
>> sendfile off;
>>ignore_invalid_headers on;
>> client_header_timeout  60m;
>> client_body_timeout 60m;
>> send_timeout 60m;
>> reset_timedout_connection on;
>>
>> keepalive_timeout 15;
>> include "/usr/local/nginx/conf/vhosts/*.conf";
>> error_page 404 = /thumb.php;
>> error_page   403 /forbidden.html;
>> }
>>
>> If anyone can help me with this ?
>>
>> Regards.
>> Shahzaib
>>
>
>
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: open socket #84 left in connection

2015-04-20 Thread shahzaib shahzaib
I have also enabled debug logging and found 'Resource temporarily
unavailable' messages. Below is the reference sample :

2015/04/20 18:41:29 [debug] 12917#0: *2711 event timer add: 18:
15000:1429537304304
2015/04/20 18:41:29 [debug] 12917#0: *2711 post event 00A372D0
2015/04/20 18:41:29 [debug] 12917#0: posted event 00A372D0
2015/04/20 18:41:29 [debug] 12917#0: *2711 delete posted event
00A372D0
2015/04/20 18:41:29 [debug] 12917#0: *2711 http keepalive handler
2015/04/20 18:41:29 [debug] 12917#0: *2711 malloc: 00ACF6C0:262144
2015/04/20 18:41:29 [debug] 12917#0: *2711 recv: fd:18 -1 of 262144
2015/04/20 18:41:29 [debug] 12917#0: *2711 recv() not ready (11: Resource
temporarily unavailable)
2015/04/20 18:41:29 [debug] 12917#0: *2711 free: 00ACF6C0
2015/04/20 18:41:29 [debug] 12917#0: posted event 
2015/04/20 18:41:29 [debug] 12917#0: worker cycle
2015/04/20 18:41:29 [debug] 12917#0: accept mutex locked
2015/04/20 18:41:29 [debug] 12917#0: epoll timer: 15000
2015/04/20 18:41:29 [debug] 12913#0: epoll: fd:18 ev:0004 d:7F58B7D89238
2015/04/20 18:41:29 [debug] 12913#0: *2698 http run request:
"/files/videos/2015/04/15/14290705507373d-360.mp4?"

Could that be the issue ?

On Mon, Apr 20, 2015 at 6:18 PM, shahzaib shahzaib 
wrote:

> Hi,
>
>   We're using nginx to upload and serve video files of around 1GB via
> HTTP. We've been receiving complaints from some customers that uploading
> has issues and sometimes users are unable to upload videos successfully.
> The server runs Nginx-1.4.7 + php-fpm, ffmpeg and MP4Box. On checking
> the nginx logs, we didn't find anything but the following messages :
>
> 2015/04/16 16:49:53 [alert] 15077#0: open socket #81 left in connection 49
> 2015/04/16 16:49:53 [alert] 15084#0: open socket #48 left in connection 19
> 2015/04/16 16:49:53 [alert] 15077#0: open socket #84 left in connection 51
> 2015/04/16 16:49:53 [alert] 15084#0: open socket #52 left in connection 21
> 2015/04/16 16:49:53 [alert] 15077#0: open socket #87 left in connection 53
> 2015/04/16 16:49:53 [alert] 15079#0: open socket #81 left in connection 46
> 2015/04/16 16:49:53 [alert] 15084#0: open socket #53 left in connection 22
>
> Here is the nginx.conf
>
> user  nginx;
> worker_processes  16;
> worker_rlimit_nofile 30; #2 filehandlers for each connection
> error_log /usr/local/nginx/logs/error.log crit;
> #access_log logs/access.log;
> #pid logs/nginx.pid;
>
>
> events {
> worker_connections  6000;
> use epoll;
> }
> http {
> include   mime.types;
> default_type  application/octet-stream;
> client_max_body_size 3000M;
> client_body_buffer_size 2000M;
> sendfile_max_chunk 128k;
> client_header_buffer_size 256k;
> large_client_header_buffers 4 256k;
> output_buffers 1 512k;
> server_tokens off; #Conceals nginx version
>access_log  /usr/local/nginx/logs/access.log ;
>access_log off;
> sendfile off;
>ignore_invalid_headers on;
> client_header_timeout  60m;
> client_body_timeout 60m;
> send_timeout 60m;
> reset_timedout_connection on;
>
> keepalive_timeout 15;
> include "/usr/local/nginx/conf/vhosts/*.conf";
> error_page 404 = /thumb.php;
> error_page   403 /forbidden.html;
> }
>
> If anyone can help me with this ?
>
> Regards.
> Shahzaib
>

open socket #84 left in connection

2015-04-20 Thread shahzaib shahzaib
Hi,

  We're using nginx to upload and serve video files of around 1GB via
HTTP. We've been receiving complaints from some customers that uploading
has issues and sometimes users are unable to upload videos successfully.
The server runs Nginx-1.4.7 + php-fpm, ffmpeg and MP4Box. On checking the
nginx logs, we didn't find anything but the following messages :

2015/04/16 16:49:53 [alert] 15077#0: open socket #81 left in connection 49
2015/04/16 16:49:53 [alert] 15084#0: open socket #48 left in connection 19
2015/04/16 16:49:53 [alert] 15077#0: open socket #84 left in connection 51
2015/04/16 16:49:53 [alert] 15084#0: open socket #52 left in connection 21
2015/04/16 16:49:53 [alert] 15077#0: open socket #87 left in connection 53
2015/04/16 16:49:53 [alert] 15079#0: open socket #81 left in connection 46
2015/04/16 16:49:53 [alert] 15084#0: open socket #53 left in connection 22

Here is the nginx.conf

user  nginx;
worker_processes  16;
worker_rlimit_nofile 30; #2 filehandlers for each connection
error_log /usr/local/nginx/logs/error.log crit;
#access_log logs/access.log;
#pid logs/nginx.pid;


events {
worker_connections  6000;
use epoll;
}
http {
include   mime.types;
default_type  application/octet-stream;
client_max_body_size 3000M;
client_body_buffer_size 2000M;
sendfile_max_chunk 128k;
client_header_buffer_size 256k;
large_client_header_buffers 4 256k;
output_buffers 1 512k;
server_tokens off; #Conceals nginx version
   access_log  /usr/local/nginx/logs/access.log ;
   access_log off;
sendfile off;
   ignore_invalid_headers on;
client_header_timeout  60m;
client_body_timeout 60m;
send_timeout 60m;
reset_timedout_connection on;

keepalive_timeout 15;
include "/usr/local/nginx/conf/vhosts/*.conf";
error_page 404 = /thumb.php;
error_page   403 /forbidden.html;
}

If anyone can help me with this ?

Regards.
Shahzaib

Re: Internal Server Error !!

2015-03-27 Thread shahzaib shahzaib
Hi,

Just want to inform that we're still facing this issue with mp4.

Regards.
Shahzaib

On Mon, Mar 23, 2015 at 7:10 PM, shahzaib shahzaib 
wrote:

> Hi,
>
> Nginx is intermittently logging an mp4-related error. Following is the log :
>
> 2015/03/23 19:01:53 [crit] 8671#0: *782950 pread()
> "/tunefiles/storage17/files/videos/2014/05/07/13994800482e2b0-360.mp4"
> failed (22: Invalid argument), client: 182.178.204.162, server:
> storage17.domain.com, request: "GET
> /files/videos/2014/05/07/13994800482e2b0-360.mp4?start=31.832 HTTP/1.1",
> host: "storage17.domain.com", referrer: "
> http://static.tune.pk/tune_player/tune.swf?v2";
>
> We've changed the nginx-1.6.2 banner as follows :
>
> nginx version: tune-webserver/1.0.4
> built by gcc 4.7.2 (Debian 4.7.2-5)
> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log
> --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
> --lock-path=/var/run/nginx.lock --user=nginx --group=nginx
> --with-http_flv_module --with-http_mp4_module --with-file-aio --with-ipv6
> --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
> -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'
> --with-ld-opt='-L /usr/lib/x86_64-linux-gnu'
>
> Could anyone please assist me regarding this issue?
>
> Regards.
> Shahzaib
>
>
>

Internal Server Error !!

2015-03-23 Thread shahzaib shahzaib
Hi,

   Nginx is intermittently logging an mp4-related error. Following is the log :

2015/03/23 19:01:53 [crit] 8671#0: *782950 pread()
"/tunefiles/storage17/files/videos/2014/05/07/13994800482e2b0-360.mp4"
failed (22: Invalid argument), client: 182.178.204.162, server:
storage17.domain.com, request: "GET
/files/videos/2014/05/07/13994800482e2b0-360.mp4?start=31.832 HTTP/1.1",
host: "storage17.domain.com", referrer: "
http://static.tune.pk/tune_player/tune.swf?v2";

We've changed the nginx-1.6.2 banner as follows :

nginx version: tune-webserver/1.0.4
built by gcc 4.7.2 (Debian 4.7.2-5)
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock --user=nginx --group=nginx
--with-http_flv_module --with-http_mp4_module --with-file-aio --with-ipv6
--with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'
--with-ld-opt='-L /usr/lib/x86_64-linux-gnu'

Could anyone please assist me regarding this issue?

Regards.
Shahzaib

Re: Fake video sharing Android App !!

2015-03-17 Thread shahzaib shahzaib
@itpp, you're right, but even if we can only partially solve this problem
it'll be sufficient for us. Using the method below worked in our case :

location ~ \.(mp4)$ {
mp4;
root /var/www/html/conversion;
expires 1d;
valid_referers servers  domain.net content.domain.com ;
  if ($invalid_referer) {
return   403;
}
}

This config permits only domain.net and domain.com as referers while
rejecting any other Referer header, including an empty one.
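[Editor's note: for reference, a sketch of the same policy without the location-level `if`, using an http-level `map` (the hostnames are the placeholder domains from this thread; adjust to the real ones):]

```nginx
http {
    # 1 = reject. Empty and foreign Referer values both hit the default.
    map $http_referer $deny_video {
        default                            1;
        "~^https?://(www\.)?domain\.net/"  0;
        "~^https?://content\.domain\.com/" 0;
    }

    server {
        listen 80;

        location ~ \.mp4$ {
            mp4;
            root /var/www/html/conversion;
            expires 1d;
            if ($deny_video) {   # safe use of "if": only returns a status
                return 403;
            }
        }
    }
}
```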

On Tue, Mar 17, 2015 at 3:38 PM, itpp2012  wrote:

> Which can all be faked (eventually), build some kind of
> validation/authentication system before launching your app.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,257269,257314#msg-257314
>

Re: Fake video sharing Android App !!

2015-03-17 Thread shahzaib shahzaib
@itpp, as I showed in the logs above, the Referer header for Android
requests is empty, so maybe blocking requests with an empty Referer header
will partially resolve our issue ? The following is the config I used to
block an empty Referer header, but in vain.

valid_referers server_names ~.;
if ($invalid_referer) {
return   403;
}

Android request log :


39.49.52.224 - - [15/Mar/2015:10:40:26 +0500] "GET
/files/thumbs/2015/03/14/1426310448973c5-1.jpg HTTP/1.1" 200 13096 "-"
"Dalvik/1.6.0 (Linux; U; Android 4.2.2; GT-S7582 Build/JDQ39)"


I might be putting this config under wrong location, following is the
content of android.conf and virtual.conf :

virtual.conf :

server {
listen  80;
server_name  conversion.domain.com;
client_max_body_size 8000m;
#limit_rate 180k;
#access_log  /websites/theos.in/logs/access.log  main;

location / {
root   /var/www/html/conversion;
index index.html index.htm index.php;
 #  autoindex on;
include android.conf;
}
location ~ \.(flv|jpg|jpeg)$ {
flv;
root /var/www/html/conversion;
expires 2d;
   include android.conf;
  valid_referers none blocked domain.net
www.domain.net  domain.com www.domain.com;
 if ($invalid_referer) {
return   403;
}
}
location ~ \.(mp4)$ {
mp4;
root /var/www/html/conversion;
expires 1d;
include android.conf;
 valid_referers none blocked domain.net www.domain.net
 domain.com www.domain.com;
if ($invalid_referer) {
return   403;
}
}

 # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
root /var/www/html/conversion;
fastcgi_pass   127.0.0.1:9000;
   fastcgi_index  index.php;
fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
includefastcgi_params;
}

location ~ /\.ht {
deny  all;
}
}


android.conf :

#if ($http_user_agent ~* "Android") {
#   return 403;
#}

valid_referers server_names ~.;
if ($invalid_referer) {
return   403;
}

Regards.

Shahzaib


On Tue, Mar 17, 2015 at 2:10 PM, itpp2012  wrote:

> I'd use some kind of authentication based on a user logging in before
> allowing use of a service, an encrypted cookie or something along that
> line.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,257269,257303#msg-257303
>

Re: Fake video sharing Android App !!

2015-03-17 Thread shahzaib shahzaib
@itpp thanks for the suggestion, but the problem is that this is not a
valid way of blocking the Android app's requests: our official Android app
will be released soon, and filtering on this user-agent would block valid
users as well. So we need something different, such as adding a custom
header in the official Android app and filtering requests based on that
(maybe).

@Francis, thanks for the explanation and suggestion. As you suggested, I
should enable extra logging; the following log format is currently enabled
in nginx. Does nginx support an extended log format ? I want to log every
parameter to distinguish between valid and invalid requests. The current
log format is :

log_format  main  '$remote_addr - $remote_user [$time_local] "$request"
'
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';


Thanks.
Shahzaib

On Mon, Mar 16, 2015 at 7:50 PM, Francis Daly  wrote:

> On Mon, Mar 16, 2015 at 06:45:30PM +0500, shahzaib shahzaib wrote:
>
> Hi there,
>
> > I have installed that Android app, and the logged request from my IP
> > is the following :
> >
> > 39.49.52.224 - - [15/Mar/2015:10:40:26 +0500] "GET
> > /files/thumbs/2015/03/14/1426310448973c5-1.jpg HTTP/1.1" 200 13096 "-"
> > "Dalvik/1.6.0 (Linux; U; Android 4.2.2; GT-S7582 Build/JDQ39)"
> >
> > where 39.49.52.224 is ip of my modem.
>
> So - you have the log line for one request that you would like to block.
>
> Do you have the log line for the matching request that you would like
> to allow?
>
> And that log line shows just two request headers plus an ip address. If
> that is enough to accurately distinguish between "yes" and "no" requests,
> you're good. If not, examine the entire request (either by extra logging
> in nginx, or by watching the network traffic involved in each).
>
> > I have also tried blocking a specific user agent such as Android, but
> > it neither worked (surely I am doing something wrong) nor is it the
> > correct solution :
> >
> > if ($http_user_agent ~* "Linux;Android 4.2.2") {
>
> Does that 19-character string appear in the user agent header? If not,
> the "if" will not match.
>
> (I don't see it in there.)
>
> If the most important thing is that "they" don't "steal" your bandwidth,
> you can just turn off your web server. Bandwidth saved.
>
> But presumably it is also important that some requests are handled as
> they currently are.
>
> Only you can say what distinguishes a "no" request from a "yes"
> request.
>
> And only you can say which "yes" requests you are happy to
> mis-characterise as "no" requests and reject.
>
> After you determine those, then you can decide how to configure nginx
> to implement the same test.
>
> (For example: check your logs from before this app started. Do all valid
> requests include Referer? Are you happy to block any actually-valid
> requests that omit Referer, in order to block all requests from this
> app? How long do you think it will take the app author to change their
> app to include a Referer, if you do that?)
>
> f
> --
Francis Daly  fran...@daoine.org
>

Re: Fake video sharing Android App !!

2015-03-16 Thread shahzaib shahzaib
Hi,

I have installed that Android app, and the logged request from my IP is
the following :

39.49.52.224 - - [15/Mar/2015:10:40:26 +0500] "GET
/files/thumbs/2015/03/14/1426310448973c5-1.jpg HTTP/1.1" 200 13096 "-"
"Dalvik/1.6.0 (Linux; U; Android 4.2.2; GT-S7582 Build/JDQ39)"

where 39.49.52.224 is ip of my modem.

I have also tried blocking a specific user agent such as Android, but it
neither worked (surely I am doing something wrong) nor is it the correct
solution :

if ($http_user_agent ~* "Linux;Android 4.2.2") {
   return 403;
}
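[Editor's note: the logged user-agent is `Dalvik/1.6.0 (Linux; U; Android 4.2.2; ...)`, which contains `; U; ` between `Linux` and `Android`, so the 19-character string above never occurs in it and the `if` can never match. A sketch matching a token that does occur, with the caveat that any legitimate app using the same Dalvik HTTP stack would be blocked too:]

```nginx
# Matches the "Dalvik/" HTTP-library token seen in the logged request.
if ($http_user_agent ~* "Dalvik/") {
    return 403;
}
```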


Thanks.
Shahzaib

On Mon, Mar 16, 2015 at 6:28 PM, Francis Daly  wrote:

> On Mon, Mar 16, 2015 at 04:09:30PM +0500, shahzaib shahzaib wrote:
>
> Hi there,
>
> > The webserver is nginx and hotlink protection is already enabled, but
> > the issue is that there is no Referer header in the requests generated
> > by this android application.
> >
> > What precautions should we take to prevent this application from using
> > our server's bandwidth ?
>
> You have "the requests that you wish to allow as normal". You have "the
> requests that you wish not to allow, since they come from this client".
>
> What part of the request that nginx sees puts it into the "yes" or
> "no" bucket?
>
> Put that in your configuration, so that "yes" does what happens now,
> and "no" returns a http error, or returns a different video inviting
> the client to get your official app.
>
> Perhaps their app uses a unique User-Agent header; or all "wanted"
> clients do include a Referer header?
>
> If you can't tell a "good" request from a "bad" one, you probably cannot
> configure nginx to.
>
> f
> --
Francis Daly  fran...@daoine.org
>

Fake video sharing Android App !!

2015-03-16 Thread shahzaib shahzaib
Guys, someone cloned our video-sharing website and created a FAKE Android
application using the same name as our website, and people consider it to
be our app, which it is not. The main problem we're facing is that the
videos served by this Android application are hotlinked to our server, so
we're the ones bearing its bandwidth cost.

The webserver is nginx and hotlink protection is already enabled, but the
issue is that there is no Referer header in the requests generated by this
Android application.

What precautions should we take to prevent this application from using our
server's bandwidth ?

Regards.
Shahzaib

Limit incoming bandwith with nginx !!

2015-01-30 Thread shahzaib shahzaib
Hi,

   Is there a way we can limit incoming bandwidth (from a remote host to
the Linux box) using nginx ? Nginx forwards user requests to a different
URL and downloads the videos locally, due to which the server's incoming
port is choking at 1Gbps for a large number of concurrent users. If we can
lower incoming bandwidth to 500Mbps it'll surely help us.
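[Editor's note: nginx has no aggregate inbound cap, but since 1.7.7 it can limit how fast it reads each proxied response, which indirectly throttles per-request download speed. A sketch, assuming a proxying setup; the location and upstream name are placeholders:]

```nginx
# Per-request inbound throttle: nginx reads the response from the
# origin at most this fast. Requires proxy buffering; this is a
# per-connection limit, not a global 500Mbps cap.
location /videos/ {
    proxy_pass http://origin.example.com/;
    proxy_buffering on;
    proxy_limit_rate 512k;   # bytes per second, per request (nginx >= 1.7.7)
}
```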

Regards.
shahzaib

Re: Geoip issue with nginx in front of varnish and apache !

2015-01-20 Thread shahzaib shahzaib
Thanks for the reply Francis, adding the following did the trick :)

set req.http.X-Forwarded-For = req.http.X-Forwarded-For;
set req.http.X-GeoIP = geoip.country_code(req.http.X-Forwarded-For);
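[Editor's note: for completeness, a sketch of the nginx front-proxy side that feeds those headers to varnish on port 6081 (ports from this thread; header names must match what the VCL reads):]

```nginx
# nginx (port 80) -> varnish (port 6081): forward the real client
# address so varnish's geoip lookup sees it instead of 127.0.0.1.
location / {
    proxy_pass http://127.0.0.1:6081;
    proxy_set_header Host             $host;
    proxy_set_header X-Real-IP        $remote_addr;
    proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
}
```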


Regards.
Shahzaib

On Wed, Jan 21, 2015 at 12:06 AM, Francis Daly  wrote:

> On Tue, Jan 20, 2015 at 11:38:21PM +0500, shahzaib shahzaib wrote:
>
> Hi there,
>
> > We've compiled varnish with a geoip module in order to cache
> > country-based hashes. So far the varnish <-> apache structure is
> > working fine with the geoip module, caching requests based on
> > countries, but when we add another nginx proxy layer in front of
> > varnish, i.e. nginx -> varnish -> apache, the geoip module stops
> > tracking country hashes and varnish shows the following logs :
>
> It sounds like you need to do whatever it takes to convince varnish's
> geoip module to use the IP address in the X-Real-IP header, and not the
> actual client address.
>
> Check the varnish geoip module documentation.
>
> f
> --
> Francis Dalyfran...@daoine.org
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

Geoip issue with nginx in front of varnish and apache !

2015-01-20 Thread shahzaib shahzaib
Hi,

We've compiled varnish with the geoip module in order to cache country-based
hashes. So far the varnish <-> apache structure is working fine with the
geoip module, caching requests based on countries, but when we add another
nginx proxy layer in front of varnish, i.e. nginx -> varnish -> apache, the
geoip module stops tracking country hashes and varnish shows the following logs:

TxHeader b X-GeoIP: Unknown

nginx : port 80
Varnish : port 6081
Apache : port 7172

So far nginx is forwarding client IPs to varnish, but the session-start value
in varnishlog still shows the IP 127.0.0.1, due to which varnish is unable to
determine the client's country. Could someone point me in the right
direction?
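A possible VCL sketch, assuming the geoip vmod accepts a header value the way the fix posted elsewhere in this thread does: perform the country lookup on the client IP that nginx forwards, rather than on client.ip, which is always 127.0.0.1 behind a local proxy.

```vcl
sub vcl_recv {
    # Use the real client IP forwarded by nginx for the country lookup;
    # client.ip only ever sees the local nginx proxy (127.0.0.1).
    if (req.http.X-Real-IP) {
        set req.http.X-GeoIP = geoip.country_code(req.http.X-Real-IP);
    }
}
```

On the nginx side this assumes something like `proxy_set_header X-Real-IP $remote_addr;` in the location that proxies to varnish.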

varnishlog :

   15 BackendOpen  b default 127.0.0.1 45806 127.0.0.1 7172
   15 BackendXID   b 1609403517
   15 TxRequestb GET
   15 TxURLb
/video/5708047/jeena-jeena-video-song-badlapur-atif-aslam
   15 TxProtocol   b HTTP/1.1
   15 TxHeader b Referer: http://beta2.domain.com/videos/
   15 TxHeader b X-Real-IP: 39.49.89.134
   15 TxHeader b X-Forwarded-Host: beta2.domain.com
   15 TxHeader b X-Forwarded-Server: beta2.domain.com
   15 TxHeader b Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
   15 TxHeader b User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36
   15 TxHeader b Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
   15 TxHeader b X-Forwarded-For: 39.49.89.134, 127.0.0.1
   15 TxHeader b host: default
   15 TxHeader b X-GeoIP: Unknown
   15 TxHeader b X-Varnish: 1609403517
   15 TxHeader b Accept-Encoding: gzip
   15 RxProtocol   b HTTP/1.1
   15 RxStatus b 200
   15 RxResponse   b OK
   15 RxHeader b Date: Tue, 20 Jan 2015 18:26:06 GMT
   15 RxHeader b Server: Apache
   15 RxHeader b Set-Cookie: PHPSESSID=pcl9rkh58s39fgjti139bgn6n1;
expires=Wed, 21-Jan-2015 18:26:06 GMT; path=/
   15 RxHeader b Expires: Thu, 19 Nov 1981 08:52:00 GMT
   15 RxHeader b Cache-Control: no-store, no-cache, must-revalidate,
post-check=0, pre-check=0
   15 RxHeader b Pragma: no-cache
   15 RxHeader b Set-Cookie:
fb_239452059417627_state=42cba63d4821f3964426e14b2833e8d0; expires=Tue,
20-Jan-2015 19:26:06 GMT; path=/
   15 RxHeader b Set-Cookie:
pageredir=http%3A%2F%2Fbeta2.domain.com%2Fvideo%2F5708047%2Fjeena-jeena-video-song-badlapur-atif-aslam;
expires=Tue, 20-Jan-2015 20:26:06 GMT; path=/
   15 RxHeader b Connection: close
   15 RxHeader b Transfer-Encoding: chunked
   15 RxHeader b Content-Type: text/html; charset=utf-8
   15 Fetch_Body   b 3(chunked) cls 0 mklen 1
   15 Length   b 127024
   15 BackendClose b default
   12 SessionOpen  c 127.0.0.1 51675 :6081
   12 ReqStart c 127.0.0.1 51675 1609403517
   12 RxRequestc GET
   12 RxURLc
/video/5708047/jeena-jeena-video-song-badlapur-atif-aslam
   12 RxProtocol   c HTTP/1.0
   12 RxHeader c Referer: http://beta2.domain.com/videos/
   12 RxHeader c Host: beta2.domain.com
   12 RxHeader c Cookie: __qca=P0-993092579-1421436407272;
__qca=P0-1309575897-1421485050924;
__utma=198843324.254214983.1421436407.1421439435.1421777481.2;
__utmb=198843324.5.10.1421777481; __utmc=198843324;
__utmz=198843324.1421439435.1.1.utmcsr=(direct)|utmccn=(direct)
   12 RxHeader c X-Real-IP: 39.49.89.134
   12 RxHeader c X-Forwarded-Host: beta2.domain.com
   12 RxHeader c X-Forwarded-Server: beta2.domain.com
   12 RxHeader c X-Forwarded-For: 39.49.89.134
   12 RxHeader c Connection: close
   12 RxHeader c Cache-Control: max-age=0
   12 RxHeader c Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
   12 RxHeader c User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36
   12 RxHeader c Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
   12 VCL_call c recv lookup
   12 VCL_call c hash
   12 Hash c
/video/5708047/jeena-jeena-video-song-badlapur-atif-aslam
   12 Hash c default
   12 Hash c Unknown
   12 VCL_return   c hash
   12 VCL_call c miss fetch
   12 Backend  c 15 default default
   12 TTL  c 1609403517 RFC 0 -1 -1 1421778367 0 1421778366
375007920 0
   12 VCL_call c fetch
   12 TTL  c 1609403517 VCL 3600 -1 -1 1421778367 -0
   12 VCL_return   c deliver
   12 ObjProtocol  c HTTP/1.1
   12 ObjResponse  c OK
   12 ObjHeaderc Date: Tue, 20 Jan 2015 18:26:06 GMT
   12 ObjHeaderc Server: Apache
   12 ObjHeaderc Set-Cookie: PHPSESSID=pcl9rkh58s39fgjti139bgn6n1;
expires=Wed, 21-Jan-2015 18:26:06 GMT; path=/
   12 ObjHeaderc Expires: Thu, 19 Nov 1981 08:52:00 GMT
   12 ObjHeaderc Pragma: no-cache
   12 ObjHeaderc Set-Cookie:
fb_239452059417627_state=42cba63d4821f3964426e14b2833e8d0; expires=Tue,
20-Jan-2015 19:26:06 GMT; path=/
   12 ObjHeaderc Set-Cookie:
pageredi

Re: Forward single request to upstream server via proxy_store !!

2014-09-29 Thread shahzaib shahzaib
Also, removing the arguments after "?" disabled pseudo-streaming, so I think
I can't apply this method!

On Mon, Sep 29, 2014 at 6:05 PM, shahzaib shahzaib 
wrote:

> @RR, i would like to inform you that the issue regarding failed stream for
> 1st request is solved. Varnish was removing content-length header for 1st
> request . Enabling Esi processing has resolved this issue.
>
> set beresp.do_esi = true;
>
>
> http://stackoverflow.com/questions/23643233/how-do-i-disable-transfer-encoding-chunked-encoding-in-varnish
>
> thanks !!
>
> On Sat, Sep 27, 2014 at 10:41 AM, shahzaib shahzaib  > wrote:
>
>> >>In general it shouldn’t since the ‘?start=’ is handled by nginx and not
>> varnish, but I’m not exactly sure how the mp4 module of nginx handles a
>> proxied request.
>> You have to test it.
>>
>> Sure, i'll test it.
>>
>> sub vcl_fetch {
>> return (pass);
>> }
>>
>> You're right about return(pass), coalescing doesn't work with pass.
>>
>> >>In worst case scenario imho only the first request (before landing on
>> the proxy_store server) will “fail” eg play from the beginning instead of
>> the time set.
>> Well, i am facing more worse scenario that first request always fail to
>> stream and player(HTML5) keeps on loading.
>>
>> I'm already checking if there's some config issue with varnish or this is
>> the default behaviour(Which i don't think it is).
>>
>> Thanks @RR
>>
>> Shahzaib
>>
>>
>> On Fri, Sep 26, 2014 at 2:36 AM, Reinis Rozitis  wrote:
>>
>>> It will also prevent users seeking the video because the arguments after
>>>> "?" will remove whenever user will try to seek the video stream, isn't it ?
>>>>
>>>
>>> In general it shouldn’t since the ‘?start=’ is handled by nginx and not
>>> varnish, but I’m not exactly sure how the mp4 module of nginx handles a
>>> proxied request.
>>> You have to test it.
>>>
>>> In worst case scenario imho only the first request (before landing on
>>> the proxy_store server) will “fail” eg play from the beginning instead of
>>> the time set.
>>>
>>>
>>>
>>>  Well, only proxy_store is able to fulfill my requirements that is the
>>>> reason i'll have to stick with it.
>>>>
>>>
>>> Well you can try to use varnish as the streamer, just need some
>>> (web)player supporting byte-range requests for the seeking (
>>> http://flash.flowplayer.org/plugins/streaming/pseudostreaming.html ).
>>>
>>>
>>>  I am bit confused about the varnish. Actually, i don't need any kind of
>>>> caching within the varnish as nginx already doing it via proxy_store. I
>>>> just need varnish to merge the subsequent requests into 1 and forward it to
>>>> nginx and i think varnish is doing it pretty well. Nevertheless, i am
>>>> confused if malloc caching will have any odd effect on the stream behavior 
>>>> ?
>>>>
>>>
>>>
>>> You can try to pass the request without caching:
>>>
>>> sub vcl_fetch {
>>> return (pass);
>>> }
>>>
>>> (maybe even do it in the vcl_recv stage but again I'm not exactly sure
>>> if in that case the request coalescing works).
>>>
>>>
>>>
>>> rr
>>> ___
>>> nginx mailing list
>>> nginx@nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>
>>
>>
>

Re: Forward single request to upstream server via proxy_store !!

2014-09-29 Thread shahzaib shahzaib
@RR, I would like to inform you that the issue of the failed stream for the
1st request is solved. Varnish was removing the Content-Length header for the
1st request; enabling ESI processing has resolved this issue.

set beresp.do_esi = true;

http://stackoverflow.com/questions/23643233/how-do-i-disable-transfer-encoding-chunked-encoding-in-varnish

thanks !!

On Sat, Sep 27, 2014 at 10:41 AM, shahzaib shahzaib 
wrote:

> >>In general it shouldn’t since the ‘?start=’ is handled by nginx and not
> varnish, but I’m not exactly sure how the mp4 module of nginx handles a
> proxied request.
> You have to test it.
>
> Sure, i'll test it.
>
> sub vcl_fetch {
> return (pass);
> }
>
> You're right about return(pass), coalescing doesn't work with pass.
>
> >>In worst case scenario imho only the first request (before landing on
> the proxy_store server) will “fail” eg play from the beginning instead of
> the time set.
> Well, i am facing more worse scenario that first request always fail to
> stream and player(HTML5) keeps on loading.
>
> I'm already checking if there's some config issue with varnish or this is
> the default behaviour(Which i don't think it is).
>
> Thanks @RR
>
> Shahzaib
>
>
> On Fri, Sep 26, 2014 at 2:36 AM, Reinis Rozitis  wrote:
>
>> It will also prevent users seeking the video because the arguments after
>>> "?" will remove whenever user will try to seek the video stream, isn't it ?
>>>
>>
>> In general it shouldn’t since the ‘?start=’ is handled by nginx and not
>> varnish, but I’m not exactly sure how the mp4 module of nginx handles a
>> proxied request.
>> You have to test it.
>>
>> In worst case scenario imho only the first request (before landing on the
>> proxy_store server) will “fail” eg play from the beginning instead of the
>> time set.
>>
>>
>>
>>  Well, only proxy_store is able to fulfill my requirements that is the
>>> reason i'll have to stick with it.
>>>
>>
>> Well you can try to use varnish as the streamer, just need some
>> (web)player supporting byte-range requests for the seeking (
>> http://flash.flowplayer.org/plugins/streaming/pseudostreaming.html ).
>>
>>
>>  I am bit confused about the varnish. Actually, i don't need any kind of
>>> caching within the varnish as nginx already doing it via proxy_store. I
>>> just need varnish to merge the subsequent requests into 1 and forward it to
>>> nginx and i think varnish is doing it pretty well. Nevertheless, i am
>>> confused if malloc caching will have any odd effect on the stream behavior ?
>>>
>>
>>
>> You can try to pass the request without caching:
>>
>> sub vcl_fetch {
>> return (pass);
>> }
>>
>> (maybe even do it in the vcl_recv stage but again I'm not exactly sure if
>> in that case the request coalescing works).
>>
>>
>>
>> rr
>> ___
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
>

Re: Forward single request to upstream server via proxy_store !!

2014-09-26 Thread shahzaib shahzaib
>>In general it shouldn’t since the ‘?start=’ is handled by nginx and not
varnish, but I’m not exactly sure how the mp4 module of nginx handles a
proxied request.
You have to test it.

Sure, I'll test it.

sub vcl_fetch {
return (pass);
}

You're right about return(pass); coalescing doesn't work with pass.

>>In worst case scenario imho only the first request (before landing on the
proxy_store server) will “fail” eg play from the beginning instead of the
time set.
Well, I am facing an even worse scenario: the first request always fails to
stream and the (HTML5) player keeps loading.

I'm already checking whether there's some config issue with varnish or
whether this is the default behaviour (which I don't think it is).

Thanks @RR

Shahzaib


On Fri, Sep 26, 2014 at 2:36 AM, Reinis Rozitis  wrote:

> It will also prevent users seeking the video because the arguments after
>> "?" will remove whenever user will try to seek the video stream, isn't it ?
>>
>
> In general it shouldn’t since the ‘?start=’ is handled by nginx and not
> varnish, but I’m not exactly sure how the mp4 module of nginx handles a
> proxied request.
> You have to test it.
>
> In worst case scenario imho only the first request (before landing on the
> proxy_store server) will “fail” eg play from the beginning instead of the
> time set.
>
>
>
>  Well, only proxy_store is able to fulfill my requirements that is the
>> reason i'll have to stick with it.
>>
>
> Well you can try to use varnish as the streamer, just need some
> (web)player supporting byte-range requests for the seeking (
> http://flash.flowplayer.org/plugins/streaming/pseudostreaming.html ).
>
>
>  I am bit confused about the varnish. Actually, i don't need any kind of
>> caching within the varnish as nginx already doing it via proxy_store. I
>> just need varnish to merge the subsequent requests into 1 and forward it to
>> nginx and i think varnish is doing it pretty well. Nevertheless, i am
>> confused if malloc caching will have any odd effect on the stream behavior ?
>>
>
>
> You can try to pass the request without caching:
>
> sub vcl_fetch {
> return (pass);
> }
>
> (maybe even do it in the vcl_recv stage but again I'm not exactly sure if
> in that case the request coalescing works).
>
>
>
> rr
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

Re: Forward single request to upstream server via proxy_store !!

2014-09-25 Thread shahzaib shahzaib
@RR, thanks a lot for the explanation and examples. It really helped me :)

>>set req.url = regsub(req.url, "\?.*", "");

Won't it also prevent users from seeking the video, since the arguments after
"?" will be removed whenever a user tries to seek the video stream?

>>unset req.http.Cookie;
unset req.http.Accept-Encoding;
unset req.http.Cache-Control;

I'll apply it right at the top of vcl_recv.

>>If you insist on using proxy_store I would probably also add
proxy_ignore_client_abort on;

Well, only proxy_store is able to fulfill my requirements, which is why I'll
have to stick with it.

I am a bit confused about varnish. Actually, I don't need any caching within
varnish, as nginx is already doing it via proxy_store. I just need varnish to
merge the subsequent requests into one and forward it to nginx, and I think
varnish is doing that pretty well. Nevertheless, I am unsure whether malloc
caching will have any odd effect on the stream behavior. Following is the
curl request for a video file on the caching server, and the Age header is
there:

 curl -I
http://edge.files.com/files/videos/2014/09/23/1411461292920e4-720.mp4
HTTP/1.1 200 OK
Date: Thu, 25 Sep 2014 18:26:24 GMT
Content-Type: video/mp4
Last-Modified: Tue, 23 Sep 2014 08:36:11 GMT
ETag: "542130fb-5cd4456"
Age: 5
Content-Length: 97338454
Connection: keep-alive

Thanks !!
Shahzaib

On Thu, Sep 25, 2014 at 7:39 PM, Reinis Rozitis  wrote:

> 3 clients requested for test.mp4 (file size is 4mb) --> nginx --> file not
>> existed (proxy_store) --> varnish --> backend (fetch the file from origin).
>> When nginx proxied these three requests subsequently towards the
>> varnish,, despite of filling 4mb of tmp dir it was filled with 12MB which
>> means nginx is proxying all three requests towards the varnish server and
>> creating tmp files as long as the file is not downloaded. (The method was
>> failed)
>>
>
> That is expected, this setup only “guards” the content server.
>
>
>
>  Now varnish also has a flaw to send subsequent requests for same file
>> towards the nginx i.e
>>
>
> It's not a really flaw but default behaviour (different urls mean
> different content/cachable objects), but of course you can implement your
> own scenario:
>
>
> By adding:
>
> sub vcl_recv {
>set req.url = regsub(req.url, "\?.*", "");
> }
>
> will remove all the the arguments behind ? from the uri when forwarding to
> the content backend.
>
>
> For static content I usually also add something like:
>
> unset req.http.Cookie;
> unset req.http.Accept-Encoding;
> unset req.http.Cache-Control;
>
> to normalise the request and so varnish doesnt try to cache different
> versions of the same object.
>
>
> If you insist on using proxy_store I would probably also add
> proxy_ignore_client_abort on;  ( http://nginx.org/en/docs/http/
> ngx_http_proxy_module.html#proxy_ignore_client_abort ) to the nginx
> configuration. So the requests don't get repeated if the client
> closes/aborts the request early etc.
>
>
> rr
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

Re: Forward single request to upstream server via proxy_store !!

2014-09-25 Thread shahzaib shahzaib
@RR, I've prepared a local environment with the following structure:

client --> nginx (edge) --> varnish --> backend (Origin)

When i tested this method i.e :-

3 clients requested test.mp4 (file size 4MB) --> nginx --> file not
present (proxy_store) --> varnish --> backend (fetch the file from origin).

When nginx proxied these three requests on towards varnish, instead of
filling the tmp dir with 4MB it was filled with 12MB, which means nginx
proxied all three requests to the varnish server and created tmp files for as
long as the file was not yet downloaded. (This method failed.)

Putting varnish in front of nginx, however, solved this issue.

3 clients requested test.mp4 (file size 4MB) --> varnish (proxying all
requests for mp4, jpg) --> nginx (fetch the file from origin).

This time the tmp dir was filled with only 4MB, which means varnish combined
those 3 subsequent requests into 1.

--

Now varnish also has a flaw: it sends subsequent requests for the same file
towards nginx. That is:

The 1st user requested the file http://edge.files.com/videos/test.mp4. While
the first requested file was downloading, a 2nd user requested the same file
but with random seeking: http://edge.files.com/videos/test.mp4?start=33.
Since the request URI changed, varnish saw two different requests for the
same file, and the nginx tmp directory was again filled with 8MB instead of
4, which means nginx downloaded the full file twice. So random seeking will
only work once the file is cached locally; otherwise nginx will keep creating
tmp files for each random seek.

I have two questions now:

1. Is there a way to prevent duplicate downloads for random seeks while the
file is not yet downloaded? Note: we cannot disable the mp4 module.
2. Should nginx in front of varnish never work as expected, or am I doing
something wrong?
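One hedged trade-off for question 1 (not from the thread): make varnish hash only the bare URL, so that test.mp4 and test.mp4?start=33 share a single cache object. A seek issued before the file is cached would then play from the beginning, but the backend fetch happens only once. A VCL 4 sketch:

```vcl
sub vcl_hash {
    # Hash the URL without its query string, plus the host, so seek
    # variants of the same file collapse into a single cached object.
    hash_data(regsub(req.url, "\?.*", ""));
    if (req.http.host) {
        hash_data(req.http.host);
    }
    return (lookup);
}
```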

Following are the existing varnish-in-front-of-nginx configs. Please let me
know if something needs to be fixed:

varnish config :-

backend origin002 {
.host = "127.0.0.1";
.port = "8080";
}

backend origin003 {
.host = "127.0.0.1";
.port = "8080";
}


sub vcl_recv {

if   ( req.http.host == "origin002.files.com" ){
set req.backend_hint = origin002;
} elsif ( req.http.host == "origin003.files.com" ){
set req.backend_hint = origin003;
} elsif ( req.http.host == "origin004.files.com" ){
set req.backend_hint = origin004;
}

}

sub vcl_backend_response {


if (bereq.url ~ "^[^?]*\.(mp4|jpeg|jpg)(\?.*)?$"){
 set beresp.do_stream = true;
return (deliver);
}
set beresp.grace = 1m;

return (deliver);


}

sub vcl_deliver {

}

-
Nginx config :-


server {

listen   127.0.0.1:8080;
server_name  origin002.files.com;
root /var/www/html/tunefiles;
location ~ \.(mp4|jpeg|jpg)$ {
   root   /var/www/html/tunefiles;
mp4;
error_page 404 = @fetch;

}


location ~ \.(php)$ {
proxy_pass http://origin002.files.com:80;
}



location @fetch {
internal;
proxy_max_temp_file_size 0;
proxy_pass http://content.files.com:80$uri;
proxy_store on;
proxy_store_access user:rw group:rw all:r;
root /var/www/html/tunefiles;
}

}


 I can also send the configs that were used for nginx in front of varnish
(which didn't resolve my issue).

BTW, I am using malloc storage instead of file storage in varnish.

Thanks !!

On Wed, Sep 24, 2014 at 6:55 PM, shahzaib shahzaib 
wrote:

> @RR, That's great. Sure it will help me. I am starting to work with it on
> local environment and will get back to you once the progress started :)
>
> Thanks a lot for writing sample config for me !!
>
> On Wed, Sep 24, 2014 at 6:32 PM, Reinis Rozitis  wrote:
>
>> @RR. could you guide me a bit on it or point me to some guide to start
>>> with. I have worked with varnish regarding php caching so i have the basic
>>> knowledge of varnish but i am just not getting on how to make it work with
>>> proxy_store. :(
>>>
>>
>> Depending on your needs (for example SSL) you can put varnish in
>> different places in the setup:
>>
>>
>> If you use SSL (which varnish itself doesn't support) you can use your
>> proxy_store server as an SSL offloader:
>>
>> 1. [client] <- -> [nginx proxy_store server] <- -> [varnish] <- ->
>> [content_server]
>>
> .. in this case when multiple requests land onto nginx proxy_store in case
> the file locally doesnt exist those are forwarded to varnish and combined
> into a single request to the content server.

Re: Forward single request to upstream server via proxy_store !!

2014-09-24 Thread shahzaib shahzaib
@RR, that's great. I'm sure it will help me. I am starting to work with it in
a local environment and will get back to you once I've made progress :)

Thanks a lot for writing sample config for me !!

On Wed, Sep 24, 2014 at 6:32 PM, Reinis Rozitis  wrote:

> @RR. could you guide me a bit on it or point me to some guide to start
>> with. I have worked with varnish regarding php caching so i have the basic
>> knowledge of varnish but i am just not getting on how to make it work with
>> proxy_store. :(
>>
>
> Depending on your needs (for example SSL) you can put varnish in different
> places in the setup:
>
>
> If you use SSL (which varnish itself doesn't support) you can use your
> proxy_store server as an SSL offloader:
>
> 1. [client] <- -> [nginx proxy_store server] <- -> [varnish] <- ->
> [content_server]
>
> .. in this case when multiple requests land onto nginx proxy_store in case
> the file locally doesnt exist those are forwarded to varnish and combined
> into a single request to the content server.
>
> A simplistic/generic nginx config:
>
> location / {
>error_page  404 = @store;
> }
>
> location @store {
>internal;
>proxy_pass   http://imgstore;
>proxy_store  on;
> }
>
>
> varnish config:
>
> backend default {
>.host = "content_server.ip";
> }
> sub vcl_recv {
>set req.backend = default;
> }
>
>
> Obviously add whatever else you need (like forwarded-for headers to pass
> the real client ip, cache expire times etc).
>
>
>
> 2. In case you don't use SSL:
>
> [client]  <- -> [varnish] <- ->  [content_server]
> (optionally you put nginx or some other software like stud or pound on top
> of varnish as SSL offloader (personally I use Shrpx from Spdylay (
> https://github.com/tatsuhiro-t/spdylay ))
>
> Then generic varnish config would look bassically the same:
>
> backend default {
>.host = "content_server.ip";
> }
> sub vcl_recv {
>set req.backend = default;
> }
>
> sub vcl_backend_response {
>set beresp.do_stream = true;
> }
>
>
>
> Hope that helps.
>
>
> rr
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

Re: Forward single request to upstream server via proxy_store !!

2014-09-24 Thread shahzaib shahzaib
@RR, could you guide me a bit on this, or point me to some guide to start
with? I have worked with varnish for PHP caching, so I have basic knowledge
of varnish, but I just can't figure out how to make it work with
proxy_store. :(

On Wed, Sep 24, 2014 at 2:42 AM, Reinis Rozitis  wrote:

> But i cannot switch with proxy_cache because we're mirroring the mp4 files
>> for random seeking using mp4 module and proxy_cache doesn't support random
>> seeking. Is there a way i can use bash script with proxy_store ? I want the
>> following logic to prevent duplicate downloads :-
>>
>
> You can try to put Varnish ( https://www.varnish-cache.org ) between your
> proxy_store and content server. It supports request coalescing.
>
> p.s. a branch of the 3.x tree and the new 4.x even does have stream
> support.
>
> rr
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

Re: Forward single request to upstream server via proxy_store !!

2014-09-23 Thread shahzaib shahzaib
But I cannot switch to proxy_cache because we're mirroring the mp4 files for
random seeking using the mp4 module, and proxy_cache doesn't support random
seeking. Is there a way I can use a bash script with proxy_store? I want the
following logic to prevent duplicate downloads:

1st user:

client (requests test.mp4) --> nginx (file not present) --> check that
tmp.txt does not exist --> create tmp.txt --> download test.mp4 from
origin --> remove tmp.txt

2nd user requesting the same test.mp4:

client (requests test.mp4) --> nginx (file not present) --> tmp.txt already
exists (meaning nginx is already downloading the file) --> redirect the user
to the origin server (keep redirecting users as long as tmp.txt is not
removed)

3rd user requesting the same test.mp4:

client (requests test.mp4) --> nginx (file present) --> serve from the cache.


So tmp.txt plays the main role here, preventing subsequent requests for the
same file, but I have no idea how to implement this with nginx. Could someone
point me in the right direction? :(

Regards.
Shahzaib
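For reference, the proxy_cache route that the replies in this thread point to expresses exactly this hold-and-reuse behaviour natively via proxy_cache_lock, at the cost of losing proxy_store's plain-file mirror. A sketch with illustrative paths, sizes, and a hypothetical origin name:

```nginx
proxy_cache_path /var/cache/nginx/videos keys_zone=videos:10m
                 max_size=50g inactive=7d;

server {
    location /files/ {
        proxy_pass http://origin.example.com;   # hypothetical origin
        proxy_cache videos;
        proxy_cache_valid 200 7d;
        proxy_cache_lock on;             # only one request populates a miss
        proxy_cache_lock_timeout 10s;    # other requests wait up to this long
    }
}
```

The catch, as the poster notes, is that proxy_cache does not combine with the mp4 module's ?start= pseudo-streaming the way proxy_store's on-disk copies do.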

On Tue, Sep 23, 2014 at 9:41 PM, Valentin V. Bartenev 
wrote:

> On Tuesday 23 September 2014 19:34:23 shahzaib shahzaib wrote:
> > @Valentine, is proxy_cache_lock supported with proxy_store ?
>
> No.  But if you're asking, then you're using a wrong tool.
> The proxy_store feature is designed to be very simple and stupid.
>
> To meet your needs you should use the proxy_cache directive
> and its friends.
>
>   wbr, Valentin V. Bartenev
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

Re: Forward single request to upstream server via proxy_store !!

2014-09-23 Thread shahzaib shahzaib
@Valentine, is proxy_cache_lock supported with proxy_store ?

On Tue, Sep 23, 2014 at 7:03 PM, Valentin V. Bartenev 
wrote:

> On Tuesday 23 September 2014 00:06:56 shahzaib shahzaib wrote:
> > Is there any way with nginx that i could put an hold on the subsequent
> > requests and only proxy the single request for same file in order to
> > prevent filling up the tmp folder ? tmp is kept on filling up due to the
> > multiple users are accessing the same file and file is not downloaded
> yet.
> >
> [..]
>
> http://nginx.org/r/proxy_cache_lock
>
>   wbr, Valentin V. Bartenev
>
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

Re: Forward single request to upstream server via proxy_store !!

2014-09-22 Thread shahzaib shahzaib
Is there any way with nginx that I could put a hold on the subsequent
requests and proxy only a single request for the same file, in order to
prevent filling up the tmp folder? tmp keeps filling up because multiple
users are accessing the same file while the file is not yet downloaded.

On Sun, Sep 21, 2014 at 2:05 PM, shahzaib shahzaib 
wrote:

> Hi,
>
>  When the multiple users request for same file on edge server via
> proxy_store and requested file is still not downloaded on the edge server,
> the nginx keeps on proxying those requests towards the origin server due to
> which network port is getting saturated on the edge server and file
> downloading taking 1~2hours. Is there a way that nginx would forward the
> only single request towards the origin server and download the requested
> file while holding back the other users and only serve them when the file
> is successfully downloaded on the edge server ?
>
> This way Incoming port(nload) on edge server will not be saturated !!
>
> Regards.
> Shahzaib
>

Forward single request to upstream server via proxy_store !!

2014-09-21 Thread shahzaib shahzaib
Hi,

 When multiple users request the same file on the edge server via
proxy_store and the requested file has not yet been downloaded to the edge
server, nginx keeps proxying those requests towards the origin server, due to
which the network port on the edge server gets saturated and the file
download takes 1~2 hours. Is there a way for nginx to forward only a single
request to the origin server and download the requested file while holding
back the other users, serving them only once the file has been successfully
downloaded to the edge server?

This way the incoming port (nload) on the edge server will not be saturated!

Regards.
Shahzaib

Re: zero size buf in output !!

2014-09-19 Thread shahzaib shahzaib
Nope, it doesn't !!

On Fri, Sep 19, 2014 at 3:42 PM, Roman Arutyunyan  wrote:

> Will this error appear if you try this request again?
>
> On 17 Sep 2014, at 16:29, shahzaib shahzaib  wrote:
>
> > Well, i again received the same error but its much improvement in time
> frame. If the error was occurring after each 5min, now the same error is
> occurring after 30~50min.
> >
> > The conclusion is, nginx-1.7.4 is not 100% bug free from this issue.
> >
> > 2014/09/17 17:22:48 [alert] 28559#0: *27961 zero size buf in output t:0
> r:0 f:0 0477EE20 0477EE20-0477FE20 
> 0-0 while sending to client, client: 115.167.75.22, server: ldx.files.com,
> request: "GET /files/videos/2014/09/04/140984890338bc7-240.mp4 HTTP/1.1",
> >
> > [root@tw data]# nginx -V
> > nginx version: nginx/1.7.4
> > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
> > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log
> --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
> --lock-path=/var/run/nginx.lock
> --http-client-body-temp-path=/var/cache/nginx/client_temp
> --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx
> --group=nginx --with-http_flv_module --with-http_mp4_module
> > You have mail in /var/spool/mail/root
> >
> >
> > Regards.
> > Shahzaib
> >
> > On Wed, Sep 17, 2014 at 4:25 PM, shahzaib shahzaib <
> shahzaib...@gmail.com> wrote:
> > Hi Maxim,
> >
> >   Upgraded nginx to 1.7.4 and looks like the issue is gone.
> >
> > Regards.
> > Shahzaib
> >
> >
> > On Wed, Aug 27, 2014 at 10:16 PM, Maxim Dounin 
> wrote:
> > Hello!
> >
> > On Wed, Aug 27, 2014 at 08:48:19PM +0500, shahzaib shahzaib wrote:
> >
> > > We're facing following error on edge server with nginx-1.6.1, using
> > > proxy_store on edge.
> > >
> > > 2014/08/27 20:35:05 [alert] 5701#0: *21244 zero size buf in output t:0
> r:0
> > > f:0 02579840 02579840-0257A840
>  0-0
> > > while sending to client, client: 119.160.118.123, server:
> > > storage4.content.com, request: "GET
> > > /files/videos/2013/06/30/137256108550d07-m.mp4 HTTP/1.1", upstream: "
> > > http://82.2.37.87:80/files/videos/2013/06/30/137256108550d07-m.mp4";,
> host: "
> > > storage4.content.com"
> > > 2014/08/27 20:35:28 [alert] 5687#0: *26261 zero size buf in output t:0
> r:0
> > > f:0 04F5F2D0 04F5F2D0-04F602D0
>  0-0
> > > while sending to client, client: 121.52.147.68, server:
> storage9.content.com,
> > > request: "GET /files/videos/2014/04/21/1398060531bb2e3-360.mp4
> HTTP/1.1",
> > > upstream: "
> > > http://9.7.248.180:80/files/videos/2014/04/21/1398060531bb2e3-360.mp4
> ",
> > > host: "storage9.content.com", referrer: "
> > > http://files.com/video/2618018/aashiqui-3-new-songs";
> > >
> > > nginx version: nginx/1.6.1
> > > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
> > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> > > --conf-path=/etc/nginx/nginx.conf
> --error-log-path=/var/log/nginx/error.log
> > > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
> > > --lock-path=/var/run/nginx.lock
> > > --http-client-body-temp-path=/var/cache/nginx/client_temp
> > > --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx
> > > --group=nginx --with-http_flv_module --with-http_mp4_module
> >
> > You may want to try 1.7.4 to see if it helps (there are some
> > potentially related changes in nginx 1.7.3).
> >
> > If it doesn't, providing debug log may be helpful.  See
> > http://wiki.nginx.org/Debugging for more hints.
> >
> > --
> > Maxim Dounin
> > http://nginx.org/
> >
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> >
> >
>
>

Re: Proxy_store downloading half videos !!

2014-09-18 Thread shahzaib shahzaib
The 403 Forbidden error was due to hotlink protection on the origin server.
It has been fixed.
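
The cause reported above (hotlink protection on the origin rejecting the edge's fetch) fits the first-request 403: the origin sees the edge's proxied sub-request without the headers its referrer check expects. A hedged sketch of one possible workaround, not from the thread itself: send a referrer the origin trusts on the proxied fetch. Hostnames and the header value are illustrative.

```nginx
location @fetch {
    internal;
    proxy_pass http://fl008.origin.com$uri;
    # Hypothetical: present a referrer the origin's hotlink check accepts,
    # so the edge's first-time fetch is not answered with a 403.
    proxy_set_header Referer "http://lw3.files.com/";
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    root /var/www/html/tunefiles;
}
```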

On Fri, Sep 19, 2014 at 5:29 AM, Eliezer Croitoru 
wrote:

> I have seen your directives and I am not an nginx expert, but there is
> something like "try" for connections to a proxy server that will first try
> one direction and, if unable to download from there, use the other one.
>
> Take a peek here:
> http://forum.nginx.org/read.php?2,246125,246125
>
> You can define two backends: store and proxy.
> First try store and then proxy.
> (I do hope I am right about the assumption)
>
> Eliezer
>
> On 09/18/2014 08:58 PM, shahzaib shahzaib wrote:
>
>> This issue is fixed, now i am getting another issue. Whenever user
>> requests for new file which is not yet downloaded on the edge server,
>> user gets the 403 forbidden error on browser and on refreshing the
>> browser, the same video started to stream as well as download. Why the
>> proxy_store is showing 403 error on first time ?
>>
>>
>
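Eliezer's "two backends: store and proxy" idea can be sketched with try_files and a named fallback location: serve the locally stored copy when it exists, otherwise fetch (and store) from the origin. A rough sketch only; hostnames and paths follow the config quoted elsewhere in this thread.

```nginx
server {
    listen 80;
    root /var/www/html/tunefiles;

    location ~ \.mp4$ {
        # Serve the stored copy if present; otherwise fall through to the origin.
        try_files $uri @fetch;
    }

    location @fetch {
        proxy_pass http://fl008.origin.com$uri;
        proxy_store on;                            # keep a local replica
        proxy_store_access user:rw group:rw all:r;
    }
}
```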

Re: Proxy_store downloading half videos !!

2014-09-18 Thread shahzaib shahzaib
This issue is fixed; now I am facing another issue. Whenever a user requests
a new file that has not yet been downloaded to the edge server, the user gets
a 403 Forbidden error in the browser, and on refreshing the browser, the same
video streams and downloads fine. Why does proxy_store show a 403 error the
first time?

On Thu, Sep 18, 2014 at 6:45 PM, shahzaib shahzaib 
wrote:

> Looks like it worked when using proxy_pass http://fl008.origin.com:80$uri;.
> Could :80 be the issue?
>
> On Thu, Sep 18, 2014 at 6:43 PM, shahzaib shahzaib 
> wrote:
>
>> >>proxy_pass http://fl008.origin.com$uri;
>> It didn't work instead the error 502 gateway started to show up when
>> proxying the request via proxy_pass.
>>
>> On Thu, Sep 18, 2014 at 6:32 PM, shahzaib shahzaib > > wrote:
>>
>>> Thanks a lot for the solution Roman, i'll get back to you after applying
>>> the fix. :-)
>>>
>>> On Thu, Sep 18, 2014 at 6:29 PM, Roman Arutyunyan 
>>> wrote:
>>>
>>>>
>>>> Try this directive instead of yours to download the entire file from
>>>> the backend
>>>>
>>>> proxy_pass http://fl008.origin.com$uri;
>>>>
>>>>
>>>> On 18 Sep 2014, at 15:25, shahzaib shahzaib 
>>>> wrote:
>>>>
>>>> > Yes, the mp4 modules is enabled on origin as well as edge. Could you
>>>> please help me resolving the issue ?
>>>> >
>>>> > On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan 
>>>> wrote:
>>>> >
>>>> > On 18 Sep 2014, at 14:49, shahzaib shahzaib 
>>>> wrote:
>>>> >
>>>> > > Hi,
>>>> > >
>>>> > >We're using proxy_store on the edge server for replicating
>>>> requested mp4 files and some of our users reported that some of the videos
>>>> are half sized and therefore they are unable to stream whole video file on
>>>> their end (coming from the edge server). On digging into the access_logs of
>>>> nginx, i found the 500 internal server errors for 10~20 videos and on
>>>> checking the size of 500 error videos it was half of the size compare to
>>>> the mirrored video files on the origin. Please check the following error of
>>>> the culprit video link :-
>>>> > >
>>>> > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET
>>>> /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500
>>>> 588 "http://lw3.files.com/files/videos/2014/09/12/"; "Mozilla/4.0
>>>> (compatible; MSIE 8.0; Windows NT 6.0)"
>>>> > >
>>>> > > I'd like to inform that the issue is coming for 40% of the videos.
>>>> > >
>>>> > > error_log :-
>>>> > >
>>>> > > 2014/09/18 15:30:40 [error] 3883#0: *77490
>>>> "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start
>>>> time exceeds file duration, client: 175.110.88.213, server:
>>>> lw3.files.com, request: "GET
>>>> /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 HTTP/1.1"
>>>> > >
>>>> > > You can see the "start time exceeds error" on edge server but the
>>>> video link starting from start=736.8 exists on origin server.
>>>> > >
>>>> > > Nginx config :-
>>>> > >
>>>> > > server {
>>>> > >
>>>> > > listen   80;
>>>> > > server_name lw3.files.com;
>>>> > > root /var/www/html/tunefiles;
>>>> > > location ~ \.(mp4|jpeg|jpg)$ {
>>>> > >root   /var/www/html/tunefiles;
>>>> > > mp4;
>>>> > > error_page 404 = @fetch;
>>>> > >
>>>> > > }
>>>> > >
>>>> > >
>>>> > > location ~ \.(php)$ {
>>>> > > proxy_pass http://fl008.files.net:80;
>>>> > > }
>>>> > >
>>>> > >
>>>> > >
>>>> > > location @fetch {
>>>> > > internal;
>>>> > > proxy_pass http://fl008.origin.com:80;
>>>> > > proxy_storeon;
>>>> > > proxy_store_access user:rw group:rw all:r;
>>>> > > root /var/www/html/tunefiles;
>>>> > > }
>>>> >
>>>> > Do you have the mp4 module enabled at the origin?  If so then you
>>>> have partial mp4
>>>> > downloaded from there and stored locally.  Note proxy_pass without
>>>> URI passes
>>>> > client URIs to the origin keeping the arguments (including “start”).
>>>> >
>>>> >
>>>>
>>>>
>>>
>>>
>>
>

Re: Proxy_store downloading half videos !!

2014-09-18 Thread shahzaib shahzaib
Looks like it worked when using proxy_pass http://fl008.origin.com:80$uri;.
Could :80 be the issue?

On Thu, Sep 18, 2014 at 6:43 PM, shahzaib shahzaib 
wrote:

> >>proxy_pass http://fl008.origin.com$uri;
> It didn't work instead the error 502 gateway started to show up when
> proxying the request via proxy_pass.
>
> On Thu, Sep 18, 2014 at 6:32 PM, shahzaib shahzaib 
> wrote:
>
>> Thanks a lot for the solution Roman, i'll get back to you after applying
>> the fix. :-)
>>
>> On Thu, Sep 18, 2014 at 6:29 PM, Roman Arutyunyan  wrote:
>>
>>>
>>> Try this directive instead of yours to download the entire file from the
>>> backend
>>>
>>> proxy_pass http://fl008.origin.com$uri;
>>>
>>>
>>> On 18 Sep 2014, at 15:25, shahzaib shahzaib 
>>> wrote:
>>>
>>> > Yes, the mp4 modules is enabled on origin as well as edge. Could you
>>> please help me resolving the issue ?
>>> >
>>> > On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan 
>>> wrote:
>>> >
>>> > On 18 Sep 2014, at 14:49, shahzaib shahzaib 
>>> wrote:
>>> >
>>> > > Hi,
>>> > >
>>> > >We're using proxy_store on the edge server for replicating
>>> requested mp4 files and some of our users reported that some of the videos
>>> are half sized and therefore they are unable to stream whole video file on
>>> their end (coming from the edge server). On digging into the access_logs of
>>> nginx, i found the 500 internal server errors for 10~20 videos and on
>>> checking the size of 500 error videos it was half of the size compare to
>>> the mirrored video files on the origin. Please check the following error of
>>> the culprit video link :-
>>> > >
>>> > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET
>>> /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500
>>> 588 "http://lw3.files.com/files/videos/2014/09/12/"; "Mozilla/4.0
>>> (compatible; MSIE 8.0; Windows NT 6.0)"
>>> > >
>>> > > I'd like to inform that the issue is coming for 40% of the videos.
>>> > >
>>> > > error_log :-
>>> > >
>>> > > 2014/09/18 15:30:40 [error] 3883#0: *77490
>>> "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start
>>> time exceeds file duration, client: 175.110.88.213, server:
>>> lw3.files.com, request: "GET
>>> /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 HTTP/1.1"
>>> > >
>>> > > You can see the "start time exceeds error" on edge server but the
>>> video link starting from start=736.8 exists on origin server.
>>> > >
>>> > > Nginx config :-
>>> > >
>>> > > server {
>>> > >
>>> > > listen   80;
>>> > > server_name lw3.files.com;
>>> > > root /var/www/html/tunefiles;
>>> > > location ~ \.(mp4|jpeg|jpg)$ {
>>> > >root   /var/www/html/tunefiles;
>>> > > mp4;
>>> > > error_page 404 = @fetch;
>>> > >
>>> > > }
>>> > >
>>> > >
>>> > > location ~ \.(php)$ {
>>> > > proxy_pass http://fl008.files.net:80;
>>> > > }
>>> > >
>>> > >
>>> > >
>>> > > location @fetch {
>>> > > internal;
>>> > > proxy_pass http://fl008.origin.com:80;
>>> > > proxy_storeon;
>>> > > proxy_store_access user:rw group:rw all:r;
>>> > > root /var/www/html/tunefiles;
>>> > > }
>>> >
>>> > Do you have the mp4 module enabled at the origin?  If so then you have
>>> partial mp4
>>> > downloaded from there and stored locally.  Note proxy_pass without URI
>>> passes
>>> > client URIs to the origin keeping the arguments (including “start”).
>>> >
>>> >
>>>
>>>
>>
>>
>

Re: Proxy_store downloading half videos !!

2014-09-18 Thread shahzaib shahzaib
>>proxy_pass http://fl008.origin.com$uri;
It didn't work; instead, a 502 Bad Gateway error started to show up when
proxying the request via proxy_pass.

On Thu, Sep 18, 2014 at 6:32 PM, shahzaib shahzaib 
wrote:

> Thanks a lot for the solution Roman, i'll get back to you after applying
> the fix. :-)
>
> On Thu, Sep 18, 2014 at 6:29 PM, Roman Arutyunyan  wrote:
>
>>
>> Try this directive instead of yours to download the entire file from the
>> backend
>>
>> proxy_pass http://fl008.origin.com$uri;
>>
>>
>> On 18 Sep 2014, at 15:25, shahzaib shahzaib 
>> wrote:
>>
>> > Yes, the mp4 modules is enabled on origin as well as edge. Could you
>> please help me resolving the issue ?
>> >
>> > On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan 
>> wrote:
>> >
>> > On 18 Sep 2014, at 14:49, shahzaib shahzaib 
>> wrote:
>> >
>> > > Hi,
>> > >
>> > >We're using proxy_store on the edge server for replicating
>> requested mp4 files and some of our users reported that some of the videos
>> are half sized and therefore they are unable to stream whole video file on
>> their end (coming from the edge server). On digging into the access_logs of
>> nginx, i found the 500 internal server errors for 10~20 videos and on
>> checking the size of 500 error videos it was half of the size compare to
>> the mirrored video files on the origin. Please check the following error of
>> the culprit video link :-
>> > >
>> > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET
>> /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500
>> 588 "http://lw3.files.com/files/videos/2014/09/12/"; "Mozilla/4.0
>> (compatible; MSIE 8.0; Windows NT 6.0)"
>> > >
>> > > I'd like to inform that the issue is coming for 40% of the videos.
>> > >
>> > > error_log :-
>> > >
>> > > 2014/09/18 15:30:40 [error] 3883#0: *77490
>> "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start
>> time exceeds file duration, client: 175.110.88.213, server: lw3.files.com,
>> request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8
>> HTTP/1.1"
>> > >
>> > > You can see the "start time exceeds error" on edge server but the
>> video link starting from start=736.8 exists on origin server.
>> > >
>> > > Nginx config :-
>> > >
>> > > server {
>> > >
>> > > listen   80;
>> > > server_name lw3.files.com;
>> > > root /var/www/html/tunefiles;
>> > > location ~ \.(mp4|jpeg|jpg)$ {
>> > >root   /var/www/html/tunefiles;
>> > > mp4;
>> > > error_page 404 = @fetch;
>> > >
>> > > }
>> > >
>> > >
>> > > location ~ \.(php)$ {
>> > > proxy_pass http://fl008.files.net:80;
>> > > }
>> > >
>> > >
>> > >
>> > > location @fetch {
>> > > internal;
>> > > proxy_pass http://fl008.origin.com:80;
>> > > proxy_storeon;
>> > > proxy_store_access user:rw group:rw all:r;
>> > > root /var/www/html/tunefiles;
>> > > }
>> >
>> > Do you have the mp4 module enabled at the origin?  If so then you have
>> partial mp4
>> > downloaded from there and stored locally.  Note proxy_pass without URI
>> passes
>> > client URIs to the origin keeping the arguments (including “start”).
>> >
>> >
>>
>>
>
>

Re: Proxy_store downloading half videos !!

2014-09-18 Thread shahzaib shahzaib
Thanks a lot for the solution, Roman. I'll get back to you after applying
the fix. :-)

On Thu, Sep 18, 2014 at 6:29 PM, Roman Arutyunyan  wrote:

>
> Try this directive instead of yours to download the entire file from the
> backend
>
> proxy_pass http://fl008.origin.com$uri;
>
>
> On 18 Sep 2014, at 15:25, shahzaib shahzaib  wrote:
>
> > Yes, the mp4 modules is enabled on origin as well as edge. Could you
> please help me resolving the issue ?
> >
> > On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan 
> wrote:
> >
> > On 18 Sep 2014, at 14:49, shahzaib shahzaib 
> wrote:
> >
> > > Hi,
> > >
> > >We're using proxy_store on the edge server for replicating
> requested mp4 files and some of our users reported that some of the videos
> are half sized and therefore they are unable to stream whole video file on
> their end (coming from the edge server). On digging into the access_logs of
> nginx, i found the 500 internal server errors for 10~20 videos and on
> checking the size of 500 error videos it was half of the size compare to
> the mirrored video files on the origin. Please check the following error of
> the culprit video link :-
> > >
> > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET
> /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500
> 588 "http://lw3.files.com/files/videos/2014/09/12/"; "Mozilla/4.0
> (compatible; MSIE 8.0; Windows NT 6.0)"
> > >
> > > I'd like to inform that the issue is coming for 40% of the videos.
> > >
> > > error_log :-
> > >
> > > 2014/09/18 15:30:40 [error] 3883#0: *77490
> "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start
> time exceeds file duration, client: 175.110.88.213, server: lw3.files.com,
> request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8
> HTTP/1.1"
> > >
> > > You can see the "start time exceeds error" on edge server but the
> video link starting from start=736.8 exists on origin server.
> > >
> > > Nginx config :-
> > >
> > > server {
> > >
> > > listen   80;
> > > server_name lw3.files.com;
> > > root /var/www/html/tunefiles;
> > > location ~ \.(mp4|jpeg|jpg)$ {
> > >root   /var/www/html/tunefiles;
> > > mp4;
> > > error_page 404 = @fetch;
> > >
> > > }
> > >
> > >
> > > location ~ \.(php)$ {
> > > proxy_pass http://fl008.files.net:80;
> > > }
> > >
> > >
> > >
> > > location @fetch {
> > > internal;
> > > proxy_pass http://fl008.origin.com:80;
> > > proxy_storeon;
> > > proxy_store_access user:rw group:rw all:r;
> > > root /var/www/html/tunefiles;
> > > }
> >
> > Do you have the mp4 module enabled at the origin?  If so then you have
> partial mp4
> > downloaded from there and stored locally.  Note proxy_pass without URI
> passes
> > client URIs to the origin keeping the arguments (including “start”).
> >
> >
>
>

Re: Proxy_store downloading half videos !!

2014-09-18 Thread shahzaib shahzaib
Yes, the mp4 module is enabled on the origin as well as the edge. Could you
please help me resolve the issue?

On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan  wrote:

>
> On 18 Sep 2014, at 14:49, shahzaib shahzaib  wrote:
>
> > Hi,
> >
> >We're using proxy_store on the edge server for replicating requested
> mp4 files and some of our users reported that some of the videos are half
> sized and therefore they are unable to stream whole video file on their end
> (coming from the edge server). On digging into the access_logs of nginx, i
> found the 500 internal server errors for 10~20 videos and on checking the
> size of 500 error videos it was half of the size compare to the mirrored
> video files on the origin. Please check the following error of the culprit
> video link :-
> >
> > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET
> /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500
> 588 "http://lw3.files.com/files/videos/2014/09/12/"; "Mozilla/4.0
> (compatible; MSIE 8.0; Windows NT 6.0)"
> >
> > I'd like to inform that the issue is coming for 40% of the videos.
> >
> > error_log :-
> >
> > 2014/09/18 15:30:40 [error] 3883#0: *77490
> "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start
> time exceeds file duration, client: 175.110.88.213, server: lw3.files.com,
> request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8
> HTTP/1.1"
> >
> > You can see the "start time exceeds error" on edge server but the video
> link starting from start=736.8 exists on origin server.
> >
> > Nginx config :-
> >
> > server {
> >
> > listen   80;
> > server_name lw3.files.com;
> > root /var/www/html/tunefiles;
> > location ~ \.(mp4|jpeg|jpg)$ {
> >root   /var/www/html/tunefiles;
> > mp4;
> > error_page 404 = @fetch;
> >
> > }
> >
> >
> > location ~ \.(php)$ {
> > proxy_pass http://fl008.files.net:80;
> > }
> >
> >
> >
> > location @fetch {
> > internal;
> > proxy_pass http://fl008.origin.com:80;
> > proxy_storeon;
> > proxy_store_access user:rw group:rw all:r;
> > root /var/www/html/tunefiles;
> > }
>
> Do you have the mp4 module enabled at the origin?  If so then you have
> partial mp4
> downloaded from there and stored locally.  Note proxy_pass without URI
> passes
> client URIs to the origin keeping the arguments (including “start”).
>
>

Re: Proxy_store downloading half videos !!

2014-09-18 Thread shahzaib shahzaib
nginx version: nginx/1.7.4
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx
--group=nginx --with-http_flv_module --with-http_mp4_module


On Thu, Sep 18, 2014 at 3:49 PM, shahzaib shahzaib 
wrote:

> Hi,
>
>We're using proxy_store on the edge server for replicating requested
> mp4 files and some of our users reported that some of the videos are half
> sized and therefore they are unable to stream whole video file on their end
> (coming from the edge server). On digging into the access_logs of nginx, i
> found the 500 internal server errors for 10~20 videos and on checking the
> size of 500 error videos it was half of the size compare to the mirrored
> video files on the origin. Please check the following error of the culprit
> video link :-
>
> 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET
> /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500
> 588 "http://lw3.files.com/files/videos/2014/09/12/"; "Mozilla/4.0
> (compatible; MSIE 8.0; Windows NT 6.0)"
>
> I'd like to inform that the issue is coming for 40% of the videos.
>
> error_log :-
>
> 2014/09/18 15:30:40 [error] 3883#0: *77490
> "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start
> time exceeds file duration, client: 175.110.88.213, server: lw3.files.com,
> request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8
> HTTP/1.1"
>
> You can see the "start time exceeds error" on edge server but the video
> link starting from start=736.8 exists on origin server.
>
> Nginx config :-
>
> server {
>
> listen   80;
> server_name lw3.files.com;
> root /var/www/html/tunefiles;
> location ~ \.(mp4|jpeg|jpg)$ {
>root   /var/www/html/tunefiles;
> mp4;
> error_page 404 = @fetch;
>
> }
>
>
> location ~ \.(php)$ {
> proxy_pass http://fl008.files.net:80;
> }
>
>
>
> location @fetch {
> internal;
> proxy_pass http://fl008.origin.com:80;
> proxy_storeon;
> proxy_store_access user:rw group:rw all:r;
> root /var/www/html/tunefiles;
> }
>
>
>
> }
>
> Regards.
> Shahzaib
>

Proxy_store downloading half videos !!

2014-09-18 Thread shahzaib shahzaib
Hi,

   We're using proxy_store on the edge server to replicate requested mp4
files, and some of our users reported that some of the videos are half-sized
and therefore they are unable to stream the whole video file on their end
(coming from the edge server). On digging into nginx's access logs, I found
500 Internal Server Error responses for 10~20 videos, and on checking the
size of those videos they were half the size of the mirrored video files on
the origin. Please check the following access log entry for a culprit video
link:

203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET
/files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500
588 "http://lw3.files.com/files/videos/2014/09/12/"; "Mozilla/4.0
(compatible; MSIE 8.0; Windows NT 6.0)"

Note that the issue affects roughly 40% of the videos.

error_log :-

2014/09/18 15:30:40 [error] 3883#0: *77490
"/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start
time exceeds file duration, client: 175.110.88.213, server: lw3.files.com,
request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8
HTTP/1.1"

You can see the "mp4 start time exceeds file duration" error on the edge
server, but the video starting from start=736.8 exists on the origin server.

Nginx config :-

server {

    listen      80;
    server_name lw3.files.com;
    root        /var/www/html/tunefiles;

    location ~ \.(mp4|jpeg|jpg)$ {
        root /var/www/html/tunefiles;
        mp4;
        error_page 404 = @fetch;
    }

    location ~ \.(php)$ {
        proxy_pass http://fl008.files.net:80;
    }

    location @fetch {
        internal;
        proxy_pass http://fl008.origin.com:80;
        proxy_store on;
        proxy_store_access user:rw group:rw all:r;
        root /var/www/html/tunefiles;
    }
}
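
For reference, the fix that emerged in this thread: proxy_pass without a URI part forwards the client's request line, query arguments included, so a seek like ?start=729 made the origin's mp4 module return a partial file, which proxy_store then saved. Appending $uri drops the query string, and the complete file is fetched and stored. A sketch of the adjusted fetch location, using the hostnames above:

```nginx
location @fetch {
    internal;
    # $uri carries no query string, so the origin serves the full file
    # rather than an mp4 rewritten from ?start=... onwards.
    proxy_pass http://fl008.origin.com:80$uri;
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    root /var/www/html/tunefiles;
}
```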

Regards.
Shahzaib

Re: zero size buf in output !!

2014-09-17 Thread shahzaib shahzaib
Well, I received the same error again, but the time frame has improved a
lot: where the error used to occur every ~5 minutes, it now occurs every
30~50 minutes.

The conclusion is that nginx 1.7.4 is not 100% free of this issue.

2014/09/17 17:22:48 [alert] 28559#0: *27961 zero size buf in output t:0 r:0
f:0 0477EE20 0477EE20-0477FE20  0-0
while sending to client, client: 115.167.75.22, server: ldx.files.com,
request: "GET /files/videos/2014/09/04/140984890338bc7-240.mp4 HTTP/1.1",

[root@tw data]# nginx -V
nginx version: nginx/1.7.4
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx
--group=nginx --with-http_flv_module --with-http_mp4_module


Regards.
Shahzaib

On Wed, Sep 17, 2014 at 4:25 PM, shahzaib shahzaib 
wrote:

> Hi Maxim,
>
>   Upgraded nginx to 1.7.4 and looks like the issue is gone.
>
> Regards.
> Shahzaib
>
>
> On Wed, Aug 27, 2014 at 10:16 PM, Maxim Dounin  wrote:
>
>> Hello!
>>
>> On Wed, Aug 27, 2014 at 08:48:19PM +0500, shahzaib shahzaib wrote:
>>
>> > We're facing following error on edge server with nginx-1.6.1, using
>> > proxy_store on edge.
>> >
>> > 2014/08/27 20:35:05 [alert] 5701#0: *21244 zero size buf in output t:0
>> r:0
>> > f:0 02579840 02579840-0257A840 
>> 0-0
>> > while sending to client, client: 119.160.118.123, server:
>> > storage4.content.com, request: "GET
>> > /files/videos/2013/06/30/137256108550d07-m.mp4 HTTP/1.1", upstream: "
>> > http://82.2.37.87:80/files/videos/2013/06/30/137256108550d07-m.mp4";,
>> host: "
>> > storage4.content.com"
>> > 2014/08/27 20:35:28 [alert] 5687#0: *26261 zero size buf in output t:0
>> r:0
>> > f:0 04F5F2D0 04F5F2D0-04F602D0 
>> 0-0
>> > while sending to client, client: 121.52.147.68, server:
>> storage9.content.com,
>> > request: "GET /files/videos/2014/04/21/1398060531bb2e3-360.mp4
>> HTTP/1.1",
>> > upstream: "
>> > http://9.7.248.180:80/files/videos/2014/04/21/1398060531bb2e3-360.mp4";,
>> > host: "storage9.content.com", referrer: "
>> > http://files.com/video/2618018/aashiqui-3-new-songs";
>> >
>> > nginx version: nginx/1.6.1
>> > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
>> > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
>> > --conf-path=/etc/nginx/nginx.conf
>> --error-log-path=/var/log/nginx/error.log
>> > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
>> > --lock-path=/var/run/nginx.lock
>> > --http-client-body-temp-path=/var/cache/nginx/client_temp
>> > --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx
>> > --group=nginx --with-http_flv_module --with-http_mp4_module
>>
>> You may want to try 1.7.4 to see if it helps (there are some
>> potentially related changes in nginx 1.7.3).
>>
>> If it doesn't, providing debug log may be helpful.  See
>> http://wiki.nginx.org/Debugging for more hints.
>>
>> --
>> Maxim Dounin
>> http://nginx.org/
>>
>>
>
>

Re: zero size buf in output !!

2014-09-17 Thread shahzaib shahzaib
Hi Maxim,

  I upgraded nginx to 1.7.4, and it looks like the issue is gone.

Regards.
Shahzaib

On Wed, Aug 27, 2014 at 10:16 PM, Maxim Dounin  wrote:

> Hello!
>
> On Wed, Aug 27, 2014 at 08:48:19PM +0500, shahzaib shahzaib wrote:
>
> > We're facing following error on edge server with nginx-1.6.1, using
> > proxy_store on edge.
> >
> > 2014/08/27 20:35:05 [alert] 5701#0: *21244 zero size buf in output t:0
> r:0
> > f:0 02579840 02579840-0257A840 
> 0-0
> > while sending to client, client: 119.160.118.123, server:
> > storage4.content.com, request: "GET
> > /files/videos/2013/06/30/137256108550d07-m.mp4 HTTP/1.1", upstream: "
> > http://82.2.37.87:80/files/videos/2013/06/30/137256108550d07-m.mp4";,
> host: "
> > storage4.content.com"
> > 2014/08/27 20:35:28 [alert] 5687#0: *26261 zero size buf in output t:0
> r:0
> > f:0 04F5F2D0 04F5F2D0-04F602D0 
> 0-0
> > while sending to client, client: 121.52.147.68, server:
> storage9.content.com,
> > request: "GET /files/videos/2014/04/21/1398060531bb2e3-360.mp4 HTTP/1.1",
> > upstream: "
> > http://9.7.248.180:80/files/videos/2014/04/21/1398060531bb2e3-360.mp4";,
> > host: "storage9.content.com", referrer: "
> > http://files.com/video/2618018/aashiqui-3-new-songs";
> >
> > nginx version: nginx/1.6.1
> > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
> > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> > --conf-path=/etc/nginx/nginx.conf
> --error-log-path=/var/log/nginx/error.log
> > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
> > --lock-path=/var/run/nginx.lock
> > --http-client-body-temp-path=/var/cache/nginx/client_temp
> > --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx
> > --group=nginx --with-http_flv_module --with-http_mp4_module
>
> You may want to try 1.7.4 to see if it helps (there are some
> potentially related changes in nginx 1.7.3).
>
> If it doesn't, providing debug log may be helpful.  See
> http://wiki.nginx.org/Debugging for more hints.
>
> --
> Maxim Dounin
> http://nginx.org/
>
>

zero size buf in output !!

2014-08-27 Thread shahzaib shahzaib
We're facing the following error on an edge server with nginx 1.6.1, using
proxy_store on the edge.



2014/08/27 20:35:05 [alert] 5701#0: *21244 zero size buf in output t:0 r:0
f:0 02579840 02579840-0257A840  0-0
while sending to client, client: 119.160.118.123, server:
storage4.content.com, request: "GET
/files/videos/2013/06/30/137256108550d07-m.mp4 HTTP/1.1", upstream: "
http://82.2.37.87:80/files/videos/2013/06/30/137256108550d07-m.mp4";, host: "
storage4.content.com"
2014/08/27 20:35:28 [alert] 5687#0: *26261 zero size buf in output t:0 r:0
f:0 04F5F2D0 04F5F2D0-04F602D0  0-0
while sending to client, client: 121.52.147.68, server: storage9.content.com,
request: "GET /files/videos/2014/04/21/1398060531bb2e3-360.mp4 HTTP/1.1",
upstream: "
http://9.7.248.180:80/files/videos/2014/04/21/1398060531bb2e3-360.mp4";,
host: "storage9.content.com", referrer: "
http://files.com/video/2618018/aashiqui-3-new-songs";

nginx version: nginx/1.6.1
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx
--group=nginx --with-http_flv_module --with-http_mp4_module

Regards.
Shahzaib

Combining proxy_store and proxy_cache values !!

2014-08-27 Thread shahzaib shahzaib
Hi,

  I am using proxy_store on the edge server to replicate mp4 files from the
origin when a user requests a video file. The reason I am not using
proxy_cache is that when users seek through an mp4 video, proxy_cache keeps
downloading the whole file again under a different cache key for each seek
offset, e.g. when multiple users access a single video at different offsets:

http://file.video.com?start=32
http://file.video.com?start=54

The problem with proxy_store is that it is not as flexible as proxy_cache,
which has several tunables; the most useful for us is proxy_cache_min_uses.

I need proxy_store to check for the requested file and, on the first
request, serve it from the origin without storing the whole file locally.
Is there an option like proxy_cache_min_uses for proxy_store? If not, is
there another way to achieve this?
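For reference, a minimal sketch of the proxy_store setup described above (hostnames and paths are placeholders, not our real config); note that proxy_store itself has no counterpart to proxy_cache_min_uses:

```nginx
# Hypothetical edge config: serve the local copy when present,
# otherwise fetch from the origin and store the complete file.
location ~ \.mp4$ {
    root /var/www/html/files;
    mp4;                                   # seeking works on the stored local file
    try_files $uri @origin;
}

location @origin {
    proxy_pass http://origin.example.com;  # placeholder origin name
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    # temp path should be on the same filesystem as the root
    proxy_temp_path /var/tmp/nginx;
}
```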

Regards.
Shahzaib

Re: Download full mp4 file with proxy_cache or proxy_store !!

2014-07-25 Thread shahzaib shahzaib
mp4 seeking is filling up the disk rapidly on the edge server that uses
proxy_cache, and incoming bandwidth is consistently higher than outgoing
bandwidth (per nload). Presumably people are seeking within mp4 files and
the full videos are being downloaded again and again.

How can I handle mp4 seeking on the edge server? Will proxy_store resolve
the issue? I really need to find a solution.

By the way, the nginx version is 1.6.
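If the goal is only to stop each ?start= offset from creating a new cache entry, the cache key can drop the query string; a sketch (zone and upstream names are illustrative), with the caveat noted elsewhere in this thread that the mp4 module still cannot seek inside a cache entry:

```nginx
# Hypothetical: map every seek request for the same file onto one
# cached object by excluding $args from the key.  This saves disk,
# but does not by itself make ?start= seeking work from the cache.
location ~ \.mp4$ {
    proxy_cache static;
    proxy_cache_key $scheme$proxy_host$uri;  # default key also appends $is_args$args
    proxy_pass http://origin.example.com;    # placeholder
}
```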

Regards.



On Mon, Jun 23, 2014 at 11:06 PM, shahzaib shahzaib 
wrote:

> >> You can use proxy_store with the mp4 module.
> So, proxy_store is able to download whole mp4 file once and than server
> that file locally without fetching each time from the origin if users seek
> through the video ?
>
>
> On Mon, Jun 23, 2014 at 7:43 PM, Roman Arutyunyan  wrote:
>
>>
>> On 23 Jun 2014, at 17:15, itpp2012  wrote:
>>
>> > Roman Arutyunyan Wrote:
>> > ---
>> >> Moreover the mp4 module does not work over proxy cache.  That means
>> >> even if you fix the cache key issue
>> >> mp4 seeking will not work.  You need to have a local mp4 file to be
>> >> able to seek mp4 like that.
>> >
>> > Hmm, what about a hack, if the file is cached keep a link to the cached
>> file
>> > and its original name, if the next request matches a cached file and its
>> > original name and a seek is requested then pass the cache via its
>> original
>> > name to allow seeking on the local (but cached) file.
>>
>> You can use proxy_store with the mp4 module.
>>
>> Having a link to a nginx cache file is wrong since cache file has
>> internal header and
>> HTTP headers.  Cached mp4 entry is not a valid mp4 meaning you can’t play
>> it directly
>> without stripping headers.
>>
>
>

Re: Proxy_pass Directive !!

2014-07-20 Thread shahzaib shahzaib
>>1.2.3.4 -> request xx.mp4 -> edge 5.6.7.8 (I don't have that file) -> send
client address of origin and tell client to re-initiate file request with
origin address.

@itpp, you're always the light of hope in darkness :-). That's exactly the
solution I need. A rewrite is not an option in our setup because we're using
BIND's view directive, where the same domain test.com resolves to either the
edge or the origin server depending on the client's IP, so I cannot rewrite
test.com back to test.com. You mentioned the HTTP-EQUIV="REFRESH" approach;
is that method acceptable as well, and could you tell me how to use it with
the origin IP in nginx, so the client resends the request to the origin
instead of the edge?




On Sun, Jul 20, 2014 at 12:15 AM, Payam Chychi  wrote:

>  Use a resirect, keep it clean, simple, and compliant
>
> Why waste reaources when you dont have to?
>
> --
> Payam Chychi
> Network Engineer / Security Specialist
>
> On Saturday, July 19, 2014 at 10:47 AM, Anoop Alias wrote:
>
>
>
>
> The Proxying server does not download the entire file ;save it to disk and
> then serve from that .
>
> The proxy simply buffers the content (which is config manageable) and
> serve the end user (browser ) .So the proxy will not face a high disk i/o
> load like the origin ..
>
>
>
> --
> *Anoop P Alias*
> GNUSYS 
>
>
>
>

Re: Proxy_pass Directive !!

2014-07-19 Thread shahzaib shahzaib
>>If nginx needs to proxy_pass, then before nginx sends content to client,
nginx asks upstream for content, and upstream sends content to nginx. So
the full file also goes from upstream to nginx.

So both servers' I/O will be used if the file requested from the upstream
is 720p.mp4?

Proxy_pass Directive !!

2014-07-19 Thread shahzaib shahzaib
I am confused about the proxy_pass directive. Suppose I need to serve an
mp4 file from the origin server via the proxy_pass directive on the edge
server: whose resources (I/O, bandwidth, RAM) will be used, the edge's or
the origin's? The topology for serving the mp4 file is :-

client (request mp4 file) --> edge(ip is 1.2.3.4 so don't serve it locally
and forward it to origin) --> origin (serves the requested mp4 file).

Now, when the origin serves that mp4 file, does the file first go to the
edge server, with the client then served via the edge proxy? Also, since
mp4 files are large, I am curious whose disk I/O is used to serve the file,
the edge's or the origin's?
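The setup in question, sketched as nginx config (all names are placeholders):

```nginx
# Hypothetical edge vhost: requests are forwarded upstream, so the
# response body traverses origin -> edge -> client.
server {
    listen 80;
    server_name edge.example.com;              # placeholder

    location ~ \.mp4$ {
        proxy_pass http://origin.example.com;  # placeholder origin
        proxy_buffering on;                    # edge buffers to RAM/temp files
    }
}
```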

Thanks
Shahzaib

Re: proxy_cache not serving file from edge server !!

2014-07-01 Thread shahzaib shahzaib
Our caching method is :-

client ---> origin ---> edge.


On Tue, Jul 1, 2014 at 4:57 PM, shahzaib shahzaib 
wrote:

> We've an origin and edge server with nginx-1.6 . Origin web-server(Located
> in U.S) is configured with nginx_geo_module and edge(Local ISP) is
> configured with proxy_cache in order to cache files from origin server and
> serve from their lately. We're using following method for caching with
> proxy_cache :-
>
> 1. client (1.1.1.1) sends mp4 request to origin webserver and geo_module
> in origin checks, if the ip is 1.1.1.1 then pass that client to the edge
> server using proxy_pass.
>
> 2. Edge, checks if the file is in proxy_cache than it should serve the
> file locally and if file is not in proxy_cache, it'll pass back the request
> to origin server and client will be served from origin server as well as
> requested file will also be cached in local server, so next time the edge
> will not have to pass request again to origin server and serve the same
> file via locally.
>
> But, looks like our caching is not working as expected. Our ISP is
> complaining that, whenever edge server serves the file, instead of serving
> that file to local client (1.1.1.1) it serves the file back to origin
> server(U.S) and all outgoing bandwidth is going back to U.S instead of
> local clients (Offcourse bandwidth not being saved).
>
> So i want to ask, if the origin server is passing request to edge server,
> the cached file must be served locally but the request going back to the
> origin server even the cache status: HIT. Following are my configs :-
>
> ORIGIN :-
>
> geo $TW {
>   default 0;
> 1.1.1.1 1;
>
> }
>
>
>
> server {
> listen  80;
> server_name  origin.files.com origin.gear.net  origin.gear.com;
> location / {
> root   /var/www/html/files;
> index index.html index.htm index.php;
>
> }
>
>
> location ~ \.(mp4|jpg)$ {
>
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header X-Forwarded-For
> $proxy_add_x_forwarded_for;
> proxy_set_header Host $http_host;
> if ($TW) {
> proxy_pass http://tw002.edge.com:80;
> }
>  mp4;
> root /var/www/html/files;
>
>
> expires 7d;
> valid_referers none blocked  video.pk *.video.pk blog.video.pk *.
> facebook.com *.twitter.com *.files.com *.gear.net video.tv *.video.tv
> videomedia.tv www.videomedia.tv embed.videomedia.tv;
> if ($invalid_referer) {
> return   403;
> }
> }
>
>  # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
> location ~ \.php$ {
> root /var/www/html/files;
> fastcgi_pass   127.0.0.1:9000;
>fastcgi_index  index.php;
> fastcgi_param  SCRIPT_FILENAME
> $document_root$fastcgi_script_name;
> includefastcgi_params;
> }
>
> location ~ /\.ht {
> deny  all;
> }
> }
>
> EDGE :-
>
> #proxy_ignore_headers "Set-Cookie";
> proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=static:100m
> loader_threshold=200 loader_files=500 inactive=1d
> max_size=62g;
>
>
> server {
>
> listen   80;
> server_name  tw002.edge.com;
> root /var/www/html/files;
> location ~ \.(mp4|jpeg|jpg)$ {
>root   /var/www/html/files;
> mp4;
> try_files $uri @getfrom_origin;
>
> }
>
>
> location @getfrom_origin {
> proxy_pass http://origin.files.com:80;
> #   proxy_cache_valid 200 302   60m;
> proxy_cache_valid  15d;
> proxy_cache static;
> proxy_cache_min_uses 1;
> }
>
>
>
> }
>
> Help will be highly appreciated.
>

proxy_cache not serving file from edge server !!

2014-07-01 Thread shahzaib shahzaib
We have an origin and an edge server, both running nginx-1.6. The origin
web server (located in the U.S.) is configured with the nginx geo module,
and the edge (at a local ISP) is configured with proxy_cache to cache files
from the origin and serve them locally thereafter. We're using the
following method for caching with proxy_cache :-

1. A client (1.1.1.1) sends an mp4 request to the origin web server; the
geo module on the origin checks the IP and, if it is 1.1.1.1, passes the
request to the edge server using proxy_pass.

2. The edge checks whether the file is in proxy_cache. If it is, the edge
serves the file locally; if not, it passes the request back to the origin,
the client is served from the origin, and the requested file is also cached
on the edge, so next time the edge does not have to pass the request to the
origin again and can serve the same file locally.

But our caching does not seem to be working as expected. Our ISP is
complaining that whenever the edge server serves a file, instead of
delivering it to the local client (1.1.1.1) it sends it back to the origin
server in the U.S., so all outgoing bandwidth goes back to the U.S. instead
of to local clients (and of course no bandwidth is saved).

So my question is: if the origin passes the request to the edge, the
cached file should be served locally, yet the traffic goes back to the
origin server even when the cache status is HIT. My configs are below :-

ORIGIN :-

geo $TW {
  default 0;
1.1.1.1 1;

}



server {
listen  80;
server_name  origin.files.com origin.gear.net  origin.gear.com;
location / {
root   /var/www/html/files;
index index.html index.htm index.php;

}


location ~ \.(mp4|jpg)$ {

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
if ($TW) {
proxy_pass http://tw002.edge.com:80;
}
 mp4;
root /var/www/html/files;


expires 7d;
valid_referers none blocked  video.pk *.video.pk blog.video.pk *.
facebook.com *.twitter.com *.files.com *.gear.net video.tv *.video.tv
videomedia.tv www.videomedia.tv embed.videomedia.tv;
if ($invalid_referer) {
return   403;
}
}

 # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
root /var/www/html/files;
fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
fastcgi_param  SCRIPT_FILENAME
$document_root$fastcgi_script_name;
includefastcgi_params;
}

location ~ /\.ht {
deny  all;
}
}

EDGE :-

#proxy_ignore_headers "Set-Cookie";
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=static:100m
loader_threshold=200 loader_files=500 inactive=1d
max_size=62g;


server {

listen   80;
server_name  tw002.edge.com;
root /var/www/html/files;
location ~ \.(mp4|jpeg|jpg)$ {
        root   /var/www/html/files;
mp4;
try_files $uri @getfrom_origin;

}


location @getfrom_origin {
proxy_pass http://origin.files.com:80;
#   proxy_cache_valid 200 302   60m;
proxy_cache_valid  15d;
proxy_cache static;
proxy_cache_min_uses 1;
}



}

Help will be highly appreciated.

Re: Download full mp4 file with proxy_cache or proxy_store !!

2014-06-23 Thread shahzaib shahzaib
>> You can use proxy_store with the mp4 module.
So proxy_store is able to download the whole mp4 file once and then serve
that file locally, without fetching from the origin each time, even when
users seek through the video?


On Mon, Jun 23, 2014 at 7:43 PM, Roman Arutyunyan  wrote:

>
> On 23 Jun 2014, at 17:15, itpp2012  wrote:
>
> > Roman Arutyunyan Wrote:
> > ---
> >> Moreover the mp4 module does not work over proxy cache.  That means
> >> even if you fix the cache key issue
> >> mp4 seeking will not work.  You need to have a local mp4 file to be
> >> able to seek mp4 like that.
> >
> > Hmm, what about a hack, if the file is cached keep a link to the cached
> file
> > and its original name, if the next request matches a cached file and its
> > original name and a seek is requested then pass the cache via its
> original
> > name to allow seeking on the local (but cached) file.
>
> You can use proxy_store with the mp4 module.
>
> Having a link to a nginx cache file is wrong since cache file has internal
> header and
> HTTP headers.  Cached mp4 entry is not a valid mp4 meaning you can’t play
> it directly
> without stripping headers.
>

Re: Download full mp4 file with proxy_cache or proxy_store !!

2014-06-23 Thread shahzaib shahzaib
>>Hmm, what about a hack, if the file is cached keep a link to the cached
file
and its original name, if the next request matches a cached file and its
original name and a seek is requested then pass the cache via its original
name to allow seeking on the local (but cached) file.

That means I would need double the storage: one copy for files cached via
proxy_cache and another for local files synced via rsync.


On Mon, Jun 23, 2014 at 6:15 PM, itpp2012  wrote:

> Roman Arutyunyan Wrote:
> ---
> > Moreover the mp4 module does not work over proxy cache.  That means
> > even if you fix the cache key issue
> > mp4 seeking will not work.  You need to have a local mp4 file to be
> > able to seek mp4 like that.
>
> Hmm, what about a hack, if the file is cached keep a link to the cached
> file
> and its original name, if the next request matches a cached file and its
> original name and a seek is requested then pass the cache via its original
> name to allow seeking on the local (but cached) file.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,251037,251108#msg-251108
>
>

Re: Download full mp4 file with proxy_cache or proxy_store !!

2014-06-23 Thread shahzaib shahzaib
@Roman, thanks for the reply.

>> your second request has a different cache key, so the
file is downloaded again.

So if a user seeks through the video, e.g. http://url/test.mp4?start=99,
is the whole file downloaded again, or only the part from 99 seconds
onward? If the whole file is downloaded each time, does nginx support
fetching just the remainder from the seek point instead of the whole file?
If nginx cannot do this, is rsync the only solution?




On Mon, Jun 23, 2014 at 5:22 PM, Roman Arutyunyan  wrote:

>
> On 19 Jun 2014, at 10:12, shahzaib shahzaib  wrote:
>
> >
> >   we're using two servers (one proxy and one backend). Proxy server
> is using proxy_cache to cache mp4 files from backend server and working
> fine. When i stream a full video from cache, the header response gives me
> the cache-status: HIT but whenever i seek the mp4 file i.e
> http://url/test.mp4?start=33 , the Cache-status changes to : MISS . Does
> that mean, the proxy server is again downloading the same file after the 33
> seconds ?
>
> Since default proxy_cache_key has $args in it, your second request has a
> different cache key, so the
> file is downloaded again.
>
> Moreover the mp4 module does not work over proxy cache.  That means even
> if you fix the cache key issue
> mp4 seeking will not work.  You need to have a local mp4 file to be able
> to seek mp4 like that.
>
>

Re: Download full mp4 file with proxy_cache or proxy_store !!

2014-06-20 Thread shahzaib shahzaib
@Lukas, we're using nginx-1.6, and byte-range support is already enabled
by default (I assume). Below is the curl request :-

 curl -H Range:bytes=16- -I
http://videos.files.com/files/videos/2014/06/20/14032606291de19-360.mp4
HTTP/1.1 206 Partial Content
Server: nginx
Date: Fri, 20 Jun 2014 13:36:05 GMT
Content-Type: video/mp4
Content-Length: 25446010
Connection: keep-alive
Last-Modified: Fri, 20 Jun 2014 11:04:11 GMT
ETag: "53a4152b-184468a"
Expires: Fri, 27 Jun 2014 13:36:05 GMT
Cache-Control: max-age=604800
X-Cache-Status: HIT
Content-Range: bytes 16-25446025/25446026

Could you tell me how to check with curl whether nginx downloads the whole
file each time a user seeks the video via the mp4 pseudo-streaming module,
e.g. http://url/files/videos/test-360.mp4?start=39 ?
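One way to observe this with curl is to have nginx report the cache status on every response (the response above already carries such a header); a MISS on each new ?start= offset indicates a fresh upstream fetch. A minimal sketch:

```nginx
# Hypothetical: expose proxy_cache status in a header so `curl -I`
# on .../test-360.mp4?start=39 shows HIT or MISS for that exact URL.
add_header X-Cache-Status $upstream_cache_status;
```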

I am new to proxy_cache and confused about this behavior.

I know rsync is a better solution, but it cannot cache videos on the fly;
it would have to run on a schedule during off-peak hours. We want to cache
only videos that are accessed at least 10 times, and nginx handles that
well with the proxy_cache_min_uses directive.
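The "accessed 10 times" policy mentioned above maps onto proxy_cache_min_uses; a minimal sketch (zone and upstream names are placeholders):

```nginx
# Hypothetical: only start caching an mp4 after 10 requests for it.
location ~ \.mp4$ {
    proxy_cache static;                    # zone defined via proxy_cache_path
    proxy_cache_min_uses 10;
    proxy_pass http://origin.example.com;  # placeholder
}
```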


On Thu, Jun 19, 2014 at 8:36 PM, Lukas Tribus  wrote:

> > we're using two servers (one proxy and one backend). Proxy server
> > is using proxy_cache to cache mp4 files from backend server and working
> > fine. When i stream a full video from cache, the header response gives
> > me the cache-status: HIT but whenever i seek the mp4 file i.e
> > http://url/test.mp4?start=33 , the Cache-status changes to : MISS .
> > Does that mean, the proxy server is again downloading the same file
> > after the 33 seconds ?
> >
> > Can't i use nginx proxy_cache to download whole mp4 file and and than
> > seek from it instead of fetching the file again and again ? Does
> > proxy_store has this functionality if not proxy_cache ?
>
> You would not have this problem with local files (rsync'ing them to your
> server, as was previsouly suggested in the other thread).
>
>
> What nginx release are you using? You probably need at least 1.5.13 as
> per:
> http://mailman.nginx.org/pipermail/nginx/2014-June/044118.html
>
>
>
>

Re: using 2000+ ip prefixes in nginx geo module !!

2014-06-19 Thread shahzaib shahzaib
It looks like I have found the issue. Any request coming from an IP listed
in geo.conf is forwarded to a domain whose IP resolves to 110.93.X.X. So
when a request arrives from 110.93.X.X itself, nginx ends up trying to
proxy_pass it back to 110.93.X.X, fails, and returns the bad gateway
error.
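If that diagnosis is right, the loop can be avoided by carving the edge's own address out of the matched range; a sketch (the /32 below is a made-up placeholder for the edge's IP):

```nginx
# Hypothetical: the geo module picks the most specific match, so a /32
# exception keeps the edge from proxying requests back to itself.
geo $TW {
    default          0;
    110.93.192.0/18  1;   # ISP client range
    110.93.192.10/32 0;   # edge server's own IP (placeholder)
}
```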


On Fri, Jun 20, 2014 at 9:57 AM, shahzaib shahzaib 
wrote:

> Issue will only resolve once i remove 110.93.192.0/18 US; from geo.conf.
>
>
> On Fri, Jun 20, 2014 at 9:55 AM, shahzaib shahzaib 
> wrote:
>
>> I removed /24 on per your suggestion and also used different code for
>> override but the issue persists. Modified geo.conf :-
>>
>> 39.49.59.0/24 PK;
>> 110.93.192.0/18 US;
>>
>> 117.20.16.0/20 TW;
>> 119.63.128.0/20 TW;
>> 202.163.104.6/32 ARY;
>> 203.124.63.0/24 CM;
>> 221.132.112.0/21 TW;
>>
>> 110.93.192.0/24 TW; is not added now.
>>
>>
>> On Fri, Jun 20, 2014 at 1:12 AM, Steve Wilson > > wrote:
>>
>>>  These 2 overlap
>>>
>>> 110.93.192.0/24 TW;
>>> 110.93.192.0/18 TW;
>>>
>>> The /24 is within the /18. In this instance you want to remove the /24.
>>>
>>> It might be worth investigating if you've got any others that overlap. I
>>> think you can probably override with a different country code but using the
>>> same makes no sense.
>>>
>>> Steve.
>>>
>>>
>>> On 19/06/14 21:06, shahzaib shahzaib wrote:
>>>
>>>For testing purpose, i have added only few prefixes :-
>>>
>>>  geo {
>>>  default 0;
>>>  include geo.conf;
>>> }
>>>
>>>  geo.conf
>>>
>>> 39.49.59.0/24 PK;
>>> 110.93.192.0/24 TW;
>>> 110.93.192.0/18 TW;
>>> 117.20.16.0/20 TW;
>>> 119.63.128.0/20 TW;
>>> 202.163.104.6/32 ARY;
>>> 203.124.63.0/24 CM;
>>> 221.132.112.0/21 TW;
>>>
>>>
>>>  Now, whenever some ip from the list send request, nginx reply with
>>> gateway timeout :-
>>>
>>> curl -I
>>> http://files.com/files/videos/2014/06/10/14023918368e3411-240.mp4
>>> HTTP/1.1 504 Gateway Time-out
>>> Server: nginx
>>> Date: Thu, 19 Jun 2014 19:59:50 GMT
>>> Content-Type: text/html
>>> Content-Length: 176
>>> Connection: keep-alive
>>>
>>>  In order to resolve this error, i have to manually remove a network
>>> from the file which is 110.93.192.0/18 TW;
>>>
>>>  What so suspicious with this prefix 110.93.192.0/18 TW ? Why it is
>>> causing to crash every other requests ?
>>>
>>>
>>>
>>>  On Fri, Jun 20, 2014 at 12:07 AM, Jonathan Matthews <
>>> cont...@jpluscplusm.com> wrote:
>>>
>>>> On 19 June 2014 19:59, shahzaib shahzaib  wrote:
>>>> > We've added 2000+ ip prefixes in a file "geo.conf" included in nginx
>>>> vhost
>>>> > by using ngx-http_geo_module and received the following warning :-
>>>> >
>>>> > 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22
>>>> ",
>>>> > value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40
>>>>
>>>>  What makes you think that this error message is incorrect?
>>>> If it's correct and you have a duplicate entry, resolving the problem
>>>> should be pretty simple ...
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>

Re: using 2000+ ip prefixes in nginx geo module !!

2014-06-19 Thread shahzaib shahzaib
The issue only resolves once I remove "110.93.192.0/18 US;" from geo.conf.


On Fri, Jun 20, 2014 at 9:55 AM, shahzaib shahzaib 
wrote:

> I removed /24 on per your suggestion and also used different code for
> override but the issue persists. Modified geo.conf :-
>
> 39.49.59.0/24 PK;
> 110.93.192.0/18 US;
>
> 117.20.16.0/20 TW;
> 119.63.128.0/20 TW;
> 202.163.104.6/32 ARY;
> 203.124.63.0/24 CM;
> 221.132.112.0/21 TW;
>
> 110.93.192.0/24 TW; is not added now.
>
>
> On Fri, Jun 20, 2014 at 1:12 AM, Steve Wilson 
> wrote:
>
>>  These 2 overlap
>>
>> 110.93.192.0/24 TW;
>> 110.93.192.0/18 TW;
>>
>> The /24 is within the /18. In this instance you want to remove the /24.
>>
>> It might be worth investigating if you've got any others that overlap. I
>> think you can probably override with a different country code but using the
>> same makes no sense.
>>
>> Steve.
>>
>>
>> On 19/06/14 21:06, shahzaib shahzaib wrote:
>>
>>For testing purpose, i have added only few prefixes :-
>>
>>  geo {
>>  default 0;
>>  include geo.conf;
>> }
>>
>>  geo.conf
>>
>> 39.49.59.0/24 PK;
>> 110.93.192.0/24 TW;
>> 110.93.192.0/18 TW;
>> 117.20.16.0/20 TW;
>> 119.63.128.0/20 TW;
>> 202.163.104.6/32 ARY;
>> 203.124.63.0/24 CM;
>> 221.132.112.0/21 TW;
>>
>>
>>  Now, whenever some ip from the list send request, nginx reply with
>> gateway timeout :-
>>
>> curl -I
>> http://files.com/files/videos/2014/06/10/14023918368e3411-240.mp4
>> HTTP/1.1 504 Gateway Time-out
>> Server: nginx
>> Date: Thu, 19 Jun 2014 19:59:50 GMT
>> Content-Type: text/html
>> Content-Length: 176
>> Connection: keep-alive
>>
>>  In order to resolve this error, i have to manually remove a network from
>> the file which is 110.93.192.0/18 TW;
>>
>>  What so suspicious with this prefix 110.93.192.0/18 TW ? Why it is
>> causing to crash every other requests ?
>>
>>
>>
>>  On Fri, Jun 20, 2014 at 12:07 AM, Jonathan Matthews <
>> cont...@jpluscplusm.com> wrote:
>>
>>> On 19 June 2014 19:59, shahzaib shahzaib  wrote:
>>> > We've added 2000+ ip prefixes in a file "geo.conf" included in nginx
>>> vhost
>>> > by using ngx-http_geo_module and received the following warning :-
>>> >
>>> > 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22",
>>> > value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40
>>>
>>>  What makes you think that this error message is incorrect?
>>> If it's correct and you have a duplicate entry, resolving the problem
>>> should be pretty simple ...
>>>
>>>
>>
>>
>>
>>
>>
>>
>>
>
>

Re: using 2000+ ip prefixes in nginx geo module !!

2014-06-19 Thread shahzaib shahzaib
I removed the /24 as per your suggestion and also used a different country
code for the override, but the issue persists. Modified geo.conf :-

39.49.59.0/24 PK;
110.93.192.0/18 US;
117.20.16.0/20 TW;
119.63.128.0/20 TW;
202.163.104.6/32 ARY;
203.124.63.0/24 CM;
221.132.112.0/21 TW;

110.93.192.0/24 TW; is not added now.


On Fri, Jun 20, 2014 at 1:12 AM, Steve Wilson 
wrote:

>  These 2 overlap
>
> 110.93.192.0/24 TW;
> 110.93.192.0/18 TW;
>
> The /24 is within the /18. In this instance you want to remove the /24.
>
> It might be worth investigating if you've got any others that overlap. I
> think you can probably override with a different country code but using the
> same makes no sense.
>
> Steve.
>
>
> On 19/06/14 21:06, shahzaib shahzaib wrote:
>
>For testing purpose, i have added only few prefixes :-
>
>  geo {
>  default 0;
>  include geo.conf;
> }
>
>  geo.conf
>
> 39.49.59.0/24 PK;
> 110.93.192.0/24 TW;
> 110.93.192.0/18 TW;
> 117.20.16.0/20 TW;
> 119.63.128.0/20 TW;
> 202.163.104.6/32 ARY;
> 203.124.63.0/24 CM;
> 221.132.112.0/21 TW;
>
>
>  Now, whenever some ip from the list send request, nginx reply with
> gateway timeout :-
>
> curl -I  http://files.com/files/videos/2014/06/10/14023918368e3411-240.mp4
> HTTP/1.1 504 Gateway Time-out
> Server: nginx
> Date: Thu, 19 Jun 2014 19:59:50 GMT
> Content-Type: text/html
> Content-Length: 176
> Connection: keep-alive
>
>  In order to resolve this error, i have to manually remove a network from
> the file which is 110.93.192.0/18 TW;
>
>  What so suspicious with this prefix 110.93.192.0/18 TW ? Why it is
> causing to crash every other requests ?
>
>
>
>  On Fri, Jun 20, 2014 at 12:07 AM, Jonathan Matthews <
> cont...@jpluscplusm.com> wrote:
>
>> On 19 June 2014 19:59, shahzaib shahzaib  wrote:
>> > We've added 2000+ ip prefixes in a file "geo.conf" included in nginx
>> vhost
>> > by using ngx-http_geo_module and received the following warning :-
>> >
>> > 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22",
>> > value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40
>>
>>  What makes you think that this error message is incorrect?
>> If it's correct and you have a duplicate entry, resolving the problem
>> should be pretty simple ...
>>
>>
>
>
>
>
>
>
>

Re: using 2000+ ip prefixes in nginx geo module !!

2014-06-19 Thread shahzaib shahzaib
For testing purposes, I have added only a few prefixes :-

geo {
default 0;
include geo.conf;
}

geo.conf

39.49.59.0/24 PK;
110.93.192.0/24 TW;
110.93.192.0/18 TW;
117.20.16.0/20 TW;
119.63.128.0/20 TW;
202.163.104.6/32 ARY;
203.124.63.0/24 CM;
221.132.112.0/21 TW;


Now, whenever an IP from the list sends a request, nginx replies with a
gateway timeout :-

curl -I  http://files.com/files/videos/2014/06/10/14023918368e3411-240.mp4
HTTP/1.1 504 Gateway Time-out
Server: nginx
Date: Thu, 19 Jun 2014 19:59:50 GMT
Content-Type: text/html
Content-Length: 176
Connection: keep-alive

To resolve this error, I have to manually remove one network from the
file: 110.93.192.0/18 TW;

What is so special about the prefix 110.93.192.0/18 TW? Why does it cause
every other request to fail?



On Fri, Jun 20, 2014 at 12:07 AM, Jonathan Matthews  wrote:

> On 19 June 2014 19:59, shahzaib shahzaib  wrote:
> > We've added 2000+ ip prefixes in a file "geo.conf" included in nginx
> vhost
> > by using ngx-http_geo_module and received the following warning :-
> >
> > 2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22",
> > value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40
>
> What makes you think that this error message is incorrect?
> If it's correct and you have a duplicate entry, resolving the problem
> should be pretty simple ...
>
>

using 2000+ ip prefixes in nginx geo module !!

2014-06-19 Thread shahzaib shahzaib
We've added 2000+ IP prefixes to a file "geo.conf", included in an nginx
vhost via ngx_http_geo_module, and received the following warnings :-

2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.24.96.0/22",
value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:40
2014/06/19 23:52:46 [warn] 1633#0: duplicate network "103.251.176.0/22",
value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:50
2014/06/19 23:52:46 [warn] 1633#0: duplicate network "202.141.224.0/19",
value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:1312
2014/06/19 23:52:46 [warn] 1633#0: duplicate network "202.142.160.0/19",
value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:1355
2014/06/19 23:52:46 [warn] 1633#0: duplicate network "202.5.136.0/21",
value: "tw", old value: "tw" in /usr/local/nginx/conf/geo.conf:1528

Because of this, nginx returns a 504 gateway error for all IPs included in
the geo.conf file.

Download full mp4 file with proxy_cache or proxy_store !!

2014-06-18 Thread shahzaib shahzaib
  We're using two servers (one proxy and one backend). The proxy server
uses proxy_cache to cache mp4 files from the backend, and that works fine.
When I stream a full video from the cache, the response headers show
Cache-Status: HIT, but whenever I seek within the mp4, e.g.
http://url/test.mp4?start=33, the Cache-Status changes to MISS. Does that
mean the proxy server downloads the same file again for the seek at 33
seconds?

Can't i use nginx proxy_cache to download whole mp4 file and and than seek
from it instead of fetching the file again and again ? Does proxy_store has
this functionality if not proxy_cache ?
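One pattern worth testing here (a sketch under stated assumptions, not a confirmed fix from this thread; 192.0.2.10 and /var/www/cache are placeholders) is proxy_store: it saves the complete mp4 to the edge's docroot on the first download, after which try_files serves the local copy and the edge's own mp4 module handles ?start= seeks without going back to the origin:

```nginx
location ~ \.mp4$ {
    root /var/www/cache;
    mp4;                        # local pseudo-streaming handles ?start=
    try_files $uri @origin;     # use the stored copy once it exists
}

location @origin {
    root /var/www/cache;
    proxy_pass http://192.0.2.10;
    proxy_store on;             # write the full response body to disk
    proxy_store_access user:rw group:rw all:r;
}
```

Caveats: the request that triggers the store (and any ?start= request made before the full file is on disk) still streams from the origin, and proxy_store does no cache management, so stale files have to be expired externally (e.g. a cron job).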

Re: Caching servers in Local ISPs !!

2014-06-18 Thread shahzaib shahzaib
@Jonathan, yes, you're right that I should not post off-topic here. Of
course, since nginx has a tremendous range of capabilities, I thought there
might be an alternative to BGP as well, but I was wrong. I would be thankful
if you could help me with ngx_http_geo_module, as it is related to nginx,
for the following problem:

-
Our local ISP provided us with some test IP prefixes to check nginx-based
caching, i.e.:

geo {
default 0;
10.0.0.0/8 1;
39.23.2.0/24 1;
112.50.192.0/18 1;
}

Now, whenever we add the prefix 112.50.192.0/18 to the geo {} block, all
requests coming from 39.23.2.0/24 and 10.0.0.0/8 return an nginx 504 gateway
error and videos fail to stream. To resolve this issue, we have to remove
"112.50.192.0/18 1;" from the geo block.



On Thu, Jun 19, 2014 at 1:05 AM, Jonathan Matthews 
wrote:

> On 18 Jun 2014 20:45, "shahzaib shahzaib"  wrote:
> >
> > >>why not host those file on a professional CDN instead of in-house?
> > Because 80% of the traffic is from our country and 50% of that traffic
> is from the ISP we're talking to and this is the reason we deployed the
> caching box on this ISP edge.
>
> But, as this now pretty off-topic thread is repeatedly demonstrating, you
> haven't deployed diddly squat. You've just chucked a server in a rack and
> are having to rely on unpaid, debugging-by-email advice from an
> pseudonymous mailing list to get it even near functional. Let alone
> properly defined and understood.
>
> If your *business* needs to do this, pay a professional person or
> organisation to help you like others have suggested. The alternative, which
> you appear to be ending up with, is a black box of hacks known only to
> yourself and potentially understood by no-one, which will SPoF on you,
> personally, until you leave that organisation. You don't want that. Trust
> me.
> 
>
> Just my 2 cents,
> Jonathan
>

Re: Caching servers in Local ISPs !!

2014-06-18 Thread shahzaib shahzaib
>>why not host those file on a professional CDN instead of in-house?
Because 80% of the traffic is from our country and 50% of that traffic is
from the ISP we're talking to; that is why we deployed the caching box at
this ISP's edge.


On Thu, Jun 19, 2014 at 12:35 AM, Lukas Tribus  wrote:

> Hi,
>
>
> > ok, but i have no idea why ISP is asking for BGP and matter of fact is,
> > i'll have to make BGP work somehow, so local caching server will fetch
> > the new subnets from ISP router automatically (and i don't know how).
>
> I strongly suggest you hire some consultant who can help you setting
> all those things up, because this is clearly a task too complex for
> a single mailing list thread and some nginx configurations.
>
> Also, why not host those file on a professional CDN instead of in-house?
>
> https://www.google.com/search?q=mp4+streaming+cdn
>
>
>
> Lukas
>
>

Re: Caching servers in Local ISPs !!

2014-06-18 Thread shahzaib shahzaib
>>geo {
default 0;
10.0.0.0/8 1;
39.23.2.0/24 1;
112.50.192.0/18 1;
}

Sorry, I didn't write it accurately here, but it is "10.0.0.0/8 1;" in the
nginx config, so the problem is not wrong syntax in the geo {} block.


On Wed, Jun 18, 2014 at 11:24 PM, itpp2012  wrote:

> shahzaib1232 Wrote:
> ---
> >
> > Btw, our local ISP provided us with some testing ip prefixes to check
> > nginx
> > based caching.  i.e
> > geo {
> > default 0;
> > 10.0.0.0/8
> > 39.23.2.0/24 1;
> > 112.50.192.0/18 1;
> > }
> >
>
> Typo??
>
> geo {
> default 0;
> 10.0.0.0/8 1;
> 39.23.2.0/24 1;
> 112.50.192.0/18 1;
> }
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,249997,251019#msg-251019
>

Re: Caching servers in Local ISPs !!

2014-06-18 Thread shahzaib shahzaib
OK, but I have no idea why the ISP is asking for BGP. The fact of the matter
is, I'll have to make BGP work somehow, so the local caching server will
fetch new subnets from the ISP router automatically (and I don't know how).

By the way, our local ISP provided us with some test IP prefixes to check
nginx-based caching, i.e.:
geo {
default 0;
10.0.0.0/8
39.23.2.0/24 1;
112.50.192.0/18 1;
}

Now, whenever we add the prefix 112.50.192.0/18 to the geo {} block, all
requests coming from 39.23.2.0/24 and 10.0.0.0/8 return a 504 gateway error
and videos fail to stream. To resolve this issue, we have to remove
"112.50.192.0/18 1;" from the geo block.



On Wed, Jun 18, 2014 at 12:50 AM, itpp2012  wrote:

> You don't need to do anything with a dns that is only local to the clients
> served by the ISP.
>
> Suppose I am in Africa;
> Question to my ISP: I'd like to go to new-york
> ISP: new-york is located in south-Africa
>
> Suppose I am in the US;
> Question to my ISP: I'd like to go to new-york
> ISP: new-york is located in the US
>
> The DNS is just a pointer, where ever you have an edge server make the dns
> name point to it, when not point the dns to origin.
> Every ISP client gets the DNS servers from their ISP, its really simple.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,249997,250957#msg-250957
>

Re: Best method for adding GeoIP support

2014-06-17 Thread shahzaib shahzaib
Maybe ngx_http_geo_module could help you; it comes built into nginx and
doesn't need recompilation.

http://nginx.org/en/docs/http/ngx_http_geo_module.html
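To be clear about the distinction (my summary, not from the thread): the geo module maps client addresses against prefixes you list yourself, so it only substitutes for GeoIP if you maintain those prefix lists. A minimal sketch with a hypothetical variable:

```nginx
# geo is compiled in by default -- no recompilation or extra library needed.
# Unlike GeoIP, there is no country database: you supply the prefixes.
geo $is_local {
    default      0;
    192.0.2.0/24 1;   # example prefix; real lists must be maintained by you
}
```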


On Wed, Jun 18, 2014 at 11:51 AM, Kurt Cancemi 
wrote:

> Hello,
>
> There is no way to do this with the packages from nginx.org, without
> recompiling nginx, with the --with-http_geoip_module build flag. Unless
> you do it on another level (e.g. with the geoip php extension) which I am
> assuming you don't want. You could set up your own repo.
>
> ---
> Kurt Cancemi
> http://www.getwnmp.org
>
> ---
> Kurt Cancemi
> http://www.getwnmp.org
>
>
> On Tue, Jun 17, 2014 at 9:59 PM, TheBritishGeek 
> wrote:
>
>> We have just started to work with Nginx and have installed by adding the
>> nginx repositry to our debian 7 installs. It works almost perfectly out of
>> the box as such. However we need to add GeoIP support, so the question is
>> what is the best method of doing this.  I really don't want to compile our
>> own install and break the simple apt-get upgrade path in the future if we
>> have a choice.
>>
>> Any advice would be greatly appreciated.
>>
>> Posted at Nginx Forum:
>> http://forum.nginx.org/read.php?2,250978,250978#msg-250978
>>

Re: Caching servers in Local ISPs !!

2014-06-17 Thread shahzaib shahzaib
>>Why not use a DNS for the clients?
How would I be sure that a request arriving at the DNS server is coming from
ISP-1, and then point it to the local caching server? I mean, I can use the
view directive of BIND to route specific IPs (local ISP clients) to the
local caching server, but what if tomorrow the ISP adds more clients to
their network? I'll also have to add those new IP prefixes to the DNS
server.


On Tue, Jun 17, 2014 at 11:45 PM, shahzaib shahzaib 
wrote:

> >>Why not use a DNS for the clients?
> How i would be sure that request coming from the ISP-1 on the DNS server
> and then point it to the local caching server? I mean i can use View
> directive of BIND to route specific ips (local ISP clients) to the local
> caching server and what if tomorrow the ISP has added more clients to their
> network ? I'll also have those new ip prefixes to DNS server.
>
> Please correct me if i am wrong.
>
>
> On Tue, Jun 17, 2014 at 11:32 PM, itpp2012  wrote:
>
>> shahzaib1232 Wrote:
>> ---
>> > i don't think the solution rdns will be suitable for us. I have
>> > checked the
>> > zebra software to make linux a BGP router
>> > http://www.techrepublic.com/article/use-zebra-to-set-up-a-linux-bgp-os
>> > pf-router/
>> >
>> > Could you tell me if BGP is capable of doing what we want? Because our
>> > local ISP supports this method and i have no idea how to implement it.
>> >
>> > Functionality we need, is to auto detect the new ip prefixes from
>> > local ISP
>> > so they'll not have to provide us thousands of prefixes on daily
>> > basis.
>>
>> Why not use a DNS for the clients? your making things too complicated.
>>
>> Client-1-request@ISP-1 -> edge1.streaming.au ISP-1-DNS -> 12.34.56.78
>> (which
>> is your edge box)
>> Client-1-request@ISP-2 -> edge1.streaming.au ISP-2-DNS -> 99.88.77.66
>> (which
>> is your box in the US)
>>
>> Anyone from ISP-1 will always be directed to the edge systems, anyone else
>> to where-ever you point the dns.
>>
>> ISP's also use regional DNS servers which allows you more edge systems
>> closer to the users.
>>
>> Anyway, BGP see http://bird.network.cz/ (netflix solution)
>>
>> Posted at Nginx Forum:
>> http://forum.nginx.org/read.php?2,249997,250953#msg-250953
>>

Re: Caching servers in Local ISPs !!

2014-06-17 Thread shahzaib shahzaib
>>Why not use a DNS for the clients?
How would I be sure that a request arriving at the DNS server is coming from
ISP-1, and then point it to the local caching server? I mean, I can use the
view directive of BIND to route specific IPs (local ISP clients) to the
local caching server, but what if tomorrow the ISP adds more clients to
their network? I'll also have to add those new IP prefixes to the DNS
server.

Please correct me if I am wrong.


On Tue, Jun 17, 2014 at 11:32 PM, itpp2012  wrote:

> shahzaib1232 Wrote:
> ---
> > i don't think the solution rdns will be suitable for us. I have
> > checked the
> > zebra software to make linux a BGP router
> > http://www.techrepublic.com/article/use-zebra-to-set-up-a-linux-bgp-os
> > pf-router/
> >
> > Could you tell me if BGP is capable of doing what we want? Because our
> > local ISP supports this method and i have no idea how to implement it.
> >
> > Functionality we need, is to auto detect the new ip prefixes from
> > local ISP
> > so they'll not have to provide us thousands of prefixes on daily
> > basis.
>
> Why not use a DNS for the clients? your making things too complicated.
>
> Client-1-request@ISP-1 -> edge1.streaming.au ISP-1-DNS -> 12.34.56.78
> (which
> is your edge box)
> Client-1-request@ISP-2 -> edge1.streaming.au ISP-2-DNS -> 99.88.77.66
> (which
> is your box in the US)
>
> Anyone from ISP-1 will always be directed to the edge systems, anyone else
> to where-ever you point the dns.
>
> ISP's also use regional DNS servers which allows you more edge systems
> closer to the users.
>
> Anyway, BGP see http://bird.network.cz/ (netflix solution)
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,249997,250953#msg-250953
>

Re: Caching servers in Local ISPs !!

2014-06-17 Thread shahzaib shahzaib
I don't think the rDNS solution will be suitable for us. I have looked at
the Zebra software for turning Linux into a BGP router:
http://www.techrepublic.com/article/use-zebra-to-set-up-a-linux-bgp-ospf-router/

Could you tell me whether BGP is capable of doing what we want? Our local
ISP supports this method, and I have no idea how to implement it.

The functionality we need is auto-detection of new IP prefixes from the
local ISP, so they won't have to provide us with thousands of prefixes on a
daily basis.


On Tue, Jun 17, 2014 at 10:11 PM, itpp2012  wrote:

> shahzaib1232 Wrote:
> ---
> > Thanks itpp but the issue is still same and still the ip is from the
> > main
> > server in inspect element as well as in local-caching nginx access
> > logs, i
> > am getting the client ip as main-server's ip instead of original
> > client ip
>
> It could be the case the traffic you are getting is from the ISP proxy
> which
> could mean that any traffic is from that ISP only, which makes it easier to
> determine which to serve from local. Ask the ISP from where the traffic is
> coming from, if it is a proxy then proxy=local.
>
> > So, the specific subnet coming from our ISP to the main server will be
> > routed to the local caching server and our ISP will have to tell us
> > each
> > time to add specific ip prefix in the nginx config to route them
> > towards
> > their caching server. So the problem is, whenever few hundreds ip
> > prefixes
> > are added to their network, they'll have to provide us those prefixes
> > in
> > order to enable caching for newly added ips.
>
> See above, if this is not the case look into
> https://github.com/flant/nginx-http-rdns if a client has something like
> 'p1234.adsl-pool2-auckland.au' you can redirect based on a part of the
> client dns name, your ISP can tell you which DHCP named pools there are.
>
> If you can't get the client IP of hostname you gonna need to do some
> wiresharking to see where the info is, if it is anywhere.
> If the ISP is using a proxy to pass clients to your server ask them to add
> a
> header with the client ip/hostname.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,249997,250950#msg-250950
>

Re: Caching servers in Local ISPs !!

2014-06-17 Thread shahzaib shahzaib
Thanks, itpp, but the issue is still the same: the IP is the main server's,
both in inspect element and in the local caching server's nginx access logs.
I am getting the main server's IP as the client IP instead of the original
client IP, and I am sure I am doing something wrong. Well, I have another
question now, as our test with CIDR notation worked well with the nginx geo
module and nginx routed specific IPs to the specific (caching) server.

So, a specific subnet coming from our ISP to the main server will be routed
to the local caching server, and our ISP will have to tell us each time to
add a specific IP prefix to the nginx config to route those clients towards
their caching server. The problem is that whenever a few hundred IP prefixes
are added to their network, they'll have to provide us with those prefixes
in order to enable caching for the newly added IPs.

We just had a chat with our local ISP, and they said we should use something
like BGP to automatically detect whether any new IP prefixes have been added
to their network, so they won't have to tell us each time they add prefixes.

Could you guide me on how to make this work in our environment? The basic
architecture of our network is:

Two static-content servers (serving mp4, jpg): one located in the US and one
located at the local ISP.

I hope you can put me on the right track, as you did in the past, and give
me a kick start with BGP.




On Mon, Jun 16, 2014 at 10:41 PM, itpp2012  wrote:

> shahzaib1232 Wrote:
> ---
>
> > Maybe i need to add some variable to get original server ip ?
>
> https://www.google.nl/#q=nginx+geo+remote+ip+address
>
>
> http://www.cyberciti.biz/faq/nginx-redirect-backend-traffic-based-upon-client-ip-address/
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,249997,250895#msg-250895
>

Re: Caching servers in Local ISPs !!

2014-06-16 Thread shahzaib shahzaib
Hello itpp, I have been able to use ngx_http_geo_module. Now a request
coming from the local ISP will first go to the main server (US); the main
server then checks whether the IP is, e.g., 1.2.3.4 and, if so, directs the
request to the local caching server, which in turn checks whether the file
is cached or whether it should fetch the file from the main server again and
cache it locally.

When I tested it locally, it worked fine, but the file URL in Firebug comes
from the MAIN server when it should have come from the local caching server.
I can also see the caching directory size increase when a client matched via
the geo module is directed to the local caching server, but the URL remains
the same in Firebug.

US config :-

geo $TW {
    default 0;
    192.168.1.0/24 1;
}

server {
    listen      80;
    server_name 002.files.com;
    # limit_rate 600k;

    location / {
        root  /var/www/html/files;
        index index.html index.htm index.php;
        # autoindex on;
    }

    location ~ \.(mp4|jpeg|jpg)$ {
        mp4;
        root /var/www/html/files;

        if ($TW) {
            proxy_pass http://192.168.22.32:80;
        }

        expires 7d;
        valid_referers none blocked domain.com *.domain.com blog.domain.com
                       *.facebook.com *.twitter.com *.files.com *.pump.net
                       domain.tv *.domain.tv domainmedia.tv
                       www.domainmedia.tv embed.domainmedia.tv;
        if ($invalid_referer) {
            return 403;
        }
    }
}

Edge config :-

proxy_ignore_headers "Set-Cookie";
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=static:100m
                 loader_threshold=300 loader_files=10 inactive=1d
                 max_size=30m;
proxy_temp_path /data/nginx/tmp 1 2;
add_header X-Cache-Status $upstream_cache_status;
add_header Accept-Ranges bytes;
max_ranges 512;

server {
    listen      80;
    server_name 192.168.22.32;
    root        /var/www/html/files;

    location ~ \.(mp4|jpeg|jpg)$ {
        root /var/www/html/files;
        mp4;
        try_files $uri @getfrom_origin;
    }

    location @getfrom_origin {
        proxy_pass http://002.files.com:80;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid any 1m;
        proxy_cache static;
        proxy_cache_min_uses 1;
    }
}


Maybe I need to add some variable to pass the original client IP?
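One way to preserve the original client address across the proxy hop (a sketch, assuming the edge is built with --with-http_realip_module, and with 203.0.113.10 as a placeholder for the US server's address):

```nginx
# On the US (proxying) server: forward the client's address in a header.
proxy_set_header X-Real-IP $remote_addr;

# On the edge server: restore $remote_addr from that header,
# trusting it only when the request comes from the US box.
set_real_ip_from 203.0.113.10;   # placeholder for the US server's IP
real_ip_header   X-Real-IP;
```

With this in place, the edge's access logs and any geo/IP checks on the edge see the original client address instead of the US server's.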





On Fri, Jun 6, 2014 at 8:56 PM, shahzaib shahzaib 
wrote:

> Thanks a lot itpp. :) I'll look into it and get back to you.
>
> Thanks again for quick solution :)
>
>
> On Fri, Jun 6, 2014 at 8:26 PM, itpp2012  wrote:
>
>> shahzaib1232 Wrote:
>> ---
>> > @itpp I am currenlty proceeding with proxy_cache method just because i
>> > had
>> > to done this in emergency mode due to boss pressure :-|. I have a
>> > quick
>> > question, can i make nginx to cache files for specific clients ?
>> >
>> > Like, if our caching servers are deployed by only single ISP named
>> > "ptcl".
>> > So if ip from ptcl client is browsing video, only his requested file
>> > should
>> > be cached not for any other client, does nginx support that ??
>>
>> You could do this based on some IP ranges or via
>> https://github.com/flant/nginx-http-rdns
>>
>> See
>>
>> http://serverfault.com/questions/380642/nginx-how-to-redirect-users-with-certain-ip-to-special-page
>> and
>>
>> http://www.cyberciti.biz/faq/nginx-redirect-backend-traffic-based-upon-client-ip-address/
>>
>> Posted at Nginx Forum:
>> http://forum.nginx.org/read.php?2,249997,250707#msg-250707
>>

Really high disk i/o !!

2014-06-16 Thread shahzaib shahzaib
Our server's HDD I/O is constant at 8 MB/s, and I/O utilization plus await
are very high, due to which nginx video streaming is really slow; we're
receiving complaints from our users about slow streaming of videos.

We're using 12x3TB SATA HDDs in hardware RAID-10 with 16 GB RAM.
OS: CentOS 6.4

8 MB/s read/write should not be an issue for 12x3TB SATA HDDs. Maybe I need
to tweak some nginx buffers or kernel settings in order to reduce the high
I/O wait?

Could someone point me in the right direction? We can't afford SAS drives
right now and have to go with SATA.
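For reference, these are the nginx directives commonly tuned when serving large static mp4 files from spinning disks (a sketch; the values are illustrative, not recommendations for this particular box):

```nginx
location ~ \.mp4$ {
    mp4;
    sendfile       on;
    output_buffers 1 512k;   # fewer, larger reads per client
    directio       4m;       # bypass the page cache for files over 4 MB
    # aio on;                # async reads; needs nginx built with --with-file-aio
}
```

Larger output buffers and directio reduce seek pressure from many concurrent streams; whether they help depends on the access pattern, so changes should be verified against iostat.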

Linux 2.6.32-431.17.1.el6.x86_64 (storage17)06/16/2014
_x86_64_(8 CPU)

Device:  rrqm/s  wrqm/s     r/s   w/s    rsec/s   wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
sda      123.66  336.85  118.87  7.67  19496.19  2759.42    175.88      2.40  18.93   6.27  79.35
sda       75.00    8.67  108.00  2.00  18117.33    85.33    165.48      1.88  17.05   6.86  75.47
sda       76.00    0.00   94.00  0.33  17192.00     2.67    182.28      1.50  16.04   7.47  70.47

Any help would be highly appreciated.

Regards.
Shahzaib

Re: http_geo_module invalid option during compile !!

2014-06-09 Thread shahzaib shahzaib
./configure --help | grep geo
  --with-http_geoip_module   enable ngx_http_geoip_module
  --without-http_geo_module disable ngx_http_geo_module

Alright, it's enabled by default.

Thanks



On Mon, Jun 9, 2014 at 12:30 PM, Francis Daly  wrote:

> On Mon, Jun 09, 2014 at 11:45:22AM +0500, shahzaib shahzaib wrote:
> > Does nginx Geo module work on nginx ? I am getting the following error
> > during compiling nginx-1.4.7 with it :
> >
> >  ./configure  --with-http_mp4_module --with-http_flv_module
> > --with-http_geoip_module --with-http_geo_module
> --sbin-path=/usr/local/sbin
> > --with-debug
> > ./configure: error: invalid option "--with-http_geo_module"
>
> ./configure --help | grep geo
>
> Some modules are default-excluded, and must be added with "--with-";
> some are default-included, and must be removed with "--without-".
>
> f
> --
> Francis Dalyfran...@daoine.org
>

http_geo_module invalid option during compile !!

2014-06-08 Thread shahzaib shahzaib
Does the geo module work with nginx? I am getting the following error while
compiling nginx-1.4.7 with it:

 ./configure  --with-http_mp4_module --with-http_flv_module
--with-http_geoip_module --with-http_geo_module --sbin-path=/usr/local/sbin
--with-debug
./configure: error: invalid option "--with-http_geo_module"

Re: Caching servers in Local ISPs !!

2014-06-06 Thread shahzaib shahzaib
Thanks a lot, itpp. :) I'll look into it and get back to you.

Thanks again for the quick solution :)


On Fri, Jun 6, 2014 at 8:26 PM, itpp2012  wrote:

> shahzaib1232 Wrote:
> ---
> > @itpp I am currenlty proceeding with proxy_cache method just because i
> > had
> > to done this in emergency mode due to boss pressure :-|. I have a
> > quick
> > question, can i make nginx to cache files for specific clients ?
> >
> > Like, if our caching servers are deployed by only single ISP named
> > "ptcl".
> > So if ip from ptcl client is browsing video, only his requested file
> > should
> > be cached not for any other client, does nginx support that ??
>
> You could do this based on some IP ranges or via
> https://github.com/flant/nginx-http-rdns
>
> See
>
> http://serverfault.com/questions/380642/nginx-how-to-redirect-users-with-certain-ip-to-special-page
> and
>
> http://www.cyberciti.biz/faq/nginx-redirect-backend-traffic-based-upon-client-ip-address/
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,249997,250707#msg-250707
>
