Nginx responds slowly

vegetableChick
2021-03-09 10:41:59 +08:00

The project uses django + uwsgi + nginx.

Below are some of my uwsgi and nginx configuration, along with the request logs.

1. uwsgi.ini

[uwsgi]
pythonpath=/xxx
static-map=/static=/xxx/static
chdir=/xxx
env=DJANGO_SETTINGS_MODULE=conf.settings
module=xxx.wsgi
master=True
pidfile=logs/xxx.pid
vacuum=True
max-requests=100000
enable-threads=true
processes=16
threads=32
listen=1024
log-slow=3000
daemonize=logs/wsgi.log
stats=/tmp/xxx/socket/stats.socket
http=0.0.0.0:6187
buffer-size=220000000
socket-timeout=1500
harakiri=1500
http-timeout=1500

request log

[pid: 10550|app: 0|req: 549/6061] 103.218.240.105 () {50 vars in 1037 bytes} [Mon Mar  8 15:24:30 2021] GET /api/v2/analysis/xxxx => generated 3890508 bytes in 397 msecs (HTTP/1.1 200) 5 headers in 222 bytes (1 switches on core 16)

2. nginx.conf

worker_processes  12;


events {
    use epoll;
    worker_connections  65535;
}


http {
    include       mime.types;
    include       log_format.conf;
    include       upstream.conf;
    default_type  application/octet-stream;

    sendfile        on;
    tcp_nopush     on;

    keepalive_timeout  1800;
    server_tokens off;

    client_max_body_size 100m;
    gzip  on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_comp_level 5;
    gzip_types text/plain application/json application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
    gzip_vary off;
    include "site-enabled/*.conf";
}

upstream.conf


upstream bv_crm_server_proxy_line {
        server proxy.xxxx.cn:6187  weight=100 fail_timeout=0;
        keepalive 500;
}

log_format.conf

log_format upstream '$remote_addr - $host [$time_local] "$request" '
                    '$status $body_bytes_sent $request_time $upstream_response_time '
                    '"$http_user_agent" "$http_x_forwarded_for" ';


site-enabled/xxx.conf

server {
    listen 7020;
    server_name  xxxx.xx.cn;
    client_max_body_size 100M;
    access_log  logs/xxx.log  upstream;
    root /home/smb/web/xxx/dist;
    client_header_buffer_size 16k;
    large_client_header_buffers 4 16k;

    location ^~ /api/ {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_send_timeout 1800;
        proxy_connect_timeout 1800;
        proxy_read_timeout 1800;

        proxy_ignore_client_abort on;
        proxy_pass http://bv_crm_server_proxy_line;
    }
   

    location / {
        try_files $uri /index.html =404;
    }
}
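One thing worth checking in the config above: `keepalive` in an upstream block only takes effect when nginx talks HTTP/1.1 to the backend with the `Connection` header cleared, and neither directive appears in the `/api/` location. A sketch of the two lines that would enable it, everything else unchanged:

```nginx
location ^~ /api/ {
    # Without these two directives, nginx speaks HTTP/1.0 with
    # "Connection: close" to the upstream, so the "keepalive 500"
    # in upstream.conf never kicks in.
    proxy_http_version 1.1;
    proxy_set_header Connection "";

    # ... existing proxy_set_header / timeout directives ...
    proxy_pass http://bv_crm_server_proxy_line;
}
```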

192.168.12.12 - xxx.cn [08/Mar/2021:15:24:34 +0800] "GET /api/v2/analysis/xxx HTTP/1.1" 200 531500 4.714 4.714 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36" "103.120.18.243"



Right now nginx responds very slowly. Could someone take a look and tell me whether something is misconfigured?

Thanks

2031 clicks
Node: NGINX
8 replies
brader
2021-03-09 10:51:55 +08:00
Could you first try to rule out everything outside of nginx and test nginx by itself?
```
location / {
    default_type text/plain;
    return 200 "hello nginx!\n";
}
```
defunct9
2021-03-09 11:00:23 +08:00
upstream fastcgi_backend {
    server 127.0.0.1:9000;

    keepalive 8;
}

server {
    ...

    location /fastcgi/ {
        fastcgi_pass fastcgi_backend;
        fastcgi_keep_conn on;
        ...
    }
}
vegetableChick
2021-03-09 11:06:45 +08:00
@brader I don't think I can experiment on the production machine... Other requests get normal response times from nginx; this one returns a JSON payload of about 3 MB, and I'm not sure whether that is related.
vegetableChick
2021-03-09 11:09:35 +08:00
@defunct9 Thanks for the reply. I don't quite follow what this config does; could you explain it briefly?
chendy
2021-03-09 11:16:14 +08:00
How much bandwidth does the server have? A 3 MB JSON takes roughly 3 s to send over a 10 Mbit link (3 MB × 8 ≈ 24 Mbit, and 24 Mbit / 10 Mbit/s ≈ 2.4 s).
barrysn
2021-03-09 11:23:14 +08:00
Set up a log format and check which of the times recorded in the nginx log is the long one.
First confirm where the problem actually is.

$request_time – Full request time, starting when NGINX reads the first byte from the client and ending when NGINX sends the last byte of the response body
$upstream_connect_time – Time spent establishing a connection with an upstream server
$upstream_header_time – Time between establishing a connection to an upstream server and receiving the first byte of the response header
$upstream_response_time – Time between establishing a connection to an upstream server and receiving the last byte of the response body
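The variables above can be combined into a diagnostic log format (a sketch; `$upstream_connect_time` and `$upstream_header_time` require nginx 1.9.1 or later, and the format name `timing` is arbitrary):

```nginx
log_format timing '$remote_addr [$time_local] "$request" $status '
                  'rt=$request_time uct=$upstream_connect_time '
                  'uht=$upstream_header_time urt=$upstream_response_time';

# access_log logs/timing.log timing;   # enable it per server or location
```

Comparing `rt` against `urt` then shows whether time is lost inside the upstream or on the nginx-to-client leg.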
brader
2021-03-09 11:30:32 +08:00
@vegetableChick You can test it: leave everything you have untouched and just add one extra location rule for the test.
If you are returning something around 3 MB that gets a fair number of frequent visitors, insufficient server bandwidth really will make it very slow. By analogy: it is like distributing app downloads straight off the server's own bandwidth instead of offloading them to an OSS/CDN.
defunct9
2021-03-09 11:31:27 +08:00
Oh, you are using uwsgi. I suggest switching to fastcgi and using its keepalive feature to speed things up.
Also:
#include uwsgi_params;
#uwsgi_pass unix:///var/www/script/uwsgi.sock;  # route all dynamic requests to uwsgi's sock file
This approach is faster.
Anyway, open up SSH and let me take a look.
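The commented-out lines above refer to nginx's native support for the binary uwsgi protocol. A sketch of what that switch could look like (the socket path is illustrative, and the uwsgi side would need a `socket=` entry in place of `http=` in uwsgi.ini):

```nginx
location ^~ /api/ {
    include uwsgi_params;   # standard uwsgi_param mappings shipped with nginx
    # Speak the binary uwsgi protocol over a local unix socket instead of
    # proxying plain HTTP; this avoids a TCP hop and HTTP parsing on both ends.
    uwsgi_pass unix:/var/www/script/uwsgi.sock;
}
```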

https://www.v2ex.com/t/759868