
[Bug] mihomo in TUN mode uses up all TCP ports, causing total loss of connectivity #1267

Open · 7 tasks done

ansemz opened this issue May 15, 2024 · 3 comments
Labels
bug Something isn't working

Comments

ansemz commented May 15, 2024

Verify steps

  • I have read the documentation and understand the meaning of all configuration items I have written, avoiding a large number of seemingly useful options or default values.
  • I have not been able to resolve this issue by reviewing the documentation.
  • I have searched the Issue Tracker for the problem I am going to raise.
  • I have tested with the latest Alpha branch version, and the issue still persists.
  • I have provided the server and client configuration files and a procedure that reproduce the issue locally, rather than a sanitized, complex client configuration file.
  • I have provided the simplest configuration that can reproduce the error I reported, rather than relying on remote servers, TUN, graphical client interfaces, or other closed-source software.
  • I have provided the complete configuration files and logs, rather than only the parts I assume to be relevant.

Operating System

Windows

System Version

windows 10 22H2

Mihomo Version

C:\Program Files\Clash Verge>clash-meta.exe -v
Mihomo Meta v1.18.4 windows amd64 with go1.22.2 Wed May 8 04:44:33 UTC 2024
Use tags: with_gvisor

Configuration File

mixed-port: 7890
allow-lan: true
mode: rule
log-level: info
external-controller: 0.0.0.0:9090

tun:
  stack: mixed
  device: Meta
  auto-route: true
  auto-detect-interface: true
  dns-hijack:
  - any:53
  strict-route: false
  enable: true

dns:
  enable: true
  prefer-h3: true
  listen: :1053
  # listen: 0.0.0.0:1053
  ipv6: false
  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16
  fake-ip-filter:
    - '*.lan'
  use-hosts: true
  default-nameserver:
    - 192.168.2.11
    - 192.168.2.175
  nameserver:
    - 192.168.2.11
    - 192.168.2.175
  proxy-server-nameserver:
    - 114.114.114.114
    - 223.5.5.5
  fallback:
    - tls://8.8.4.4
    - tls://1.1.1.1
  fallback-filter:
    geoip: true
    geoip-code: CN
    geosite:
      - gfw
    ipcidr:
      - 240.0.0.0/4
    domain:
      - '+.google.com'
      - '+.facebook.com'
      - '+.youtube.com'

proxies:
  - name: "httpintra"
    type: http
    server: 192.168.2.192
    port: 10088
  - name: "socks5intra"
    type: socks5
    server: 192.168.2.192
    port: 10080
    udp: true
  - name: "s.test1.com"
    dialer-proxy: "socks5intra"
    type: vmess
    server: s.test1.com
    port: 443
    uuid: 7a3cfe21-fb22-4be9-bca3-75e473e845fb
    alterId: 0
    cipher: auto
    udp: true
    tls: true
    skip-cert-verify: true
    servername: s.test1.com
    network: ws
    ws-opts:
      path: /
      headers:
        Host: s.test1.com
  - name: "s.test2.com"
    dialer-proxy: "socks5intra"
    type: vless
    server: s.test2.com
    port: 443
    uuid: debe4909-806b-4ed7-ae51-f7c5346e0dfb
    alterId: 0
    cipher: auto
    udp: true
    tls: true
    skip-cert-verify: true
    servername: s.test2.com
    network: ws
    ws-opts:
      path: /
      headers:
        Host: s.test2.com

proxy-providers:
  ripaojiedian:
    type: http
    url: "https://raw.githubusercontent.com/ripaojiedian/freenode/main/clash"
    path: ./proxy_providers/ripaojiedian.yaml
    interval: 86400
    health-check:
      enable: true
      url: https://www.gstatic.com/generate_204
      interval: 300
      timeout: 5000
      expected-status: 204
    override:
      skip-cert-verify: true
      udp: true
      dialer-proxy: socks5intra
      routing-mark: 233
      ip-version: ipv4-prefer
      additional-prefix: "ripaojiedian |"
  chromego:
    type: http
    url: "https://chromego-sub.netlify.app/sub/merged_proxies_new.yaml"
    path: ./proxy_providers/chromego.yaml
    interval: 86400
    health-check:
      enable: true
      url: https://www.gstatic.com/generate_204
      interval: 300
      timeout: 5000
      expected-status: 204
    override:
      skip-cert-verify: true
      udp: true
      dialer-proxy: socks5intra
      routing-mark: 234
      ip-version: ipv4-prefer
      additional-prefix: "chromego |"

  localfile:
    type: file
    path: ./proxy_providers/localfile.yaml
    health-check:
      enable: true
      url: https://www.gstatic.com/generate_204
      interval: 300

proxy-groups:
  - name: "proxy"
    type: select
    proxies:
      - s.test2.com
      - s.test1.com
      - auto
      - DIRECT
    use:
      - ripaojiedian
      - chromego
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
    lazy: true
    timeout: 5000
  - name: "auto"
    type: url-test
    proxies:
      - s.test2.com
      - s.test1.com
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
  - name: "pg-ripaojiedian"
    type: url-test
    use:
      - ripaojiedian
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
  - name: "pg-chromego"
    type: url-test
    use:
      - chromego
    url: 'https://www.gstatic.com/generate_204'
    interval: 300

rules:
  - IP-CIDR,10.0.0.0/8,DIRECT
  - IP-CIDR,100.64.0.0/10,DIRECT
  - IP-CIDR,127.0.0.0/8,DIRECT
  - IP-CIDR,192.168.0.0/16,DIRECT
  - IP-CIDR,198.18.0.0/16,DIRECT
  - IP-CIDR,224.0.0.0/4,DIRECT
  - IP-CIDR6,::1/128,DIRECT
  - IP-CIDR6,fc00::/7,DIRECT
  - IP-CIDR6,fe80::/10,DIRECT
  - IP-CIDR6,fd00::/8,DIRECT
  - GEOIP,CN,socks5intra
  - GEOIP,CLOUDFLARE,socks5intra
  - MATCH,pg-chromego

Description

After running in TUN mode for about a day, the machine can no longer make any outbound connections. Until now I fixed this by restarting clash, but doing that all the time is too annoying, so I spent some time looking for the cause. The log contains a large number of entries like this one:
2024-05-15 08:57:30 INFO - [clash]: time="2024-05-15T08:57:30.8253182+08:00" level=warning msg="[TCP] dial pg-chromego (match Match/) mihomo --> chromego-sub.netlify.app:443 error: 192.168.2.192:10080 connect error: connect failed: dial tcp 192.168.2.192:10080: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full."
The Windows Event Viewer contains records with event IDs 4227 and 4231:

4227
TCP/IP failed to establish an outgoing connection because the selected local endpoint was recently used to connect to the same remote endpoint. This error typically occurs when outgoing connections are opened and closed at a high rate, causing all available local ports to be used up and forcing TCP/IP to reuse a local port for an outgoing connection. To minimize the risk of data corruption, the TCP/IP standard requires a minimum amount of time to elapse between successive connections from a given local endpoint to a given remote endpoint.

4231
A request to allocate an ephemeral port number from the global TCP port space has failed because all such ports are in use.
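The numbers behind event 4227 allow a rough back-of-the-envelope bound. This is a minimal sketch under two assumptions not taken from this issue: Windows' default dynamic port range of 49152–65535 and a 240-second TIME_WAIT delay (both can differ per machine, e.g. via netsh or the registry):

```python
# Rough capacity estimate for the symptom described in event 4227.
# Assumptions (not from this issue): default dynamic port range
# 49152-65535 and an assumed 240-second TIME_WAIT delay.
DYNAMIC_PORTS = 65535 - 49152 + 1        # 16384 ephemeral ports
TIME_WAIT_SECONDS = 240                  # assumed TcpTimedWaitDelay

# A sustained outbound connection rate above this exhausts the pool
# even without a leak: ports cycle slower than they are consumed.
max_sustained_rate = DYNAMIC_PORTS / TIME_WAIT_SECONDS
print(round(max_sustained_rate, 1))      # prints 68.3 connections/second
```

If ports were being released normally, exhaustion would need a sustained rate of roughly 68 new outbound connections per second; a steadily growing count of ports that never cycle instead points at ports being held, not churned.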
Running netstat -aqno shows that all the ports are used by clash, with state BOUND:
......
TCP 0.0.0.0:64503 0.0.0.0:0 BOUND 15984
TCP 0.0.0.0:64504 0.0.0.0:0 BOUND 15984
TCP 0.0.0.0:64505 0.0.0.0:0 BOUND 15984
TCP 0.0.0.0:64506 0.0.0.0:0 BOUND 15984
TCP 0.0.0.0:64507 0.0.0.0:0 BOUND 15984
TCP 0.0.0.0:64508 0.0.0.0:0 BOUND 15984
TCP 0.0.0.0:64509 0.0.0.0:0 BOUND 15984
TCP 0.0.0.0:64510 0.0.0.0:0 BOUND 15984
TCP 0.0.0.0:64511 0.0.0.0:0 BOUND 15984
......
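To track how fast the count grows, output in the shape of the netstat -aqno excerpt above can be tallied per owning PID with a short script (a sketch; the five-column field layout assumed here matches the excerpt, and the PID shown, 15984, is just whatever clash-meta.exe happens to be on a given run):

```python
from collections import Counter

def count_bound_ports(netstat_lines):
    """Count netstat entries in the BOUND state, grouped by owning PID.

    Expects lines shaped like the netstat -aqno excerpt above, e.g.
    'TCP 0.0.0.0:64503 0.0.0.0:0 BOUND 15984'.
    """
    counts = Counter()
    for line in netstat_lines:
        fields = line.split()
        if len(fields) == 5 and fields[0] == "TCP" and fields[3] == "BOUND":
            counts[fields[4]] += 1          # key: PID, value: BOUND count
    return counts

sample = [
    "TCP 0.0.0.0:64503 0.0.0.0:0 BOUND 15984",
    "TCP 0.0.0.0:64504 0.0.0.0:0 BOUND 15984",
    "TCP 0.0.0.0:445 0.0.0.0:0 LISTENING 4",  # non-BOUND rows are ignored
]
print(count_bound_ports(sample))
```

Running it periodically against saved netstat snapshots makes the growth rate, and which process owns it, immediately visible.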

I don't know whether this is a problem in mihomo or in my configuration file. If the configuration is at fault, please tell me how to fix it. Thanks.

Reproduction Steps

I don't yet know how to reproduce it on demand, but on this PC the behavior is the same with every mihomo version I have tried (1.18.1 through 1.18.4).

Logs

2024-05-15 09:38:20 INFO - [clash]: time="2024-05-15T09:38:20.930473+08:00" level=warning msg="[TCP] dial pg-chromego (match Match/) mihomo --> raw.githubusercontent.com:443 error: 192.168.2.192:10080 connect error: connect failed: dial tcp 192.168.2.192:10080: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full."
2024-05-15 09:38:21 INFO - [clash]: time="2024-05-15T09:38:21.1343278+08:00" level=warning msg="[TCP] dial pg-chromego (match Match/) mihomo --> chromego-sub.netlify.app:443 error: 192.168.2.192:10080 connect error: connect failed: dial tcp 192.168.2.192:10080: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full."
2024-05-15 09:38:21 INFO - [clash]: time="2024-05-15T09:38:21.5123393+08:00" level=warning msg="[TCP] dial pg-chromego (match Match/) mihomo --> chromego-sub.netlify.app:443 error: 192.168.2.192:10080 connect error: connect failed: dial tcp 192.168.2.192:10080: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full."
2024-05-15 09:38:21 INFO - [clash]: time="2024-05-15T09:38:21.7362769+08:00" level=warning msg="[TCP] dial pg-chromego (match Match/) mihomo --> chromego-sub.netlify.app:443 error: 192.168.2.192:10080 connect error: connect failed: dial tcp 192.168.2.192:10080: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full."
2024-05-15 09:38:21 INFO - [clash]: time="2024-05-15T09:38:21.8787982+08:00" level=warning msg="[TCP] dial pg-chromego (match Match/) mihomo --> raw.githubusercontent.com:443 error: 192.168.2.192:10080 connect error: connect failed: dial tcp 192.168.2.192:10080: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full."
2024-05-15 09:38:21 INFO - [clash]: time="2024-05-15T09:38:21.9326374+08:00" level=warning msg="[TCP] dial pg-chromego (match Match/) mihomo --> raw.githubusercontent.com:443 error: 192.168.2.192:10080 connect error: connect failed: dial tcp 192.168.2.192:10080: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full."
@ansemz ansemz added the bug Something isn't working label May 15, 2024

ansemz commented May 15, 2024

I looked up what BOUND means; it appears to be roughly equivalent to TIME_WAIT. My understanding is that clash is simply never releasing the ports.
https://learn.microsoft.com/zh-cn/troubleshoot/windows-client/networking/tcp-ip-port-exhaustion-troubleshooting
It may be related to the upstream proxy at 192.168.2.192:10080: every connection clash makes to the upstream SOCKS proxy is short-lived, yet after the connection is closed the local port is not released.
Since I restarted clash right after filing this issue, the number of ports in the BOUND state has already climbed to 34.
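The hypothesis here (short-lived upstream dials whose local ports are never released) can be illustrated in miniature. This is a sketch, not mihomo's actual code: a client socket that is bound but never closed pins one ephemeral port for the life of the process, which is exactly the pattern a dialer leak would produce in netstat:

```python
import socket

def leaky_dials(n):
    """Bind n client sockets and 'forget' to close them.
    Each one holds a distinct ephemeral port until the process exits,
    mirroring an ever-growing BOUND count in netstat."""
    held = []
    for _ in range(n):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("127.0.0.1", 0))   # kernel assigns an ephemeral port
        held.append(s)
    return held

def tidy_dials(n):
    """The same churn, but every socket is closed: each port
    returns to the free pool as soon as close() runs."""
    ports = []
    for _ in range(n):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("127.0.0.1", 0))
        ports.append(s.getsockname()[1])
        s.close()                  # no data was sent, so release is immediate
    return ports
```

For what it's worth, the Microsoft port-exhaustion guide linked above treats BOUND as a socket that is bound but not connected, i.e. a port held by the process itself rather than by the kernel's TIME_WAIT state machine, which would fit this leak shape better than a TIME_WAIT pile-up.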


ansemz commented May 15, 2024

Another update on my progress :)
I found that it seems related to the TUN stack mode. With gvisor or mixed, the number of BOUND ports keeps creeping up, presumably until every available port is exhausted.
After switching to system, short-term observation shows the BOUND port count both rising and falling; so far it stays under 200.


ansemz commented May 21, 2024

Four days on, here is another update.
The number of ports in the BOUND state has now grown past 7000. The TUN stack mode is system.
