Web3Auth is currently very busy error

Hello,
Our users have started experiencing this type of error quite often.

In our own tests we were able to reproduce the error in 2-3 out of 10 attempts.

Custom Auth error from Dev Console:

Could not get result from torus nodes
Sorry, the Torus Network that powers Web3Auth is currently very busy.
We will generate your key in time. Pls try again later.

Failed to fetch

Is this a temporary issue, or is there something we can do to fix it on our side? The issue is critical for us, since it blocks the entire login flow for our users.

Please assist.

Hello,
We are experiencing this issue in roughly 10 out of 100 new user logins on Mainnet with our custom verifier, so it is quite unstable and we are looking for a solution.
Could you please advise how the issue can be fixed on Mainnet, or how we can switch to Cyan Mainnet (which is far more stable in our tests) without losing our existing users' keys?

Is there a way to migrate a custom verifier with its users from Mainnet to Cyan Mainnet?

Hello @shahbaz @yashovardhan,

Can you please help with the issue related to the Mainnet network? Given the poor performance of Mainnet we are considering switching to Cyan Mainnet, but before we do we need to clarify two questions:

  1. Is it possible to migrate our custom verifier and the associated user keys?
  2. What is the root cause of our problem with Mainnet, and can we be sure that it will not appear on Cyan Mainnet in the future?

Thanks

Hey @sergey.kambalin

Thanks for your question.

As far as I can see, Mainnet was recently facing degraded performance due to some of the nodes being down. However, this issue has now been fixed. Regarding general performance, are most of your users based in the USA? The Cyan network is meant to perform really well in that region.

Now, regarding the migration: changing the network definitely changes the associated user keys. You can build a flow where users migrate their assets to the newly generated key.
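
Just as an illustration (this is not something the SDK provides out of the box), a minimal sketch of such a flow with ethers.js could look like the following; the RPC URL and the way you obtain each network's private key are placeholders, not part of any Web3Auth API:

// Hypothetical sketch: both private keys are assumed to have been obtained
// by logging the user in once against each network (old and new).
import { ethers } from "ethers";

async function migrateFunds(
  oldPrivateKey: string, // key derived on Mainnet (old network)
  newPrivateKey: string, // key derived on Cyan Mainnet (new network)
  rpcUrl: string         // any EVM RPC endpoint you already use
): Promise<void> {
  const provider = new ethers.providers.JsonRpcProvider(rpcUrl);
  const oldWallet = new ethers.Wallet(oldPrivateKey, provider);
  const newAddress = new ethers.Wallet(newPrivateKey).address;

  // Send the whole balance minus a rough gas allowance to the new address.
  const balance = await oldWallet.getBalance();
  const gasPrice = await provider.getGasPrice();
  const gasLimit = ethers.BigNumber.from(21000);
  const value = balance.sub(gasPrice.mul(gasLimit));
  if (value.lte(0)) return; // nothing worth migrating

  const tx = await oldWallet.sendTransaction({ to: newAddress, value, gasLimit, gasPrice });
  await tx.wait();
}

The sketch only covers the native token; any ERC-20 balances or other on-chain state tied to the old address would need their own transfer step.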

Unfortunately, I cannot say for sure what the root cause of the problem is, since we are generally unable to reproduce this error on our devices. Can you share some stats about the users who are facing this issue? We'll look into it internally and get back to you.

Hello @yashovardhan

According to our QA engineer's report, when she received the error above, the following request stalled for 1 minute and then returned a 504 error.

POST https://torus-node.binancex.dev/jrpc

{
  "jsonrpc": "2.0",
  "method": "KeyAssign",
  "id": 10,
  "params": {
    "verifier": "cere-wallet",
    "verifier_id": "veronika.filipenko+78458fggfgfbg64646@cere.io"
  },
  "torus-timestamp": "1684928654",
  "torus-nonce": "bd4cb2026632a588dc7a79a8fe4d271e867406f1cb48946f27322e6974782ae1",
  "torus-signature": "Hdd1NmfhFY6pf4oWFnA2yKbyyi/6Mvc7nYI1uA8sRrl2cqrcbMtalM7pLZ5Cg4TXwZMfleffx+LM02c125fFhBw="
}

During most of our debugging sessions some of the nodes stalled for about a minute, and that led to the error. So imo the issue is that a node does not respond at all (not even with an error), holds the request, and then the entire key reconstruction fails due to the 60-second timeout.
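
As an illustration of what we mean by "stalled": a request like the one above can be replayed with a short client-side timeout to see whether a node answers at all. A rough sketch (the 10-second limit is arbitrary, and the endpoint/payload are simply the ones from the KeyAssign request shown above):

// Sketch: replay a JSON-RPC request with a short client-side timeout so a
// stalled node fails fast instead of hanging for the full 60 seconds.
async function probeNode(endpoint: string, body: unknown, timeoutMs = 10_000): Promise<void> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
      signal: controller.signal,
    });
    console.log(endpoint, "->", res.status);
  } catch {
    console.log(endpoint, "-> no response within", timeoutMs, "ms");
  } finally {
    clearTimeout(timer);
  }
}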

Here is the node-details request:

GET https://fnd.tor.us/node-details?network=mainnet&verifier=cere-wallet&verifierId=veronika.filipenko+78458fggfgfbg64646@cere.io

{
  "nodeDetails": {
    "currentEpoch": "19",
    "torusNodeEndpoints": [
      "https://torus-19.torusnode.com/jrpc",
      "https://torus-node.ens.domains/jrpc",
      "https://torus-node.matic.network/jrpc",
      "https://torus.zilliqa.network/jrpc",
      "https://torus-mainnet.cosmos.network/jrpc",
      "https://torus2.etherscan.com/jrpc",
      "https://torus-node-v2.skalelabs.com/jrpc",
      "https://torus-node.binancex.dev/jrpc",
      "https://torusnode.ont.io/jrpc"
    ],
    "torusIndexes": [1, 2, 3, 4, 5, 6, 7, 8, 9],
    "torusNodePub": [
      {
        "X": "bbe83c64177c3775550e6ba6ac2bc059f6847d644c9e4894e42c60d7974d8c2b",
        "Y": "82b49a7caf70def38cdad2740af45c1e4f969650105c5019a29bb18b21a9acb5"
      },
      {
        "X": "c208cac4ef9a47d386097a9c915b28e9cb89213abee8d26a17198ee261201b0d",
        "Y": "c7db2fe4631109f40833de9dc78d07e35706549ee48fa557b33e4e75e1047873"
      },
      {
        "X": "ca1766bb426d4ca5582818a0c5439d560ea64f5baa060793ab29dd3d0ceacfe",
        "Y": "d46c1d08c40e1306e1bca328c2287b8268166b11a1ba4b8442ea2ad0c5e32152"
      },
      {
        "X": "c3934dd2f6f4b3d2e1e398cc501e143c1e1a381b52feb6d1525af34d16253768",
        "Y": "71f5141a5035799099f5ea3e241e66946bc55dc857ac3bd7d6fcdb8dcd3eeeef"
      },
      {
        "X": "22e66f1929631d00bf026227581597f085fd94fd952fc0dca9f0833398b5c064",
        "Y": "6088b3912e10a1e9d50355a609c10db7d188f16a2e2fd7357e51bf4f6a74f0a1"
      },
      {
        "X": "9dc9fa410f3ce9eb70df70cdea00a49f2c4cc7a31c08c0dab5f863ed35ff5139",
        "Y": "627a291cb87a75c61da3f65d6818e1e05e360217179817ed27e8c73bca7ec122"
      },
      {
        "X": "118b9fc07e97b096d899b9f6658463ce6a8caa64038e37fc969df4e6023dd8c6",
        "Y": "baf9fa4e51770f4796ea165dd03a769b8606681a38954a0a92c4cbffd6609ce9"
      },
      {
        "X": "8a6d8b925da15a273dec3d8f8395ec35cd6878f274b2b180e4e106999db64043",
        "Y": "96f67f870c157743da0b1eb84d89bf30500d74dc84c11f501ee1cb013acc8c46"
      },
      {
        "X": "39cecb62e863729f572f7dfc46c24867981bf04bb405fed0df39e33984bfade5",
        "Y": "61c2364434012e68a2be2e9952805037e52629d7762fafc8e10e9fb5bad8f790"
      }
    ]
  },
  "success": true
}

  • https://torus-node.ens.domains/jrpc responded with a 502 error
  • https://torus-node.binancex.dev/jrpc stalled for 1 minute and then responded with a 504 error
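
For completeness, the same kind of timed check can be swept over every endpoint in torusNodeEndpoints from the node-details response, so the stalled or erroring nodes stand out. A rough sketch (a plain GET is only a crude reachability check, and the 10-second timeout and logging are illustrative):

// Sketch: fetch node-details and time every Torus node endpoint, so nodes
// that stall or return 5xx stand out. A healthy node answers quickly
// (even if only with a 405); a stalled one hits the timeout.
async function sweepNodes(nodeDetailsUrl: string): Promise<void> {
  const { nodeDetails } = await (await fetch(nodeDetailsUrl)).json();
  for (const endpoint of nodeDetails.torusNodeEndpoints as string[]) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 10_000);
    const start = Date.now();
    try {
      const res = await fetch(endpoint, { signal: controller.signal });
      console.log(endpoint, res.status, `${Date.now() - start} ms`);
    } catch {
      console.log(endpoint, "no response within 10 s");
    } finally {
      clearTimeout(timer);
    }
  }
}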

Hey @sergey.kambalin

Indeed, this is an error on the nodes of the Mainnet network. It is expected to be resolved within the next two weeks, as we are migrating our Mainnet network to the new architecture, after which performance will be much smoother and faster than before.

I would request you to kindly wait until then.
