In our own tests we were able to reproduce the error in 2-3 out of 10 attempts.
Custom Auth error from Dev Console:
Could not get result from torus nodes
Sorry, the Torus Network that powers Web3Auth is currently very busy.
We will generate your key in time. Pls try again later.
Failed to fetch
Is it a temporary issue, or can we do something to fix it on our side? The issue is critical for us, since it blocks the entire login flow for our users.
Hello,
We experience this issue in 10 out of 100 new user logins on Mainnet with our custom verifier, so it is quite unstable and we are looking for a solution.
Could you please advise how the issue can be fixed on Mainnet, or how we can switch to Cyan Mainnet (which is far more stable in our tests) without losing our existing users' keys?
Is there a way to migrate a custom verifier with its users from Mainnet to Cyan Mainnet?
Can you please help with the issue related to the Mainnet network? Given the poor performance of Mainnet we are considering switching to Cyan Mainnet, but before we do we need to clarify two questions:
Is it possible to migrate our custom verifier and the associated user keys?
What is the root cause of our problem with Mainnet, and can we be sure that this problem will not appear on Cyan Mainnet in the future?
As far as I can see, Mainnet was recently facing degraded performance due to some of the nodes being down; however, this issue has now been fixed. Regarding general performance, are most of your users based in the USA? The Cyan network is meant to perform really well in that region.
Now, regarding the migration: changing the network definitely changes the associated user keys. You could build a flow where users migrate their assets to the newly generated key, as in the sketch below.
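For illustration only, here is a minimal sketch of such a flow for plain ETH using ethers v6. The function name, the RPC URL, and the way the two private keys are obtained (by logging the user in once against the old verifier and once against the new one) are assumptions for this example, not part of the Web3Auth SDK:

```ts
import { ethers } from "ethers";

// Assumed: oldPrivateKey comes from a login against the old (Mainnet) verifier,
// newAddress from a login against the new (Cyan Mainnet) verifier.
async function migrateEth(
  oldPrivateKey: string,
  newAddress: string,
  rpcUrl: string
): Promise<void> {
  const provider = new ethers.JsonRpcProvider(rpcUrl);
  const oldWallet = new ethers.Wallet(oldPrivateKey, provider);

  const balance = await provider.getBalance(oldWallet.address);
  const feeData = await provider.getFeeData();
  const gasLimit = 21000n; // plain ETH transfer
  const gasCost = gasLimit * (feeData.maxFeePerGas ?? feeData.gasPrice!);

  if (balance <= gasCost) return; // nothing worth moving

  // Send everything except the gas reserve to the key derived on the new network.
  const tx = await oldWallet.sendTransaction({
    to: newAddress,
    value: balance - gasCost,
    gasLimit,
  });
  await tx.wait();
}
```

Tokens and NFTs would need their own transfer calls, but the idea is the same: sign with the old key, send to the address of the new key.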
Unfortunately, I cannot say for sure what the root cause of the problem is, since we are generally unable to reproduce this error on our devices. Can you share some stats about the users who are facing this issue? We'll look into it internally and get back to you.
During most of our debugging sessions some of the nodes stalled for about a minute, which led to the error. So in my opinion the issue is that a node does not respond at all (not even with an error) and holds the request, and then the entire key reconstruction fails due to the 60-second timeout.
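One possible stopgap on our side is to fail faster than the 60-second node timeout and retry. A minimal sketch (the wrapper name, timeout values, and the doLogin callback are illustrative; doLogin would wrap whatever SDK login call we make):

```ts
// Race the login against a shorter timeout so a stalled node fails fast,
// then retry a couple of times instead of waiting out the full 60 seconds.
async function loginWithRetry<T>(
  doLogin: () => Promise<T>,
  attempts = 3,
  perAttemptTimeoutMs = 20_000
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await Promise.race([
        doLogin(),
        new Promise<never>((_, reject) =>
          setTimeout(() => reject(new Error("login attempt timed out")), perAttemptTimeoutMs)
        ),
      ]);
    } catch (err) {
      lastError = err; // keep the last failure and try again
    }
  }
  throw lastError;
}
```

This does not fix the stalled node, of course, but it at least keeps a single unresponsive node from blocking the whole login flow for a full minute.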
Indeed, this is an error on the Mainnet network nodes. It is expected to be resolved within the next two weeks, as we are migrating our Mainnet network to the new architecture, after which performance will be much smoother and faster than before.