fix(table-editor): use CTE optimization for table-editor selection #35071
Conversation
Tested on production — was able to reproduce the timeout with just 10k rows, like so:
On the preview branch, I'm able to load the table successfully. Tested:
Did some basic smoke testing on the preview - it all looks great! 😄
Also checked opening tables outside of the public schema (both protected schemas (e.g. auth) and custom schemas), no issues there either.
Given that unit tests + e2e tests + smoke tests are all passing, and I've manually verified the updated SQL when retrieving the table rows too, I reckon this should be good to go 🙏🙂
While investigating some long-running queries (+60s timeout) over postgres-meta, I noticed that some of them were due to queries crafted from the table-preview-editor within studio. Investigating further, I noticed that our current select queries can, in some cases, have runtimes that grow with the number of rows in a given table.
This is partly because we perform conditional transformations over columns depending on the length of each value, so that we avoid overfetching data and instead only show a small preview.
The result is then limited by the preview editor's page size and pagination, and ordered, producing a final query like this:
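A rough sketch of the shape of that query (the table name, column names and the truncation threshold below are illustrative assumptions, not the exact generated SQL):

```sql
-- Illustrative only: "documents", "body" and the 10240-byte threshold are assumptions.
select
  id,
  case
    when octet_length(body::text) > 10240
      then left(body::text, 10240) || '...'  -- truncate long values to a small preview
    else body::text
  end as body
from public.documents
order by id asc
limit 100 offset 0;
```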
The issue with this approach is that Postgres will perform a full table scan and apply the condition to every row. Hence, the more rows that need truncation, the longer the preview query takes to execute.
Instead, we use a CTE optimization that reduces the number of rows to work with, by applying filters, limit, offset and order by before applying the column selection / truncation logic. This turns the query into something like:
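A sketch of the CTE-optimized shape, using the same illustrative table and columns as above:

```sql
-- Illustrative only: same hypothetical table/columns as the previous sketch.
with rows as (
  select *
  from public.documents
  order by id asc
  limit 100 offset 0  -- filters, order by, limit and offset applied first
)
select
  id,
  case
    when octet_length(body::text) > 10240
      then left(body::text, 10240) || '...'  -- truncation now runs on at most 100 rows
    else body::text
  end as body
from rows;
```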
This drops the query time from 30s+ to under 80ms.
To test this, I've created a table within the stress-table-editor-project project on staging: https://studio-staging-git-fix-table-editor-fetch-long-tables-supabase.vercel.app/dashboard/project/otexzejpktdrckprodjw/editor/127355?filter=id%3Aeq%3A54371 and inserted 100k rows into it, across a dozen text fields filled with random strings, using this SQL script:
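A sketch of such a script (the table name, number of text columns and value sizes are assumptions based on the description above):

```sql
-- Illustrative reproduction script: names and sizes are assumptions.
create table stress_test (
  id bigint generated always as identity primary key,
  col_a text, col_b text, col_c text, col_d text
);

insert into stress_test (col_a, col_b, col_c, col_d)
select
  repeat(md5(random()::text), 50),  -- ~1.6 KB of pseudo-random text per column
  repeat(md5(random()::text), 50),
  repeat(md5(random()::text), 50),
  repeat(md5(random()::text), 50)
from generate_series(1, 100000);
```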
The table editor preview fails to load for this project with a timeout on supabase.staging, but should work on this PR's preview.
What I have tested: