Moving tables across PostgreSQL instances (ananthakumaran.in)
85 points by ananthakumaran 7 days ago | 5 comments
  • fourseventy 4 days ago

    I'm in the middle of doing a major version upgrade of Postgres from pg15 to pg18 using the same logical replication technique this article talks about. As the article mentions, dropping the indexes on the new database before replication is key; otherwise the initial sync takes forever because every insert has to update the indexes as well.
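
    A rough sketch of that flow (publication, subscription, connection, and index names here are invented; in practice you'd keep the primary key so updates and deletes can still be applied efficiently):

        -- Source (pg15): publish the tables to be copied.
        CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

        -- Target (pg18): drop secondary indexes so the initial copy is plain heap inserts.
        DROP INDEX IF EXISTS orders_customer_id_idx;

        -- Target: subscribing kicks off the initial table sync.
        CREATE SUBSCRIPTION upgrade_sub
          CONNECTION 'host=old-primary dbname=app user=replicator password=...'
          PUBLICATION upgrade_pub;

        -- Once the copy has caught up, rebuild the indexes without blocking writes.
        CREATE INDEX CONCURRENTLY orders_customer_id_idx ON orders (customer_id);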

    • dspillett 4 days ago | parent

      This is the same even when doing bulk or batched inserts. MS's process for importing an exported DB (when using that method, i.e. with Azure SQL, rather than an on-prem restore to on-prem, where a page-level backup is of course much more efficient) doesn't create the indexes until after the full data import has completed.

    • hinkley 3 days ago | parent

      We drop the constraints along with the indexes, because constraint checks can cause table scans if there are no indexes.

      Our old “drop/create” scripts for setting up sample data in dev environments ran 5-10x faster that way.
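
      Roughly this pattern (hypothetical table and constraint names):

          -- Drop the FK before the bulk load so inserts aren't checked row by row.
          ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;

          -- ... load the sample data for customers and orders ...

          -- Re-add (and validate) the constraint once the data is in place.
          ALTER TABLE orders
            ADD CONSTRAINT orders_customer_id_fkey
            FOREIGN KEY (customer_id) REFERENCES customers (id);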

  • Zaheer 4 days ago

    I recently learned that on RDS you can import data from S3. Handy feature for accomplishing a similar goal: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_...
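
    If I recall correctly it goes through the aws_s3 extension, something like this (bucket, key, and table names invented; the instance also needs an IAM role that can read the bucket):

        CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;

        SELECT aws_s3.table_import_from_s3(
          'events',                     -- target table
          '',                           -- column list ('' = all columns)
          '(format csv, header true)',  -- COPY options
          aws_commons.create_s3_uri('my-bucket', 'exports/events.csv', 'us-east-1')
        );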

  • vbilopav 3 days ago

    Why not Foreign Data Wrappers?
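
    i.e. something like this with postgres_fdw on the destination, then a plain INSERT ... SELECT to copy the rows (rough sketch, made-up server and table names):

        CREATE EXTENSION IF NOT EXISTS postgres_fdw;

        CREATE SERVER src_db FOREIGN DATA WRAPPER postgres_fdw
          OPTIONS (host 'old-primary', dbname 'app', port '5432');

        CREATE USER MAPPING FOR CURRENT_USER SERVER src_db
          OPTIONS (user 'app', password '...');

        -- Expose the remote tables under a local schema, then copy them over.
        CREATE SCHEMA src;
        IMPORT FOREIGN SCHEMA public FROM SERVER src_db INTO src;
        INSERT INTO public.events SELECT * FROM src.events;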