pgBadger and postgres_fdw

    Colin 't Hart<colinthart@gmail.com>
    Jan 21, 2026, 8:19 AM UTC
    Hi,
    One of my clients makes extensive use of postgres_fdw. After a migration
    performance isn't great. pgBadger reports show the slowest queries all
    being `fetch 100 from c2`.
    Anyone have any tricks for being able to associate those fetches with the
    queries that were used when declaring the server-side cursor?
    Thanks,
    Colin
      Adrian Klaver<adrian.klaver@aklaver.com>
      Jan 21, 2026, 3:43 PM UTC
On 1/21/26 00:18, Colin 't Hart wrote:
> Hi,
> One of my clients makes extensive use of postgres_fdw. After a migration
> performance isn't great. pgBadger reports show the slowest queries all
> being `fetch 100 from c2`.
> Anyone have any tricks for being able to associate those fetches with
> the queries that were used when declaring the server-side cursor?
      This is going to need a lot more information. To start:
      1) Migration of what and from what version to what version?
      2) Where are the Postgres databases relative to each other on the network?
      3) What versions of Postgres if not covered in 1.
4) If it was Postgres that was updated, was an ANALYZE run on the instances?
      5) Show a complete query using EXPLAIN ANALYZE.
      6) Define slow.
> Thanks,
> Colin
      --
      Adrian Klaver
      adrian.klaver@aklaver.com
        Colin 't Hart<colinthart@gmail.com>
        Jan 21, 2026, 4:13 PM UTC
        1. Migration from one server to another. Newer OS (Debian 12 vs Ubuntu
        20.04), same version of Postgres (17).
        2. postgres_fdw is to different databases within the same cluster.
        3. 17
        4. No new analyze was done; migration was achieved by moving the disks
        between the virtual servers. We reindexed all text indexes to allow for the
        new glibc version on Debian 12.
        5. That's the thing: I have no idea which queries the `fetch 100 from c2`
        are associated with because the `c2` seems to be reused for each query. The
        psycopg python library generates unique server-side cursor names, but
        postgres_fdw doesn't.
        6. The 19 slowest queries in a 4 hour period are between 2 and 37 minutes,
        with an average of over 10 minutes; they are all `fetch 100 from c2`.
        The slowness itself isn't my question here; it was caused by having too few
        cores in the new environment, while the application was still assuming the
        higher core count and generating too many concurrent processes.
        My question is how to identify which connections / queries from
        postgres_fdw are generating the `fetch 100 from c2` queries, which, in
        turn, may quite possibly lead to a feature request for having these named
        uniquely.
        Thanks,
        Colin
          Laurenz Albe<laurenz.albe@cybertec.at>
          Jan 21, 2026, 6:57 PM UTC
On Wed, 2026-01-21 at 17:12 +0100, Colin 't Hart wrote:
> My question is how to identify which connections / queries from postgres_fdw are
> generating the `fetch 100 from c2` queries, which, in turn, may quite possibly
> lead to a feature request for having these named uniquely.
I would investigate that on the remote database.
If the user that postgres_fdw uses to connect is remote_user, you could

ALTER ROLE remote_user SET log_min_duration_statement = 0;

Then any statements executed through postgres_fdw would be logged.
If you have %x in log_line_prefix, you can find the DECLARE statement that declared
the cursor that takes so long to fetch. Not very comfortable, but it should work.
          Yours,
          Laurenz Albe
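Laurenz's suggestion can be sketched as follows; the role name `remote_user` is a placeholder, and the `log_line_prefix` shown is an illustrative choice, not from the thread:

```sql
-- On the remote database: log every statement issued by the FDW role.
ALTER ROLE remote_user SET log_min_duration_statement = 0;

-- In postgresql.conf on the remote side, include the backend pid (%p)
-- and transaction id (%x) so DECLARE and FETCH lines can be paired:
--   log_line_prefix = '%m [%p] %x '
```

Since postgres_fdw declares its cursors inside a remote transaction, the slow `FETCH 100 FROM c2` line and the `DECLARE c2 CURSOR FOR ...` line that defined it come from the same backend, so matching on the pid (and, for writing transactions, the xid) in the log pairs them up.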
          Adrian Klaver<adrian.klaver@aklaver.com>
          Jan 21, 2026, 4:59 PM UTC
          On 1/21/26 08:12, Colin 't Hart wrote:
> 6. The 19 slowest queries in a 4 hour period are between 2 and 37
> minutes, with an average of over 10 minutes; they are all `fetch 100
> from c2`.
> The slowness itself isn't my question here; it was caused by having too
> few cores in the new environment, while the application was still
> assuming the higher core count and generating too many concurrent processes.
> My question is how to identify which connections / queries from
> postgres_fdw are generating the `fetch 100 from c2` queries, which, in
> turn, may quite possibly lead to a feature request for having these
> named uniquely.
My guess is no.
          See:
          https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/postgres_fdw.c
          Starting at line ~5212
          fetch_size = 100;
          and ending at line ~5234
/* Construct command to fetch rows from remote. */
snprintf(fetch_sql, sizeof(fetch_sql), "FETCH %d FROM c%u",
         fetch_size, cursor_number);
          So c2 is a cursor number.
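For illustration, the statement sequence postgres_fdw sends to the remote server looks roughly like this (the table and query are made up; only the cursor handling is real):

```sql
-- postgres_fdw opens a remote transaction, declares a numbered cursor,
-- then pulls the result in fetch_size batches:
DECLARE c2 CURSOR FOR
    SELECT id, payload FROM some_table WHERE id > 42;
FETCH 100 FROM c2;   -- repeated until fewer than fetch_size rows return
CLOSE c2;
```

The cursor name encodes only a number, so, as Colin observed, `c2` can recur across different queries and pgBadger lumps all those fetches together.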
> Thanks,
> Colin
--
Adrian Klaver
          adrian.klaver@aklaver.com
            Adrian Klaver<adrian.klaver@aklaver.com>
            Jan 21, 2026, 5:20 PM UTC
On 1/21/26 08:59, Adrian Klaver wrote:
> So c2 is a cursor number.
If I am following this correctly, this is something postgres_fdw does to fetch results in batches, so all foreign-table queries will use such cursors.
            FYI, the fetch_size can be changed, see here:
            https://www.postgresql.org/docs/17/postgres-fdw.html#POSTGRES-FDW-CONFIGURATION-PARAMETERS
            F.36.1.4. Remote Execution Options
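The fetch_size option from that documentation can be set per foreign server or per foreign table; a sketch, with the server and table names as placeholders:

```sql
-- Larger batches for every foreign table on this server:
ALTER SERVER remote_srv OPTIONS (ADD fetch_size '1000');

-- Or override it for one foreign table:
ALTER FOREIGN TABLE remote_tab OPTIONS (ADD fetch_size '10000');
```

Fewer, larger fetches mean fewer round trips, at the cost of more memory held per batch.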
            If you want connection/query information I would enable from here:
            https://www.postgresql.org/docs/17/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
            log_connections
            log_disconnections
            And at least temporarily:
            log_statement = 'all'
The above will generate a lot of logs, so you don't want to keep it set for too long.
> Thanks,
> Colin
--
Adrian Klaver
            adrian.klaver@aklaver.com