# Identifying Slow Queries

Slow queries can significantly affect application performance and user experience. PostgreSQL offers built-in tools to analyze and identify these slow operations. On Elestio, whether you’re connected via terminal, inside a Docker Compose container, or using PostgreSQL CLI tools, you can use several methods to pinpoint and fix performance issues. This guide walks through various techniques to identify slow queries, interpret execution plans, and apply optimizations.

## **Analyzing Slow Queries Using Terminal**

When connected to your PostgreSQL service via terminal, you can use built-in tools like `psql` and SQL functions to observe how queries behave. This method is useful for immediate, ad hoc diagnostics in production or staging environments. You can use simple commands to view currently running queries, analyze individual query plans, and measure runtime performance. These steps help determine which queries are taking the most time and why.

Use `psql` to connect directly to your PostgreSQL instance. This provides access to administrative and diagnostic SQL commands.

```bash
psql -U <username> -h <host> -d <database>
```

Now use the following command to show the query plan the database will use. It highlights whether PostgreSQL will perform a sequential scan, index scan, or other operation.

```postgresql
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```

`EXPLAIN ANALYZE` goes a step further: it executes the query and returns the actual runtime and row counts. Comparing planned and actual rows helps determine whether the planner is misestimating costs.

```postgresql
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
```

Lastly, monitor queries in real time using the following command. This view lists all active queries, sorted by duration. It helps you identify queries that are taking too long and might need optimization.

```postgresql
SELECT pid, now() - query_start AS duration, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY duration DESC;
```
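If this view surfaces a query that is clearly stuck or blocking others, you can stop it by its `pid`. The `pid` value below (`12345`) is a placeholder; substitute one returned by the query above:

```postgresql
-- Politely cancel the running query; the connection stays open
SELECT pg_cancel_backend(12345);

-- If cancellation is not enough, terminate the whole backend
SELECT pg_terminate_backend(12345);
```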

## **Analyzing Slow Queries in Docker Compose Environments**

If your PostgreSQL is deployed using Docker Compose on Elestio, you can inspect and troubleshoot slow queries from within the container. This method is useful when the PostgreSQL instance is isolated inside a container and not accessible directly from the host. Logs and query data can be collected from inside the service container using PostgreSQL tools or by checking configuration files.

```bash
docker-compose exec postgres bash
```

This command opens a shell inside the running PostgreSQL container (with newer Docker versions, use `docker compose exec` instead of `docker-compose exec`). From here, you can run commands like `psql` or view logs. Use the same `psql` interface from inside the container to interact with the database and execute analysis commands.

```bash
psql -U $POSTGRES_USER -d $POSTGRES_DB
```

Next, edit `postgresql.conf` inside the container to enable slow query logging. In the official PostgreSQL image, the file typically lives in the data directory (for example, `/var/lib/postgresql/data/postgresql.conf`):

```
log_min_duration_statement = 500
log_statement = 'none'
```

This configuration logs every statement that runs longer than 500 milliseconds, while `log_statement = 'none'` keeps routine statement logging off so that only slow queries appear in the log. You may need to restart the container, or reload the configuration, for these settings to take effect.
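A full restart is often unnecessary for this particular setting, since `log_min_duration_statement` can be applied with a configuration reload. From a `psql` session:

```postgresql
-- Re-read postgresql.conf without restarting the server
SELECT pg_reload_conf();

-- Confirm the new threshold is active
SHOW log_min_duration_statement;
```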

## **Using CLI Tools to Analyze Query Performance**

PostgreSQL offers CLI-based tools and extensions like `pg_stat_statements` for long-term query performance analysis. These tools provide aggregated metrics over time, helping you spot frequently executed but inefficient queries. This section shows how to use PostgreSQL extensions and views to collect detailed statistics.

```postgresql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```
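Note that `pg_stat_statements` only collects data if it is preloaded at server start. If `CREATE EXTENSION` succeeds but the view stays empty, make sure `postgresql.conf` contains the following line (this requires a restart, not just a reload):

```
shared_preload_libraries = 'pg_stat_statements'
```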

This extension records each executed query along with performance metrics such as execution time, call count, and rows returned. Note that the timing column names changed in PostgreSQL 13: `total_exec_time` and `mean_exec_time` replaced the older `total_time` and `mean_time`. The next command shows the queries that have consumed the most total execution time; these are strong candidates for indexing or rewriting.

```postgresql
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```
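After applying an optimization, it helps to clear the accumulated statistics so that before-and-after comparisons start from a clean slate:

```postgresql
-- Discard all statistics collected so far by pg_stat_statements
SELECT pg_stat_statements_reset();
```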

## **Understanding Execution Plans and Metrics**

PostgreSQL’s query planner produces execution plans that describe how a query will be executed. Reading these plans can help identify operations that slow down performance, such as full table scans or repeated joins. Comparing estimated and actual rows processed can also reveal outdated statistics or inefficient filters. Understanding these elements is key to choosing the right optimization strategy.

#### **Key elements to understand:**

- **Seq Scan**: A full table scan; slow on large tables unless indexed.
- **Index Scan**: Uses an index for fast lookup; typically faster than a sequential scan.
- **Cost**: Estimated cost of the query, used by the planner to decide the best execution path.
- **Rows**: Estimated vs. actual rows; large mismatches indicate bad planning or outdated stats.
- **Execution Time**: Total time it took to run the query; reported by `EXPLAIN ANALYZE`.

Use these metrics to compare how the query was expected to run versus how it actually performed.
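For orientation, the output of `EXPLAIN ANALYZE` has roughly the following shape. The numbers here are illustrative only, based on the hypothetical `orders` example used earlier, and assume an index named `orders_customer_id_idx` exists:

```
Index Scan using orders_customer_id_idx on orders
  (cost=0.43..8.45 rows=12 width=64)
  (actual time=0.031..0.042 rows=10 loops=1)
  Index Cond: (customer_id = 42)
Planning Time: 0.110 ms
Execution Time: 0.065 ms
```

The `cost` and `rows` figures in the first parenthesis are planner estimates; the `actual time` and `rows` figures come from really running the query, which is exactly the estimated-versus-actual comparison described above.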

## **Optimizing Queries for Better Performance**

Once you’ve identified slow queries, the next step is to optimize them. Optimizations may involve adding indexes, rewriting SQL statements, or updating statistics. The goal is to reduce scan times, avoid redundant operations, and guide the planner to more efficient execution paths. Performance tuning is iterative—test after each change.

#### **Common optimization steps:**

- **Add indexes** to columns used in `WHERE`, `JOIN`, and `ORDER BY` clauses.
- **Use EXPLAIN ANALYZE** before and after changes to measure impact.
- **Avoid SELECT \*** to reduce data transfer and memory use.
- **Use LIMIT** to restrict row output when only a subset is needed.
- **Run ANALYZE** to update PostgreSQL's internal statistics and improve planner accuracy:

```postgresql
ANALYZE;
```
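Putting several of these steps together for the earlier `orders` example (the index name and the selected columns are illustrative):

```postgresql
-- Index the column used in the WHERE clause
CREATE INDEX IF NOT EXISTS idx_orders_customer_id
    ON orders (customer_id);

-- Refresh the planner's statistics for this table
ANALYZE orders;

-- Verify that the planner now chooses an index scan
EXPLAIN ANALYZE
SELECT id, customer_id, created_at
FROM orders
WHERE customer_id = 42;
```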

By focusing on frequent and long-running queries, you can make improvements that significantly reduce overall load on the database.