PostgreSQL

Tech Optimizer
April 8, 2025
Cloudflare has made Hyperdrive available on the free plan of Cloudflare Workers, allowing developers to create high-performance global applications that connect to SQL databases. Hyperdrive simplifies database connectivity by using existing drivers and connection strings, reducing the need for extensive refactoring. It has been adopted by Cloudflare's engineering teams for various functions, demonstrating its effectiveness in addressing common challenges in application development. Hyperdrive significantly improves performance: in one benchmark, latency dropped from 1200 ms with a direct connection to 500 ms through Hyperdrive, and to 320 ms with caching enabled. It employs transaction-mode connection pooling to efficiently manage database connections, minimizing overhead and ensuring optimal performance for serverless applications. Hyperdrive's architecture includes a split connection approach that reduces latency by conducting the necessary round trips over shorter distances. It also features a regional pool strategy that selects data centers based on the inferred location of the Worker, optimizing connection latency. The system includes a dual-layer caching strategy to enhance query performance and reduce load on the origin database. Developers can start using Hyperdrive by running a single command or using the Cloudflare dashboard to set up a sample Worker application with their existing Postgres database.
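A minimal sketch of that workflow inside a Worker, assuming a Hyperdrive binding named HYPERDRIVE configured in wrangler.toml and the postgres.js driver; the products table is a placeholder:

```ts
import postgres from "postgres";

export interface Env {
  // Populated from the [[hyperdrive]] binding in wrangler.toml.
  HYPERDRIVE: { connectionString: string };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Hyperdrive supplies a connection string that routes through its
    // regional pool instead of opening a direct connection to the origin,
    // so the driver needs no Hyperdrive-specific code.
    const sql = postgres(env.HYPERDRIVE.connectionString);
    const rows = await sql`SELECT id, name FROM products LIMIT 10`;
    return Response.json(rows);
  },
};
```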
Tech Optimizer
April 2, 2025
Amazon RDS Proxy now supports TLS 1.3 for connections to Amazon Aurora PostgreSQL and RDS for PostgreSQL database instances, enhancing security with stronger cryptographic algorithms and a streamlined handshake process. The Proxy automatically negotiates the highest security level during connection setup and can be configured to enforce TLS 1.3 exclusively. TLS 1.3 support is also available for RDS Proxy for MySQL engines. RDS Proxy is a fully managed database proxy that improves performance, reliability, scalability, and security for RDS and Amazon Aurora databases.
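The same floor can also be enforced from the client side, regardless of what the proxy negotiates. A sketch using node-postgres, where the endpoint and credentials are placeholders; the ssl block is passed through to Node's standard TLS options, so minVersion: "TLSv1.3" refuses anything older:

```ts
import { Client } from "pg";

const client = new Client({
  host: "my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com", // placeholder endpoint
  port: 5432,
  user: "app_user",
  password: process.env.PGPASSWORD,
  database: "appdb",
  ssl: {
    minVersion: "TLSv1.3",    // reject TLS 1.2 and below
    rejectUnauthorized: true, // in practice, also supply the RDS CA bundle via ssl.ca
  },
});

await client.connect();
const { rows } = await client.query("SELECT version()");
console.log(rows[0]);
await client.end();
```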
Tech Optimizer
April 2, 2025
A malware campaign has compromised over 1,500 PostgreSQL servers using fileless techniques to deploy cryptomining payloads. The attack, linked to the threat actor group JINX-0126, exploits publicly exposed PostgreSQL instances with weak or default credentials. The attackers utilize advanced evasion tactics, including unique hashes for binaries and fileless execution of the miner payload, making detection difficult. They abuse PostgreSQL’s COPY ... FROM PROGRAM command to execute malicious payloads and perform system discovery commands. The malware includes a binary named “postmaster,” which mimics the legitimate PostgreSQL server process of the same name, and a secondary binary named “cpu_hu” for cryptomining operations. Nearly 90% of cloud environments host PostgreSQL databases, with about one-third being publicly exposed, providing easy entry points for attackers. Each wallet associated with the campaign had around 550 active mining workers, indicating the extensive scale of the attack. Organizations are advised to implement strong security configurations to protect their PostgreSQL instances.
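Because COPY ... FROM PROGRAM only runs for superusers and members of the predefined pg_execute_server_program role (available since PostgreSQL 11), one practical mitigation is auditing and revoking that capability. A hypothetical audit sketch over node-postgres; the connection details and the app_user role are placeholders:

```ts
import { Client } from "pg";

const admin = new Client({ connectionString: process.env.ADMIN_DATABASE_URL });
await admin.connect();

// List every role that could reach COPY ... FROM PROGRAM: superusers always
// qualify, as do members (direct or inherited) of pg_execute_server_program.
const { rows } = await admin.query(`
  SELECT rolname
  FROM pg_roles
  WHERE rolsuper
     OR pg_has_role(oid, 'pg_execute_server_program', 'member')
`);
console.log("Roles that can run COPY ... FROM PROGRAM:", rows);

// Revoke the capability from an application role that should never have it.
await admin.query("REVOKE pg_execute_server_program FROM app_user");
await admin.end();
```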
Tech Optimizer
April 2, 2025
PostgreSQL is an open-source relational database management system known for its extensibility, which allows developers to enhance its capabilities through various extensions and plugins. The pgstattuple extension provides detailed statistics at the tuple level from PostgreSQL tables and indexes, revealing key metrics such as the number of live tuples, dead tuples, average length of live tuples, total free space, and percentages of free space and dead tuples. These metrics help database administrators identify potential health and performance issues, such as excessive table bloat or index fragmentation. Both Amazon Aurora and Amazon RDS support the pgstattuple extension, which can be activated using the command CREATE EXTENSION pgstattuple;. Functions like pgstattuple(relation) and pgstatindex(index) can be used to analyze physical storage and index statistics. Bloat occurs when unused space is left behind after UPDATE and DELETE operations, and the autovacuum process in PostgreSQL automates the cleanup of dead tuples. However, if autovacuum fails, manual intervention may be necessary. Regular monitoring of bloat is essential for maintaining performance, and metrics from pgstattuple can help optimize autovacuum settings. The pg_cron extension can automate VACUUM operations to manage bloat proactively. Index bloat can also be detected using pgstatindex, and significantly bloated indexes can be rebuilt using REINDEX or pg_repack. Best practices for using pgstattuple include estimating bloat with check_postgres, analyzing physical storage, monitoring dead_tuple_percent, and avoiding frequent runs against highly active tables, since pgstattuple performs a full scan of the relation.
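A sketch of that workflow over node-postgres; the orders table, orders_pkey index, and the 20% threshold are illustrative:

```ts
import { Client } from "pg";

const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();

// One-time setup (requires appropriate privileges on RDS/Aurora).
await client.query("CREATE EXTENSION IF NOT EXISTS pgstattuple");

// Tuple-level statistics for a table: dead_tuple_percent and free_percent
// are the usual bloat indicators.
const table = await client.query("SELECT * FROM pgstattuple('orders')");
console.log(table.rows[0]);

// Index statistics: low avg_leaf_density or high leaf_fragmentation suggest
// the index may be worth rebuilding with REINDEX or pg_repack.
const index = await client.query("SELECT * FROM pgstatindex('orders_pkey')");
console.log(index.rows[0]);

if (table.rows[0].dead_tuple_percent > 20) {
  // Threshold is illustrative; run a manual VACUUM when autovacuum lags.
  await client.query("VACUUM (VERBOSE) orders");
}
await client.end();
```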
Tech Optimizer
April 2, 2025
Over 1,500 PostgreSQL instances exposed to the internet have been targeted by a cryptocurrency mining campaign attributed to the threat actor JINX-0126. Attackers exploit weak credentials to access PostgreSQL servers and use the SQL command COPY ... FROM PROGRAM for arbitrary command execution. They deploy a shell script to terminate existing cryptominers and deliver the pg_core binary. A Golang binary, disguised as the PostgreSQL multi-user database server, is then downloaded to establish persistence and escalate privileges, leading to the execution of the latest XMRig cryptominer variant. JINX-0126 employs advanced tactics, including unique hashes for binaries and fileless miner payload execution, to evade detection by cloud workload protection platforms.
Tech Optimizer
April 2, 2025
Bun v1.2 has been released, enhancing compatibility with Node.js and introducing a native S3 object storage API and a built-in Postgres client alongside the existing SQLite client. The update focuses on Node.js compatibility, achieving a 90% pass rate on the Node.js test suite for core modules. The team adapted the Node test suite for Bun to address challenges with error message verification. New features include support for the node:http2 module, which offers a 2x speed enhancement, and additional support for node:dgram, node:cluster, and node:zlib. The built-in S3 support allows file operations with a 5x speed improvement over Node.js packages. The new Postgres client includes optimizations such as automatic prepared statements and connection pooling, potentially increasing read speeds by 50% compared to popular Node.js Postgres clients. Bun is developed in Zig and uses WebKit’s JavaScriptCore as its JavaScript engine, with the first version launched in September 2023.
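A short sketch of both built-ins, assuming DATABASE_URL points at a Postgres instance and S3 credentials and bucket are supplied through Bun's standard S3_*/AWS_* environment variables; the users table and object key are placeholders:

```ts
import { sql, s3 } from "bun";

// Built-in Postgres client: tagged-template queries are parameterized,
// prepared automatically, and served from a connection pool.
const users = await sql`SELECT id, name FROM users WHERE active = ${true}`;
console.log(users);

// Built-in S3 API: s3.file() returns a lazy reference; reads and writes
// hit the bucket configured via environment variables.
const report = s3.file("reports/latest.json");
await report.write(JSON.stringify({ generatedAt: new Date().toISOString() }));
console.log(await report.json());
```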
Tech Optimizer
April 1, 2025
Crunchy Data has released an optimized version of its Crunchy Data Warehouse for Kubernetes, integrating Postgres-native Apache Iceberg for enhanced analytics. This version supports both analytical and operational workloads by combining traditional Postgres tables with transactional Iceberg tables. Key features include managed Iceberg tables in PostgreSQL, high-performance analytics through DuckDB integration, the ability to query raw data files in S3, flexible data import/export options, and seamless integration with various analytics tools. The system is designed to be developer-friendly and supports automated, scalable deployments across different infrastructures.
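A hypothetical session against a Crunchy Data Warehouse cluster, assuming the USING iceberg table syntax described in Crunchy's announcement; the connection string, events table, and query are illustrative:

```ts
import { Client } from "pg";

const client = new Client({ connectionString: process.env.WAREHOUSE_URL });
await client.connect();

// A transactional Iceberg table managed from inside Postgres; data lands in
// object storage while the table remains queryable like any other relation.
await client.query(`
  CREATE TABLE events (
    id bigint,
    occurred_at timestamptz,
    name text
  ) USING iceberg
`);

await client.query("INSERT INTO events VALUES (1, now(), 'signup')");

// Analytical queries over Iceberg tables run through the integrated DuckDB
// engine; ordinary Postgres tables in the same database behave as usual.
const { rows } = await client.query(
  "SELECT date_trunc('day', occurred_at) AS day, count(*) FROM events GROUP BY 1"
);
console.log(rows);
await client.end();
```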
Tech Optimizer
April 1, 2025
- An event-driven architecture utilizing Kafka, MongoDB, and PostgreSQL is employed for data management, ensuring real-time tracking and auditing.
- A PostgreSQL trigger on the customer table monitors INSERT, UPDATE, and DELETE operations and uses the LISTEN/NOTIFY mechanism to publish changes (a sketch of this trigger and listener follows the list).
- A Spring Boot listener, CustomerChangeListener, monitors database changes and sends structured events to Apache Kafka via KafkaProducerService.
- A Kafka topic named customer_events is created to manage customer change events, with KafkaProducerService publishing these events and KafkaConsumerService listening for them.
- Events received by KafkaConsumerService are stored in a MongoDB collection called customer_history, which captures details about changes for auditing.
- The MongoDB customer_history collection serves as a repository for historical customer changes, recording who made the change, what was altered, when it occurred, and the rationale.
- A project structure must be established, and the Maven pom.xml file updated with dependencies for Spring Boot, PostgreSQL, MongoDB, and Kafka.
- Application properties need to be configured to connect to PostgreSQL, MongoDB, and the Kafka broker.
- The main application file is CustomerTrackingApplication.java, which runs the service.
- CustomerController.java manages CRUD operations for customer data, triggering database actions and Kafka notifications.
- CustomerService.java contains business logic for managing customer data and interacts with PostgreSQL and Kafka.
- A history table and trigger must be created in PostgreSQL to log all changes to the customer table.
- CustomerChangeListener.java listens for notifications from PostgreSQL and sends relevant data to Kafka.
- Kafka producer and consumer services manage messages related to customer changes, ensuring an accurate history in MongoDB.
- All changes (insertions, updates, deletions) are stored in the customer_history collection in MongoDB.
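A sketch of the trigger and notification-listener pieces described above, written as a node-postgres script rather than the article's Spring Boot classes; connection details are placeholders and the Kafka forwarding call is elided:

```ts
import { Client } from "pg";

const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();

// Trigger function: publish each row change on the "customer_events" channel.
await client.query(`
  CREATE OR REPLACE FUNCTION notify_customer_change() RETURNS trigger AS $$
  BEGIN
    IF TG_OP = 'DELETE' THEN
      PERFORM pg_notify('customer_events',
        json_build_object('op', TG_OP, 'data', row_to_json(OLD))::text);
      RETURN OLD;
    END IF;
    PERFORM pg_notify('customer_events',
      json_build_object('op', TG_OP, 'data', row_to_json(NEW))::text);
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;
`);

// Attach the trigger to the customer table for INSERT, UPDATE, and DELETE.
await client.query(`
  DROP TRIGGER IF EXISTS customer_change_trigger ON customer;
  CREATE TRIGGER customer_change_trigger
  AFTER INSERT OR UPDATE OR DELETE ON customer
  FOR EACH ROW EXECUTE FUNCTION notify_customer_change();
`);

// Stand-in for CustomerChangeListener: receive notifications and forward
// them to the customer_events Kafka topic.
await client.query("LISTEN customer_events");
client.on("notification", (msg) => {
  console.log("customer change:", msg.payload);
  // kafkaProducer.send({ topic: "customer_events", ... }) // elided
});
```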