druid

Apache Druid: a high performance real-time analytics database.

APACHE-2.0 License

druid - Latest Release

Published by cryptoe 7 months ago

Druid 29.0.1

Apache Druid 29.0.1 is a patch release that fixes some issues in the Druid 29.0.0 release.

Bug fixes

  • Added type verification for INSERT and REPLACE to validate that strings and string arrays aren't mixed #15920
  • Concurrent replace now allows pending Peon segments to be upgraded using the Supervisor #15995
  • Changed the targetDataSource attribute to return a string containing the name of the datasource. This reverts the breaking change introduced in Druid 29.0.0 for INSERT and REPLACE MSQ queries #16004 #16031
  • Decreased the size of the distribution Docker image #15968
  • Fixed an issue with SQL-based ingestion where string inputs, such as from CSV, TSV, or string-value fields in JSON, are ingested as null values when they are typed as LONG or BIGINT #15999
  • Fixed an issue where a web console-generated Kafka supervisor spec has flattenSpec in the wrong location #15946
  • Fixed an issue where filters on expression virtual column indexes incorrectly considered values to be null in some cases, for expressions that translate null values into non-null values #15959
  • Fixed an issue where the data loader crashes if the incoming data can't be parsed #15983
  • Improved DOUBLE type detection in the web console #15998
  • Web console-generated queries now only set the context parameter arrayIngestMode to array when you explicitly opt in to use arrays #15927
  • The web console now displays the results of an MSQ query that writes to an external destination through the EXTERN function #15969

Incompatible changes

Changes to targetDataSource in EXPLAIN queries

Druid 29.0.1 includes a breaking change that restores the behavior of targetDataSource to its 28.0.0 and earlier state; only Druid 29.0.0 behaves differently. In 29.0.0, targetDataSource returns a JSON object that includes the datasource name. In all other versions, targetDataSource returns a string containing the name of the datasource.

If you're upgrading from any version other than 29.0.0, there is no change in behavior.

If you are upgrading from 29.0.0, this is an incompatible change.

#16004

Dependency updates

  • Updated PostgreSQL JDBC Driver version to 42.7.2 #15931

Credits

@abhishekagarwal87
@adarshsanjeev
@AmatyaAvadhanula
@clintropolis
@cryptoe
@dependabot[bot]
@ektravel
@gargvishesh
@gianm
@kgyrtkirk
@LakshSingla
@somu-imply
@techdocsmith
@vogievetsky

druid - Druid 29.0.0

Published by LakshSingla 8 months ago

Apache Druid 29.0.0 contains over 350 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 67 contributors.

See the complete set of changes for additional details, including bug fixes.

Review the upgrade notes before you upgrade to Druid 29.0.0.
If you are upgrading across multiple versions, see the Upgrade notes page, which lists upgrade notes for the most recent Druid versions.

# Important features, changes, and deprecations

This section contains important information about new and existing features.

# MSQ export statements (experimental)

Druid 29.0.0 adds experimental support for export statements to the MSQ task engine. This allows query tasks to write data to an external destination through the EXTERN function.

#15689

# SQL PIVOT and UNPIVOT (experimental)

Druid 29.0.0 adds experimental support for the SQL PIVOT and UNPIVOT operators.

The PIVOT operator carries out an aggregation and transforms rows into columns in the output. The following is the general syntax for the PIVOT operator:

PIVOT (aggregation_function(column_to_aggregate)
  FOR column_with_values_to_pivot
  IN (pivoted_column1 [, pivoted_column2 ...])
)

The UNPIVOT operator transforms existing column values into rows. The following is the general syntax for the UNPIVOT operator:

UNPIVOT (values_column 
  FOR names_column
  IN (unpivoted_column1 [, unpivoted_column2 ... ])
)
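
For illustration, here is a minimal sketch of both operators against a hypothetical sales table with device and revenue columns (the table and column names are assumptions, not part of this release):

SELECT *
FROM sales
PIVOT (SUM(revenue) FOR device IN ('phone', 'tablet'))

SELECT *
FROM pivoted_sales
UNPIVOT (revenue FOR device IN (phone, tablet))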

# Range support in window functions (experimental)

Window functions (experimental) now support ranges where both endpoints are unbounded or are the current row. Ranges work in strict mode, which means that Druid will fail queries that aren't supported. You can turn off strict mode for ranges by setting the context parameter windowingStrictValidation to false.

The following examples show window expressions with RANGE frame specifications:

(ORDER BY c)
(ORDER BY c RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
(ORDER BY c RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)

#15703 #15746

# Improved INNER joins

Druid now supports arbitrary join conditions for INNER join. Any sub-conditions that can't be evaluated as part of the join are converted to a post-join filter. Improved join capabilities allow Druid to more effectively support applications like Tableau.

#15302

# Improved concurrent append and replace (experimental)

You no longer have to manually determine the task lock type for concurrent append and replace (experimental) with the taskLockType task context. Instead, Druid can now determine it automatically for you. You can use the context parameter "useConcurrentLocks": true for individual tasks and datasources or enable concurrent append and replace at a cluster level using druid.indexer.task.default.context.

#15684
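
As a sketch of the two options described above, an individual task could set the flag in its context, or a cluster could set it as a default through the property named in this note (values shown are illustrative):

"context": {
  "useConcurrentLocks": true
}

druid.indexer.task.default.context={"useConcurrentLocks": true}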

# First and last aggregators for double, float, and long data types

Druid now supports first and last aggregators for the double, float, and long types in native and MSQ ingestion specs and MSQ queries. Previously, they were supported only for native queries. For more information, see First and last aggregators.

#14462

Additionally, the following functions can now return numeric values:

  • EARLIEST and EARLIEST_BY
  • LATEST and LATEST_BY

You can use these functions as aggregators at ingestion time.

#15607
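
For example, a hypothetical SQL-based ingestion query could roll up a numeric column with LATEST_BY (the datasource and column names below are assumptions, not from this release):

INSERT INTO "latest_prices"
SELECT
  TIME_FLOOR("__time", 'PT1H') AS "__time",
  "item",
  LATEST_BY("price", "__time") AS "latest_price"
FROM "raw_prices"
GROUP BY 1, 2
PARTITIONED BY HOUR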

# Support for logging audit events

Added support for logging audit events and improved coverage of audited REST API endpoints.
To enable logging audit events, set the config druid.audit.manager.type to log in both the Coordinator and Overlord or in common.runtime.properties. When you set druid.audit.manager.type to sql, audit events are persisted to the metadata store.
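
For example, a minimal common.runtime.properties sketch based on the property described above (the choice of log is from this note; use sql to persist events to the metadata store instead):

druid.audit.manager.type=log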

In both cases, Druid audits the following events:

  • Coordinator
    • Update load rules
    • Update lookups
    • Update coordinator dynamic config
    • Update auto-compaction config
  • Overlord
    • Submit a task
    • Create/update a supervisor
    • Update worker config
  • Basic security extension
    • Create user
    • Delete user
    • Update user credentials
    • Create role
    • Delete role
    • Assign role to user
    • Set role permissions

#15480 #15653

Also fixed an issue with the basic auth integration test by not persisting logs to the database.

#15561

# Enabled empty ingest queries

The MSQ task engine now allows empty ingest queries by default. Previously, ingest queries that produced no data would fail with the InsertCannotBeEmpty MSQ fault.
For more information, see Empty ingest queries in the upgrade notes.

#15674 #15495

In the web console, you can use a toggle to control whether an ingestion fails if the ingestion query produces no data.

#15627

# MSQ support for Google Cloud Storage

The MSQ task engine now supports Google Cloud Storage (GCS). You can use durable storage with GCS. See Durable storage configurations for more information.

#15398

# Experimental extensions

Druid 29.0.0 adds the following extensions.

# DDSketch

A new DDSketch extension is available as a community contribution. The DDSketch extension (druid-ddsketch) provides support for approximate quantile queries using the DDSketch library.

#15049

# Spectator histogram

A new histogram extension is available as a community contribution. The Spectator-based histogram extension (druid-spectator-histogram) provides approximate histogram aggregators and percentile post-aggregators based on Spectator fixed-bucket histograms.

#15340

# Delta Lake

A new Delta Lake extension is available as a community contribution. The Delta Lake extension (druid-deltalake-extensions) lets you use the Delta Lake input source to ingest data stored in a Delta Lake table into Apache Druid.

#15755

# Functional area and related changes

This section contains detailed release notes separated by areas.

# Web console

# Support for array types

Added support for array types for all the ingestion wizards.

When loading multi-value dimensions or arrays using Druid's Query console, note the value of the arrayIngestMode parameter. Druid now configures the arrayIngestMode parameter in the data loading flow, and its value can persist across the SQL tab, even if you execute unrelated Data Manipulation Language (DML) operations within the same tab.

#15588

# File inputs for query detail archive

The Load query detail archive now supports loading queries by selecting a JSON file directly or dragging the file into the dialog.

#15632

# Improved lookup dialog

The lookup dialog in the web console now includes the following optional fields. See JDBC lookup for more information.

  • Jitter seconds
  • Load timeout seconds
  • Max heap percentage

#15472

# Improved time chart brush and added auto-granularity

Improved the web console Explore view as follows:

  • Added the notion of timezone in the explore view.
  • The time chart now automatically picks a granularity based on the current time filter extent when "auto" is selected (the default).
  • Brush is now automatically enabled in the time chart.
  • Brush interval snaps to the selected time granularity.
  • Added a highlight bubble to all visualizations (except table because it has its own).

#14990

# Other web console improvements

  • Added the ability to detect multiple EXPLAIN PLAN queries in the workbench and run them individually #15570
  • Added the ability to sort a segment table on start and end when grouping by interval #15720
  • Improved the time shift for compare logic in the web console to include literals #15433
  • Improved robustness of time shifting in tables in Explore view #15359
  • Improved ingesting data using the web console #15339
  • Improved management proxy detection #15453
  • Fixed rendering on a disabled worker #15712
  • Fixed an issue where waitUntilSegmentLoad would always be set to true even if explicitly set to false #15781
  • Enabled table driven query modification actions to work with slices #15779

# General ingestion

# Added system fields to input sources

Added the option to return system fields when defining an input source. This allows for ingestion of metadata, such as an S3 object's URI.

#15276

# Changed how Druid allocates weekly segments

When the requested granularity is a month or larger but a segment can't be allocated, Druid resorts to day partitioning.
Unless explicitly specified, Druid skips week-granularity segments for data partitioning because these segments don't align with the end of the month or more coarse-grained intervals.

#15589

# Changed how empty or null array columns are stored

Columns ingested with the auto column indexer that contain only empty or null-containing arrays are now stored as ARRAY<LONG> instead of COMPLEX<json>.

#15505

# Enabled skipping compaction for datasources with partial-eternity segments

Druid now skips compaction for datasources with segments that have an interval start or end which coincides with Eternity interval end-points.

#15542

# Kill task improvements

Improved kill tasks as follows:

  • Resolved an issue where the auto-kill feature failed to honor the specified buffer period. This occurred when multiple unused segments within an interval were marked as unused at different times.
  • You can submit kill tasks with an optional parameter maxUsedStatusLastUpdatedTime. When set to a date-time, the kill task considers only segments in the specified interval that were marked as unused no later than this time. The default behavior is to kill all unused segments in the interval regardless of when they were marked as unused.

#15710

# Segment allocation improvements

Improved segment allocation as follows:

  • Enhanced polling in segment allocation queue #15590
  • Fixed an issue in segment allocation that could cause loss of appended data when running interleaved append and replace tasks #15459

# Other ingestion improvements

  • Added a default implementation for the evalDimension method in the RowFunction interface #15452
  • Added a configurable delay to the Peon service that determines how long a Peon should wait before dropping a segment #15373
  • Improved metadata store updates by attempting to retry updates rather than failing #15141
  • Improved the error message you get when taskQueue reaches maxSize #15409
  • Fixed an issue with columnar frames always writing multi-valued columns where the input column had hasMultipleValues = UNKNOWN #15300
  • Fixed a race condition where there were multiple attempts to publish segments for the same sequence #14995
  • Fixed a race condition that can occur at high streaming concurrency #15174
  • Fixed an issue where complex types that are also numbers were assumed to also be double #15272
  • Fixed an issue with unnecessary retries triggered when exceptions like IOException obfuscated S3 exceptions #15238
  • Fixed segment retrieval when the input interval does not lie within the years [1000, 9999] #15608
  • Fixed empty strings being incorrectly converted to null values #15525
  • Simplified IncrementalIndex and OnHeapIncrementalIndex by removing some parameters #15448
  • Updated active task payload retrieval to read from memory before falling back to the metadata store #15377
  • Updated OnheapIncrementalIndex to no longer try to offer a thread-safe "add" method #15697

# SQL-based ingestion

# Added castToType parameter

Added optional castToType parameter to auto column schema.

#15417

# Improved the EXTEND operator

The EXTEND operator now supports the following array types: VARCHAR ARRAY, BIGINT ARRAY, FLOAT ARRAY, and DOUBLE ARRAY.

The following example shows an extern input with Druid native input types ARRAY<STRING>, ARRAY<LONG> and STRING:

EXTEND (a VARCHAR ARRAY, b BIGINT ARRAY, c VARCHAR)

#15458

# Improved tombstone generation to honor granularity specified in a REPLACE query

MSQ REPLACE queries now generate tombstone segments honoring the segment granularity specified in the query rather than generating irregular tombstones. If a query generates more than 5000 tombstones, Druid returns an MSQ TooManyBucketsFault error, similar to the behavior with data segments.

#15243

# Improved hash joins using filters

Improved consistency of JOIN behavior for queries using either the native or MSQ task engine to prune based on base (left-hand side) columns only.

#15299

# Configurable page size limit

You can now limit the page size for results of SELECT queries run using the MSQ task engine. See rowsPerPage in the SQL-based ingestion reference.

# Streaming ingestion

# Improved Amazon Kinesis automatic reset

Changed Amazon Kinesis automatic reset behavior to only reset the checkpoints for partitions where sequence numbers are unavailable.

#15338

# Querying

# Added IPv6_MATCH SQL function

Added IPv6_MATCH SQL function for matching IPv6 addresses in a subnet:

IPV6_MATCH(address, subnet)

#15212
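
For example, a hypothetical filter on a column named ipv6_address (the datasource, column name, and subnet are illustrative assumptions):

SELECT COUNT(*)
FROM "requests"
WHERE IPV6_MATCH("ipv6_address", '75e9:efa4:29c6:85f6::/64')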

# Added JSON_QUERY_ARRAY function

Added JSON_QUERY_ARRAY which is similar to JSON_QUERY except the return type is always ARRAY<COMPLEX<json>> instead of COMPLEX<json>. Essentially, this function allows extracting arrays of objects from nested data and performing operations such as UNNEST, ARRAY_LENGTH, ARRAY_SLICE, or any other available ARRAY operations.

#15521
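
As a sketch, a hypothetical query could extract an array of objects from a nested column and unnest it (the datasource, column, and JSON paths are assumptions):

SELECT JSON_VALUE(item, '$.name') AS item_name
FROM "events"
CROSS JOIN UNNEST(JSON_QUERY_ARRAY("payload", '$.items')) AS t(item)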

# Added support for aggregateMultipleValues

Improved the ANY_VALUE(expr) function to support the boolean option aggregateMultipleValues. The aggregateMultipleValues option is enabled by default. When you run ANY_VALUE on an MVD, the function returns the stringified array. If aggregateMultipleValues is set to false, ANY_VALUE returns the first value instead.

#15434

# Added native arrayContainsElement filter

Added native arrayContainsElement filter to improve performance when using ARRAY_CONTAINS on array columns.

#15366 #15455

ARRAY_OVERLAP now also uses the arrayContainsElement filter when filtering ARRAY typed columns, so that it can use indexes in the same way as ARRAY_CONTAINS.

#15451

# Added index support

Improved nested JSON columns as follows:

  • Added ValueIndexes and ArrayElementIndexes for nested arrays.
  • Added ValueIndexes for nested long and double columns.

#15752

# Improved timestamp_extract function

The timestamp_extract(expr, unit, [timezone]) Druid native query function now supports dynamic values.

#15586

# Improved JSON_VALUE and JSON_QUERY

Added support for using expressions to compute the JSON path argument for JSON_VALUE and JSON_QUERY functions dynamically. The JSON path argument doesn't have to be a constant anymore.

#15320
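
For instance, the path can now be built from another column, as in this minimal sketch (the datasource and column names are assumptions):

SELECT JSON_VALUE("payload", CONCAT('$.', "attr_name"))
FROM "events"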

# Improved filtering performance for lookups

Enhanced filtering performance for lookups as follows:

  • Added sqlReverseLookupThreshold SQL query context parameter. sqlReverseLookupThreshold represents the maximum size of an IN filter that will be created as part of lookup reversal #15832
  • Improved loading and dropping of containers for lookups to reduce inconsistencies during updates #14806
  • Changed behavior for initialization of lookups to load the first lookup as is, regardless of cache status #15598

# Enabled query request queuing by default when total laning is turned on

When there are fewer query scheduler threads than server HTTP threads, total laning turns on.
This reserves some HTTP threads for non-query requests such as health checks.
Previously, total laning rejected any query request that exceeded the lane capacity.
Now, excess requests are instead queued with a timeout equal to MIN(Integer.MAX_VALUE, druid.server.http.maxQueryTimeout).

#15440

# Other querying improvements

  • Added a supplier that can return NullValueIndex to be used by NullFilter. This improvement should speed up is null and is not null filters on JSON columns #15687
  • Added an option to compare results with relative error tolerance #15429
  • Added capability for the Broker to access datasource schemas defined in the catalog when processing SQL queries #15469
  • Added CONCAT flattening and filter decomposition #15634
  • Enabled ARRAY_TO_MV to support expression inputs #15528
  • Improved ExpressionPostAggregator to handle ARRAY types output by the grouping engine #15543
  • Improved the error message you get when there's an error in the specified interval #15454
  • Improved how three-valued logic is handled #15629
  • Improved error reporting for math functions #14987
  • Improved handling of COALESCE, SEARCH, and filter optimization #15609
  • Increased memory available for subqueries when the query scheduler is configured to limit queries below the number of server threads #15295
  • Optimized SQL planner for filter expressions by introducing column indexes for expression virtual columns #15585
  • Optimized queries involving large NOT IN operations #15625
  • Fixed an issue with nested empty array fields #15532
  • Fixed NPE with virtual expression with unnest #15513
  • Fixed an issue with AND and OR operators and numeric nvl not clearing out stale null vectors for vector expression processing #15587
  • Fixed an issue with filtering columns when using partial paths such as in JSON_QUERY #15643
  • Fixed queries that raise an exception when sketches are stored in cache #15654
  • Fixed queries involving JSON functions that failed when using negative indexes #15650
  • Fixed an issue where queries involving filters on TIME_FLOOR could encounter ClassCastException when comparing RangeValue in CombineAndSimplifyBounds #15778

# Data management

# Changed numCorePartitions to 0 for tombstones

Tombstone segments now have 0 core partitions. This means they can be dropped or removed independently without affecting the availability of other appended segments in the same co-partition space. Prior to this change, removing a tombstone with 1 core partition whose partition space contained appended segments could make those appended segments unavailable.

#15379

# Clean up duty for non-overlapping eternity tombstones

Added MarkEternityTombstonesAsUnused to clean up non-overlapping eternity tombstones—tombstone segments that either start at -INF or end at INF and don't overlap with any overshadowed used segments in the datasource.

Also added a new metric segment/unneededEternityTombstone/count to count the number of dropped non-overshadowed eternity tombstones per datasource.

#15281

# Enabled skipping compaction for datasources with partial-eternity segments

Druid now skips compaction for datasources with segments that have their interval start or end coinciding with Eternity interval end-points.

#15542

# Enhanced the JSON parser unexpected token logging

The JSON parser unexpected token error now includes the context of the expected VALUE_STRING token. This makes it easier to track mesh/proxy network error messages and avoids unnecessary investigation of Druid server REST endpoint responses.

#15176

# Other data management improvements

  • Fixed an issue where the Broker would return an HTTP 400 status code instead of 503 when a Coordinator was temporarily unavailable, such as during a rolling upgrade #15756
  • Added user identity to Router query request logs #15126
  • Improved process to retrieve segments from metadata store by retrieving segments in batches #15305
  • Improved logging messages when skipping auto-compaction for a data source #15460
  • Improved compaction by modifying the segment iterator to skip intervals without data #15676
  • Increased _acceptQueueSize based on value of net.core.somaxconn #15596
  • Optimized the process to mark segments as unused #15352
  • Updated auto-compaction to preserve spatial dimensions rather than rewrite them into regular string dimensions #15321

# Metrics and monitoring

  • Added worker status and duration metrics in live and task reports #15180
  • Updated serviceName for segment/count metric to match the configured metric name within the StatsD emitter #15347

# Extensions

# Basic security improvements

The computed hash values of passwords are now cached for the druid-basic-security extension to boost authentication validator performance.

#15648

# DataSketches improvements

  • Improved performance of HLL sketch merge aggregators #15162
  • Updated histogram post-aggregators for Quantiles and KLL sketches for when all values in the sketch are equal. Previously, these queries failed; now they return [N, 0, 0, ...], where N is the number of values in the sketch and the length of the list is equal to the value assigned to numBins #15381

# Microsoft Azure improvements

  • Added support for Azure Storage Accounts authentication options #15287
  • Added support for Azure Government when using Microsoft Azure Storage for deep storage #15523
  • Fixed the batchDeleteFiles method in Azure Storage #15730

# Kubernetes improvements

  • Added cleanup lifecycle management for MiddleManager-less task scheduling #15133
  • Fixed an issue where the Overlord does not start when a cluster does not use a MiddleManager or ZooKeeper #15445
  • Improved logs and status messages for MiddleManager-less ingestion #15527

# Kafka emitter improvements

  • Added a config option to the Kafka emitter that lets you mask sensitive values for the Kafka producer. This feature is optional and will not affect prior configs for the emitter #15485
  • Resolved InterruptedException logging in ingestion task logs #15519

# Prometheus emitter improvements

You can configure the push gateway strategy to delete metrics from the Prometheus push gateway on task shutdown using the following Prometheus emitter configurations (see the example snippet after this list):

  • druid.emitter.prometheus.deletePushGatewayMetricsOnShutdown: When set to true, peon tasks delete metrics from the Prometheus push gateway on task shutdown. Default value is false.
  • druid.emitter.prometheus.waitForShutdownDelay: Time in milliseconds to wait for peon tasks to delete metrics from pushgateway on shutdown. Applicable only when druid.emitter.prometheus.deletePushGatewayMetricsOnShutdown is set to true. Default value is none, meaning that there is no delay between peon task shutdown and metrics deletion from the push gateway.

#14935
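
A minimal runtime.properties sketch using the two properties above (the delay value is illustrative only):

druid.emitter.prometheus.deletePushGatewayMetricsOnShutdown=true
druid.emitter.prometheus.waitForShutdownDelay=60000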

# Iceberg improvements

Improved the Iceberg extension as follows:

  • Added a parameter snapshotTime to the iceberg input source spec that allows the user to ingest data files associated with the most recent snapshot as of the given time. This helps the user ingest data based on older snapshots by specifying the associated snapshot time #15348
  • Added a new Iceberg ingestion filter of type range to filter on ranges of column values #15782
  • Fixed a typo in the Iceberg warehouse path for s3 #15823

# Upgrade notes and incompatible changes

# Upgrade notes

# Changed equals filter for native queries

When using the equality filter on mixed type auto columns that contain arrays, you must now filter against the column's presenting type. This means that if any rows are arrays (for example, the segment metadata and information_schema report the type as some array type), then native queries must also filter as if the values are that array type.

This change impacts mixed type auto columns that contain both scalars and arrays. It doesn't impact SQL, which already has this limitation due to how the type presents itself.

#15503

# Console automatically sets arrayIngestMode for MSQ queries

The Druid console now configures the arrayIngestMode parameter in the data loading flow, and its value can persist across the SQL tab unless manually updated. Therefore, when loading multi-value dimensions or arrays in the Druid web console, note the value of the arrayIngestMode parameter to avoid accidentally mixing MVDs and arrays in the same column of a datasource.

#15588

# Improved concurrent append and replace (experimental)

You no longer have to manually determine the task lock type for concurrent append and replace (experimental) with the taskLockType task context. Instead, Druid can now determine it automatically for you. You can use the context parameter "useConcurrentLocks": true for individual tasks and datasources or enable concurrent append and replace at a cluster level using druid.indexer.task.default.context.

#15684

# Enabled empty ingest queries

The MSQ task engine now allows empty ingest queries by default. For queries that don't generate any output rows, the MSQ task engine reports zero values for numTotalRows and totalSizeInBytes instead of null. Previously, ingest queries that produced no data would fail with the InsertCannotBeEmpty MSQ fault.

To revert to the original behavior, set the MSQ query parameter failOnEmptyInsert to true.

#15495 #15674

# Enabled query request queuing by default when total laning is turned on

When there are fewer query scheduler threads than server HTTP threads, total laning turns on.
This reserves some HTTP threads for non-query requests such as health checks.
Previously, total laning rejected any query request that exceeded the lane capacity.
Now, excess requests are instead queued with a timeout equal to MIN(Integer.MAX_VALUE, druid.server.http.maxQueryTimeout).

#15440

# Changed how empty or null array columns are stored

Columns ingested with the auto column indexer that contain only empty or null arrays are now stored as ARRAY<LONG> instead of COMPLEX<json>.

#15505

# Changed how Druid allocates weekly segments

When the requested granularity is a month or larger but a segment can't be allocated, Druid resorts to day partitioning.
Unless explicitly specified, Druid skips week-granularity segments for data partitioning because these segments don't align with the end of the month or more coarse-grained intervals.

#15589

# Removed the auto search strategy

Removed the auto search strategy from the native search query. Setting searchStrategy to auto is now equivalent to useIndexes.

#15550

# Developer notes

# Improved InDimFilter reverse-lookup optimization

This improvement includes the following changes:

  • Added the mayIncludeUnknown parameter to DimFilter#optimize.
  • Enabled InDimFilter#optimizeLookup to handle mayIncludeUnknown and perform reverse lookups in a wider range of cases.
  • Made unapply method in LookupExtractor protected and relocated callers to unapplyAll.

If your extensions provide a DimFilter, you may need to rebuild them to ensure compatibility with this release.

#15611

# Other developer improvements

  • Fixed an issue with the Druid Docker image #15264

# Web console logging

The web console now logs request errors in end-to-end tests to help with debugging.

#15483

# Dependency updates

The following dependencies have been updated:

  • Added chronoshift as a dependency #14990

  • Added gson to pom.xml #15488

  • Updated Confluent's dependencies to 6.2.12 #15441

  • Excluded jackson-jaxrs from ranger-plugin-common, which isn't required, to address CVEs #15481

  • Updated AWS SDK version to 1.12.638 #15814

  • Updated Avro to 1.11.3 #15419

  • Updated Ranger libraries to the newest available version #15363

  • Updated the iceberg core version to 1.4.1 #15348

  • Reduced dependency footprint for the iceberg extension #15280

  • Updated com.github.eirslett version to 1.15.0 #15556

  • Updated multiple webpack dependencies:

    • webpack to 5.89.0
    • webpack-bundle-analyzer to 4.10.1
    • webpack-cli to 5.1.4
    • webpack-dev-server to 4.15.1

    #15555

  • Updated pac4j-oidc java security library version to 4.5.7 #15522

  • Updated io.kubernetes.client-java version to 19.0.0 and docker-java-bom to 3.3.4 #15449

  • Updated core Apache Kafka dependencies to 3.6.1 #15539

  • Updated and pruned multiple dependencies for the web console, including dropping Babel. As a result, Internet Explorer 11 is no longer supported with the web console #15487

  • Updated Apache Zookeeper to 3.8.3 from 3.5.10 #15477

  • Updated Guava to 32.0.1 from 31.1 #15482

  • Updated multiple dependencies to address CVEs:

    • dropwizard-metrics to 4.2.22 to address GHSA-mm8h-8587-p46h in com.rabbitmq:amqp-client
    • ant to 1.10.14 to resolve GHSA-f62v-xpxf-3v68, GHSA-4p6w-m9wc-c9c9, GHSA-q5r4-cfpx-h6fh, and GHSA-5v34-g2px-j4fw
    • commons-compress to 1.24.0 to resolve GHSA-cgwf-w82q-5jrr
    • jose4j to 0.9.3 to resolve GHSA-7g24-qg88-p43q and GHSA-jgvc-jfgh-rjvv
    • kotlin-stdlib to 1.6.0 to resolve GHSA-cqj8-47ch-rvvq and CVE-2022-24329

    #15464

  • Updated Jackson to version 2.12.7.1 to address CVE-2022-42003 and CVE-2022-42004 which affects jackson-databind #15461

  • Updated com.google.code.gson:gson from 2.2.4 to 2.10.1 since 2.2.4 is affected by CVE-2022-25647 #15461

  • Updated Jedis to version 5.0.2 #15344

  • Updated commons-codec:commons-codec from 1.13 to 1.16.0 #14819

  • Updated Nimbus version to 8.22.1 #15753

# Credits

@17px
@317brian
@a2l007
@abhishekagarwal87
@abhishekrb19
@adarshsanjeev
@AlbericByte
@aleksi75
@AmatyaAvadhanula
@ankit0811
@aruraghuwanshi
@BartMiki
@benhopp
@bsyk
@clintropolis
@cristian-popa
@cryptoe
@dchristle
@dependabot[bot]
@ektravel
@fectrain
@findingrish
@gargvishesh
@georgew5656
@gianm
@hfukada
@hofi1
@HudsonShi
@janjwerner-confluent
@jon-wei
@kaisun2000
@KeerthanaSrikanth
@kfaraz
@kgyrtkirk
@krishnanand5
@LakshSingla
@legoscia
@lkm
@lorem--ipsum
@maytasm
@nasuiyile
@nozjkoitop
@oo007
@pagrawal10
@Pankaj260100
@pranavbhole
@rash67
@sb89594
@sekikn
@sergioferragut
@somu-imply
@suneet-s
@techdocsmith
@tejaswini-imply
@TestBoost
@TSFenwick
@Tts-233
@vinlee19
@vivek807
@vogievetsky
@vtlim
@writer-jill
@xvrl
@yashdeep97
@YongGang
@yuanlihan
@zachjsh

druid - Druid 28.0.1

Published by LakshSingla 10 months ago

Description

Apache Druid 28.0.1 is a patch release that fixes some issues in the 28.0.0 release. See the complete set of changes for additional details.

# Notable Bug fixes

# Credits

Thanks to everyone who contributed to this release!

@cryptoe
@gianm
@kgyrtkirk
@LakshSingla
@vogievetsky

druid - Druid 28.0.0

Published by LakshSingla 11 months ago

Apache Druid 28.0.0 contains over 420 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 57 contributors.

See the complete set of changes for additional details, including bug fixes.

Review the upgrade notes and incompatible changes before you upgrade to Druid 28.0.0.

# Important features, changes, and deprecations

In Druid 28.0.0, we have made substantial improvements to querying to make the system more ANSI SQL compatible. This includes changes in handling NULL and boolean values as well as boolean logic. At the same time, the Apache Calcite library has been upgraded to the latest version. While we have documented known query behavior changes, please read the upgrade notes section carefully. Test your application before rolling out to broad production scenarios while closely monitoring the query status.

# SQL compatibility

Druid continues to make SQL query execution more consistent with how standard SQL behaves. However, there are feature flags available to restore the old behavior if needed.

# Three-valued logic

Druid native filters now observe SQL three-valued logic (true, false, or unknown) instead of Druid's classic two-state logic by default, when the following default settings apply:

  • druid.generic.useThreeValueLogicForNativeFilters = true
  • druid.expressions.useStrictBooleans = true
  • druid.generic.useDefaultValueForNull = false

#15058

# Strict booleans

druid.expressions.useStrictBooleans is now enabled by default.
Druid now handles booleans strictly using 1 (true) or 0 (false).
Previously, true and false could be represented either as true and false as well as 1 and 0, respectively.
In addition, Druid now returns a null value for Boolean comparisons like True && NULL.

If you don't explicitly configure this property in runtime.properties, clusters now use LONG types for any ingested boolean values and in the output of boolean functions for transformations and query time operations.
For more information, see SQL compatibility in the upgrade notes.

#14734

# NULL handling

druid.generic.useDefaultValueForNull is now disabled by default.
Druid now differentiates between empty records and null records.
Previously, Druid might treat empty records as empty or null.
For more information, see SQL compatibility in the upgrade notes.

#14792

# SQL planner improvements

Druid uses Apache Calcite for SQL planning and optimization. Starting in Druid 28.0.0, the Calcite version has been upgraded from 1.21 to 1.35. This upgrade brings in many bug fixes in SQL planning from Calcite.

# Dynamic parameters

As part of the Calcite upgrade, the behavior of type inference for dynamic parameters has changed. To avoid any type inference issues, explicitly CAST all dynamic parameters as a specific data type in SQL queries. For example, use:

SELECT (1 * CAST (? as DOUBLE))/2 as tmp

Do not use:

SELECT (1 * ?)/2 as tmp

# Async query and query from deep storage

Query from deep storage is no longer an experimental feature. When you query from deep storage, more data is available for queries without having to scale your Historical services to accommodate more data. To benefit from the space saving that query from deep storage offers, configure your load rules to unload data from your Historical services.

# Support for multiple result formats

Query from deep storage now supports multiple result formats.
Previously, the /druid/v2/sql/statements/ endpoint only supported results in the object format. Now, results can be written in any format specified in the resultFormat parameter.
For more information on result parameters supported by the Druid SQL API, see Responses.

#14571

# Broadened access for queries from deep storage

Users with the STATE permission can interact with status APIs for queries from deep storage. Previously, only the user who submitted the query could use those APIs. This enables the web console to monitor the running status of the queries. Users with the STATE permission can access the query results.

#14944

# MSQ queries for realtime tasks

The MSQ task engine can now include real time segments in query results. To do this, use the includeSegmentSource context parameter and set it to REALTIME.

#15024
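
For example, the query context of an MSQ query could include the parameter as follows (a sketch of the context fragment only):

"context": {
  "includeSegmentSource": "REALTIME"
}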

# MSQ support for UNION ALL queries

You can now use the MSQ task engine to run UNION ALL queries with UnionDataSource.

#14981

# Ingest from multiple Kafka topics to a single datasource

You can now ingest streaming data from multiple Kafka topics to a datasource using a single supervisor.
You configure the topics for the supervisor spec using a regex pattern as the value for topicPattern in the IO config. If you add new topics to Kafka that match the regex, Druid automatically starts ingesting from those new topics.

If you enable multi-topic ingestion for a datasource, downgrading will cause the Supervisor to fail.
For more information, see Stop supervisors that ingest from multiple Kafka topics before downgrading.

#14424
#14865
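
A partial supervisor ioConfig sketch, assuming Kafka topics named like metrics-east and metrics-west (the regex and the other fields are illustrative and not a complete spec):

"ioConfig": {
  "type": "kafka",
  "topicPattern": "metrics-.*",
  "consumerProperties": {
    "bootstrap.servers": "kafka01:9092"
  }
}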

# SQL UNNEST and ingestion flattening

The UNNEST function is no longer experimental.

Druid now supports UNNEST in SQL-based batch ingestion and query from deep storage, so you can flatten arrays easily. For more information, see UNNEST and Unnest arrays within a column.

You no longer need to include the context parameter enableUnnest: true to use UNNEST.

#14886

# Recommended syntax for SQL UNNEST

The recommended syntax for SQL UNNEST has changed. We recommend using CROSS JOIN instead of commas for most queries to prevent issues with precedence. For example, use:

SELECT column_alias_name1 FROM datasource CROSS JOIN UNNEST(source_expression1) AS table_alias_name1(column_alias_name1) CROSS JOIN UNNEST(source_expression2) AS table_alias_name2(column_alias_name2), ...

Do not use:

SELECT column_alias_name FROM datasource, UNNEST(source_expression1) AS table_alias_name1(column_alias_name1), UNNEST(source_expression2) AS table_alias_name2(column_alias_name2), ...

# Window functions (experimental)

You can use window functions in Apache Druid to produce values based upon the relationship of one row within a window of rows to the other rows within the same window. A window is a group of related rows within a result set. For example, rows with the same value for a specific dimension.

Enable window functions in your query with the enableWindowing: true context parameter.

#15184
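
For example, a hypothetical query over the Wikipedia sample data (the datasource and columns channel, page, and added are assumptions) could compute a per-channel total alongside grouped rows, submitted with "enableWindowing": true in the query context:

SELECT
  "channel",
  "page",
  SUM("added") AS "added_on_page",
  SUM(SUM("added")) OVER (PARTITION BY "channel") AS "added_on_channel"
FROM "wikipedia"
GROUP BY "channel", "page"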

# Concurrent append and replace (experimental)

Druid 28.0.0 adds experimental support for concurrent append and replace.
This feature allows you to safely replace the existing data in an interval of a datasource while new data is being appended to that interval. One of the most common applications of this is appending new data to an interval while compaction of that interval is already in progress.
For more information, see Concurrent append and replace.

Segment locking will be deprecated and removed in favor of concurrent append and replace, which is much simpler in design. With concurrent append and replace, Druid doesn't lock out compaction jobs because of active realtime ingestion.

# Task locks for append and replace batch ingestion jobs

Append batch ingestion jobs can now share locks. This allows you to run multiple append batch ingestion jobs against the same time interval. Replace batch ingestion jobs still require an exclusive lock. This means you can run multiple append batch ingestion jobs and one replace batch ingestion job for a given interval.

#14407

# Streaming ingestion with concurrent replace

Streaming jobs reading from Kafka and Kinesis with APPEND locks can now ingest concurrently with compaction running with REPLACE locks. The segment granularity of the streaming job must be equal to or finer than that of the concurrent replace job.

#15039

# Functional area and related changes

This section contains detailed release notes separated by areas.

# Web console

# Added UI support for segment loading query context parameter

The web console supports the waitUntilSegmentsLoad query context parameter.

#15110

# Added concurrent append and replace switches

The web console includes concurrent append and replace switches.

The following screenshot shows the concurrent append and replace switches in the classic batch ingestion wizard:

The following screenshot shows the concurrent append and replace switches in the compaction configuration UI:

#15114

# Added UI support for ingesting from multiple Kafka topics to a single datasource

The web console supports ingesting streaming data from multiple Kafka topics to a datasource using a single supervisor.

#14833

# Other web console improvements

  • You can now copy query results from the web console directly to the clipboard #14889
  • The web console now shows the execution dialog for query_controller tasks in the task view instead of the generic raw task details dialog. You can still access the raw task details from the ellipsis (...) menu #14930
  • You can now select a horizontal range in the web console time chart to modify the current WHERE clause #14929
  • You can now set dynamic query parameters in the web console #14921
  • You can now edit the Coordinator dynamic configuration in the web console #14791
  • You can now prettify SQL queries and use flatten with a Kafka input format #14906
  • A warning now appears when a CSV or TSV sample contains newlines that Druid does not accept #14783
  • You can now select a format when downloading data #14794
  • Improved the clarity of cluster default rules in the retention dialog #14793
  • The web console now detects inline queries in the query text and lets you run them individually #14810
  • You can now reset specific partition offsets for a supervisor #14863

# Ingestion

# JSON and auto column indexer

The json column type is now equivalent to using auto in JSON-based batch ingestion dimension specs. Upgrade your ingestion specs to json to take advantage of the features and functionality of auto, including the following:

  • Type specializations including ARRAY typed columns
  • Better support for nested arrays of strings, longs, and doubles
  • Smarter index utilization

json type columns created with Druid 28.0.0 are not backwards compatible with Druid versions older than 26.0.0.
If you upgrade from one of these versions, you can continue to write nested columns in a backwards compatible format (version 4).

For more information, see Nested column format in the upgrade notes.

#14955
#14456

# Ingestion status

Ingestion reports now include a segmentLoadStatus object that provides information related to the ingestion, such as duration and total segments.

#14322

# SQL-based ingestion

# Ability to ingest ARRAY types

SQL-based ingestion now supports storing ARRAY typed values in ARRAY typed columns as well as storing both VARCHAR and numeric typed arrays.
Previously, the MSQ task engine stored ARRAY typed values as multi-value dimensions instead of ARRAY typed columns.

The MSQ task engine now includes the arrayIngestMode query context parameter, which controls how
ARRAY types are stored in Druid segments.
Set the arrayIngestMode query context parameter to array to ingest ARRAY types.

In Druid 28.0.0, the default mode for arrayIngestMode is mvd for backwards compatibility, which only supports VARCHAR typed arrays and stores them as multi-value dimensions. This default is subject to change in future releases.

For information on how to migrate to the new behavior, see the Ingestion options for ARRAY typed columns in the upgrade notes.
For information on inserting, filtering, and grouping behavior for ARRAY typed columns, see Array columns.

#15093
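
As a sketch, the context of an SQL-based ingestion query that ingests ARRAY typed values could set the parameter like this:

"context": {
  "arrayIngestMode": "array"
}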

# Numeric array type support

Row-based frames and, by extension, the MSQ task engine now support numeric array types. This means that all queries consuming or producing arrays work with the MSQ task engine. Numeric arrays can also be ingested using SQL-based ingestion with MSQ. For example, queries like SELECT [1, 2] are valid now since they consume a numeric array instead of failing with an unsupported column type exception.

#14900

# Azure Blob Storage support

Added support for Microsoft Azure Blob Storage.
You can now use fault tolerance and durable storage with Microsoft Azure Blob Storage.
For more information, see Durable storage.

#14660

# Other SQL-based ingestion improvements

  • Added a new rowsPerPage context parameter for the MSQ task engine.
    Use rowsPerPage to limit the number of rows per page. For more information on context parameters for the MSQ task engine, see Context parameters #14994
  • Druid now ignores ServiceClosedException on postCounters while the controller is offline #14707
  • Improved error messages related to OVERWRITE keyword #14870

# Streaming ingestion

# Ability to reset offsets for a supervisor

Added a new API endpoint /druid/indexer/v1/supervisor/:supervisorId/resetOffsets to reset specific partition offsets for a supervisor without resetting the entire set.
This endpoint clears only the specified offsets in Kafka or sequence numbers in Kinesis, prompting the supervisor to resume data reading.

#14772

# Other streaming ingestion improvements

  • Added PropertyNamingStrategies from Jackson to fix Hadoop ingestion and make it compatible with newer Jackson #14671
  • Added pod name to the TaskLocation object for Kubernetes task scheduling to make debugging easier #14758
  • Added lifecycle hooks to KubernetesTaskRunner #14790
  • Added new method for SqlStatementResource and SqlTaskResource to set request attribute #14878
  • Added a sampling factor for DeterminePartitionsJob #13840
  • Added usedClusterCapacity to the GET /totalWorkerCapacity response. Use this API to get the total ingestion capacity on the overlord #14888
  • Improved Kubernetes task runner performance #14649
  • Improved handling of long data source names. Previously, the Kubernetes task runner would throw an error if the name of a data source was too long #14620
  • Improved the streaming ingestion completion timeout error message #14636
  • Druid now retries fetching S3 task logs on transient S3 errors #14714
  • Druid now reports task/pending/time metrics for Kubernetes-based ingestion #14698
  • Druid now reports k8s/peon/startup/time metrics for Kubernetes-based ingestion #14771
  • handoffConditionTimeout now defaults to 15 minutes—the default change won't affect existing supervisors #14539
  • Fixed an NPE with checkpoint parsing for streaming ingestion #14353
  • Fixed an issue with Hadoop ingestion writing arrays as objects.toString as a result of transform expressions #15127
  • The PodTemplateTaskAdapter now accounts for queryable tasks #14789
  • The rolling supervisor now restarts at taskDuration #14396
  • S3 deleteObjects requests are now retried if the failure state allows retry #14776
  • You can now ingest the name of a Kafka topic to a datasource #14857

# Querying

# Improved LOOKUP function

The LOOKUP function now accepts an optional constant string as a third argument. This string is used to replace missing values in results. For example, the query LOOKUP(store, 'store_to_country', 'NA') returns NA if the store_to_country value is missing for a given store.

#14956

# AVG function

The AVG aggregation function now returns a double instead of a long.

#15089

# Improvements to EARLIEST and LATEST operators

Improved EARLIEST and LATEST operators as follows:

  • EARLIEST and LATEST operators now rewrite to EARLIEST_BY and LATEST_BY during query processing to make the __time column reference explicit to Calcite. #15095
  • You can now use EARLIEST/EARLIEST_BY and LATEST/LATEST_BY for STRING columns without specifying the maxBytesPerValue parameter.
    If you omit the maxBytesPerValue parameter, the aggregations default to 1024 bytes for the buffer. #14848

# Functions for evaluating distinctness

New SQL and native query functions allow you to evaluate whether two expressions are distinct or not distinct.
Expressions are distinct if they have different values or if one of them is NULL.
Expressions are not distinct if their values are the same or if both of them are NULL.

Because the functions treat NULLs as known values when used as a comparison operator, they always return true or false even if one or both expressions are NULL.

The following table shows the difference in behavior between the equals sign (=) and IS [NOT] DISTINCT FROM:

| A | B | A = B | A IS NOT DISTINCT FROM B |
|---|---|-------|--------------------------|
| 0 | 0 | true | true |
| 0 | 1 | false | false |
| 0 | null | unknown | false |
| null | null | unknown | true |

#14976

# Functions for evaluating equalities

New SQL and native query functions allow you to evaluate whether a condition is true or false. These functions are different from x == true and x != true in that they never return null even when the variable is null.

| SQL function | Native function |
|--------------|-----------------|
| IS_TRUE | istrue() |
| IS_FALSE | isfalse() |
| IS_NOT_TRUE | nottrue() |
| IS_NOT_FALSE | notfalse() |

#14977

# Function to decode Base64-encoded strings

The new SQL and native query function, decode_base64_utf8 decodes a Base64-encoded string and returns the UTF-8-encoded string. For example, decode_base64_utf8('aGVsbG8=').

#14943

# Improved subquery guardrail

You can now set the maxSubqueryBytes guardrail to one of the following:

  • disabled: Default setting. Druid doesn't apply the guardrail around the number of bytes a subquery can generate.

  • auto: Druid calculates the amount of memory to use for the materialization of results as a portion of the fixed memory of the heap.
    In the query context, Druid uses the following formula to determine the upper limit on the number of bytes a subquery can generate:

    ((total JVM space - memory occupied by lookups) * 0.5) / maximum queries that the system can handle concurrently
    
  • INTEGER: The number of bytes to use for materializing subquery results. Set a specific value if you understand the query patterns and want to optimize memory usage.
    For example, set the maxSubqueryBytes parameter to 300000000 (300 * 1000 * 1000) for a 300 MB limit.
    Set the maxSubqueryBytes parameter to 314572800 (300 * 1024 * 1024) for a 300 MiB limit.

#14808

# Other query improvements

  • Added filters to the set of filters that work with UNNEST filter rewrite and pushdown #14777
  • Enabled whole-query caching on the Broker for groupBy v2 queries #11595
  • Improved performance of EARLIEST aggregator with vectorization #14408

# Cluster management

# Unused segments

Druid now stops loading and moving segments as soon as they are marked as unused. This prevents Historical processes from spending time on superfluous loads of segments that will be unloaded later. You can mark segments as unused by a drop rule, overshadowing, or by calling the Data management API.

#14644

# Encrypt data in transit

The net.spy.memcached client has been replaced with the AWS ElastiCache client. This change allows Druid to encrypt data in transit using TLS.

Configure it with the following properties:

| Property | Description | Default |
|----------|-------------|---------|
| druid.cache.enableTls | Enable TLS based connection for the Memcached client. Boolean. | false |
| druid.cache.clientMode | Client mode. Static mode requires the user to specify individual cluster nodes. Dynamic mode uses the AutoDiscovery feature of AWS Memcached. String, "static" or "dynamic". | static |
| druid.cache.skipTlsHostnameVerification | Skip TLS hostname verification. Boolean. | true |

#14827
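
A minimal sketch of the relevant cache properties (druid.cache.type and druid.cache.hosts are pre-existing Memcached cache settings; the host value is illustrative):

druid.cache.type=memcached
druid.cache.hosts=my-elasticache-host:11211
druid.cache.enableTls=true
druid.cache.clientMode=dynamic
druid.cache.skipTlsHostnameVerification=false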

# New metadata in the Druid segments table

The Druid segments table now has a column called used_flag_last_updated (VARCHAR (255)). This column is a UTC date string corresponding to the last time that the used column was modified.

Note that this is an incompatible change to the table. For upgrade information, see Upgrade Druid segments table.

#12599

# Other cluster management improvements

  • You can now use multiple console appenders in Peon logging #14521
  • Thread names of the processing pool for Indexer, Peon, and Historical processes now include the query ID #15059
  • The value for replicationThrottleLimit used for smart segment loading has been increased from 2% to 5% of the total number of used segments. The total number of replicas in the load queue at the start of a run plus the replicas assigned in a run is kept less than or equal to the throttle limit #14913
  • The default value for balancerComputeThreads is now calculated based on the number of CPUs divided by 2. Previously, the value was 1. Smart segment loading uses this computed value #14902
  • Improved InvalidNullByteFault errors. They now include the output column name instead of the query column name for ease of use #14780
  • Improved the exception message when DruidLeaderClient doesn't find leader node #14775
  • Reduced Coordinator logging under normal operation #14926
  • Heap usage is now more predictable at very minor performance cost when using nested values #14919
  • Middle Manager-less ingestion:
    • The sys.tasks metadata table and web console now show the Kubernetes pod name rather than the peon location when using Middle Manager-less ingestion #14959
    • Added support for Middle Manager-less ingestion to migrate with zero downtime to and from WorkerTaskRunners that use Middle Managers #14918
  • Druid extensions cannot bind custom Coordinator duties to the duty groups IndexingServiceDuties and MetadataStoreManagementDuties anymore. These are meant to be core coordinator built-in flows and should not be affected by custom duties. Users can still define a CustomCoordinatorDuty with a custom duty group and period #14891
  • Druid now adjusts balancerComputeThreads and maxSegmentsToMove automatically based on usage skew between the Historical processes in a tier #14584
  • Removed the configurable property druid.coordinator.compaction.skipLockedIntervals because it should always be true #14807
  • Updated mm-less task runner lifecycle logic to better match the logic in the HTTP and ZooKeeper worker task runners #14895

# Data management

# Alert message for segment assignments

Improved alert message for segment assignments when an invalid tier is specified in a load rule or when no rule applies on a segment.

#14696

# Coordinator API for unused segments

Added includeUnused as an optional parameter to the Coordinator API.
You can send a GET request to /druid/coordinator/v1/metadata/datasources/{dataSourceName}/segments/{segmentId}?includeUnused=true to retrieve the metadata for a specific segment as stored in the metadata store.

The API also returns unused segments if the includeUnused parameter is set.

#14846

# Kill task improvements

  • Added killTaskSlotRatio and maxKillTaskSlots dynamic configuration properties to allow control of task resource usage spawned by the KillUnusedSegments coordinator task #14769
  • The value for druid.coordinator.kill.period can now be greater than or equal to druid.coordinator.period.indexingPeriod. Previously, it had to be greater than druid.coordinator.period.indexingPeriod. Additionally, the leader Coordinator now keeps track of the last submitted kill task for a datasource to avoid submitting duplicate kill tasks #14831
  • Added a new config druid.coordinator.kill.bufferPeriod for a buffer period. This config defines the amount of time that a segment is unused before KillUnusedSegment can kill it. Using the default PT24H, if you mark a segment as unused at 2022-06-01T00:05:00.000Z, then the segment cannot be killed until at or after 2022-06-02T00:05:00.000Z #12599
  • You can now specify the following parameters for a kill task:
    • batchSize: The maximum number of segments to delete in one kill batch #14642
    • limit: The maximum number of segments for a kill task to delete #14662
  • You can now speed up kill tasks by batch deleting multiple segments stored in S3 #14131
  • Kill tasks that delete unused segments now publish a task report containing kill stats such as numSegmentsKilled, numBatchesProcessed, and numSegmentsMarkedAsUnused #15023
  • IndexerSQLMetadataStorageCoordinator now uses the JDBI PreparedBatch instead of issuing single update statements inside a transaction to mitigate scaling challenges #14639

# Metrics and monitoring

# New ingestion metrics

| Metric | Description | Dimensions | Normal value |
|--------|-------------|------------|--------------|
| ingest/input/bytes | Number of bytes read from input sources, after decompression but prior to parsing. This covers all data read, including data that does not end up being fully processed and ingested. For example, this includes data that ends up being rejected for being unparseable or filtered out. | dataSource, taskId, taskType, groupId, tags | Depends on the amount of data read. |

#14582

# New query metrics

| Metric | Description | Dimensions | Normal value |
|---|---|---|---|
| mergeBuffer/pendingRequests | Number of requests waiting to acquire a batch of buffers from the merge buffer pool. This metric is exposed through the QueryCountStatsMonitor module for the Broker. | | |

#15025

# New ZooKeeper metrics

| Metric | Description | Dimensions | Normal value |
|---|---|---|---|
| zk/connected | Indicator of connection status. 1 for connected, 0 for disconnected. Emitted once per monitor period. | None | 1 |
| zk/reconnect/time | Amount of time, in milliseconds, that a server was disconnected from ZooKeeper before reconnecting. Emitted on reconnection. Not emitted if connection to ZooKeeper is permanently lost, because in this case, there is no reconnection. | None | Not present |

#14333

# New subquery metrics for the Broker

The new SubqueryCountStatsMonitor emits metrics corresponding to the subqueries and their execution.

| Metric | Description | Dimensions | Normal value |
|---|---|---|---|
| subquery/rowLimit/count | Number of subqueries whose results are materialized as rows (Java objects on heap). This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| subquery/byteLimit/count | Number of subqueries whose results are materialized as frames (Druid's internal byte representation of rows). This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| subquery/fallback/count | Number of subqueries which cannot be materialized as frames. This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| subquery/fallback/insufficientType/count | Number of subqueries which cannot be materialized as frames due to insufficient type information in the row signature. This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| subquery/fallback/unknownReason/count | Number of subqueries which cannot be materialized as frames due to other reasons. This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| query/rowLimit/exceeded/count | Number of queries whose inlined subquery results exceeded the given row limit. This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| query/byteLimit/exceeded/count | Number of queries whose inlined subquery results exceeded the given byte limit. This metric is only available if the SubqueryCountStatsMonitor module is included. | | |

#14808

# New Coordinator metrics

| Metric | Description | Dimensions | Normal value |
|---|---|---|---|
| killTask/availableSlot/count | Number of available task slots that can be used for auto kill tasks in the auto kill run. This is the max number of task slots minus any currently running auto kill tasks. | | Varies |
| killTask/maxSlot/count | Maximum number of task slots available for auto kill tasks in the auto kill run. | | Varies |
| kill/task/count | Number of tasks issued in the auto kill run. | | Varies |
| kill/pendingSegments/count | Number of stale pending segments deleted from the metadata store. | dataSource | Varies |

#14782
#14951

# New compaction metrics

| Metric | Description | Dimensions | Normal value |
|---|---|---|---|
| compact/segmentAnalyzer/fetchAndProcessMillis | Time taken to fetch and process segments to infer the schema for the compaction task to run. | dataSource, taskId, taskType, groupId, tags | Varies. A high value indicates compaction tasks will speed up from explicitly setting the data schema. |

#14752

# Segment scan metrics

Added a new metric to track the usage of druid.processing.numThreads on Historicals, Indexers, and Peons.

| Metric | Description | Dimensions | Normal value |
|---|---|---|---|
| segment/scan/active | Number of segments currently scanned. This metric also indicates how many threads from druid.processing.numThreads are currently being used. | | Close to druid.processing.numThreads |

# New Kafka consumer metrics

Added the following Kafka consumer metrics:

  • kafka/consumer/bytesConsumed: Equivalent to the Kafka consumer metric bytes-consumed-total. Only emitted for Kafka tasks.
  • kafka/consumer/recordsConsumed: Equivalent to the Kafka consumer metric records-consumed-total. Only emitted for Kafka tasks.

#14582

# service/heartbeat metrics

  • Exposed service/heartbeat metric to statsd-reporter #14564
  • Modified the service/heartbeat metric to expose the leader dimension #14593

# Tombstone and segment counts

Added ingest/tombstones/count and ingest/segments/count metrics in MSQ to report the number of tombstones and segments after Druid finishes publishing segments.

#14980

# Extensions

# Ingestion task payloads for Kubernetes

You can now provide compressed task payloads larger than 128 KB when you run MiddleManager-less ingestion jobs.

#14887

# Prometheus emitter

The Prometheus emitter now supports a new optional configuration parameter, druid.emitter.prometheus.extraLabels.
This addition offers the flexibility to add arbitrary extra labels to Prometheus metrics, providing more granular control in managing and identifying data across multiple Druid clusters or other dimensions.
For more information, see Prometheus emitter extension.
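
For example, the property takes a set of extra labels to attach to emitted metrics; the label name and value here are illustrative, and the exact value format is described in the extension documentation:

druid.emitter.prometheus.extraLabels={"cluster_name": "druid-prod"}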

#14728

# Documentation improvements

We've moved Jupyter notebooks that guide you through query, ingestion, and data management with Apache Druid to the new Learn Druid repository.
The repository also contains a Docker Compose file to get you up and running with a learning lab.

#15136

# Upgrade notes and incompatible changes

# Upgrade notes

# Upgrade Druid segments table

Druid 28.0.0 adds a new column to the Druid metadata table that requires an update to the table.

If druid.metadata.storage.connector.createTables is set to true and the metadata store user has DDL privileges, the segments table gets automatically updated at startup to include the new used_flag_last_updated column. No additional work is needed for the upgrade.

If either of those requirements is not met, pre-upgrade steps are required. You must make these updates before you upgrade to Druid 28.0.0, or the Coordinator and Overlord processes will fail.

Although you can manually alter your table to add the new used_flag_last_updated column, Druid also provides a CLI tool to do it.

#12599

In the example commands below:

  • lib is the Druid lib directory
  • extensions is the Druid extensions directory
  • base corresponds to the value of druid.metadata.storage.tables.base in the configuration, druid by default.
  • The --connectURI parameter corresponds to the value of druid.metadata.storage.connector.connectURI.
  • The --user parameter corresponds to the value of druid.metadata.storage.connector.user.
  • The --password parameter corresponds to the value of druid.metadata.storage.connector.password.
  • The --action parameter corresponds to the update action you are executing. In this case, it is add-used-flag-last-updated-to-segments.

# Upgrade step for MySQL

cd ${DRUID_ROOT}
java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[\"mysql-metadata-storage\"] -Ddruid.metadata.storage.type=mysql org.apache.druid.cli.Main tools metadata-update --connectURI="<mysql-uri>" --user USER --password PASSWORD --base druid --action add-used-flag-last-updated-to-segments

# Upgrade step for PostgreSQL

cd ${DRUID_ROOT}
java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[\"postgresql-metadata-storage\"] -Ddruid.metadata.storage.type=postgresql org.apache.druid.cli.Main tools metadata-update --connectURI="<postgresql-uri>" --user USER --password PASSWORD --base druid --action add-used-flag-last-updated-to-segments

# Manual upgrade step

ALTER TABLE druid_segments
ADD used_flag_last_updated varchar(255);

# Recommended syntax for SQL UNNEST

The recommended syntax for SQL UNNEST has changed. We recommend using CROSS JOIN instead of commas for most queries to prevent issues with precedence. For example, use:

SELECT column_alias_name1 FROM datasource CROSS JOIN UNNEST(source_expression1) AS table_alias_name1(column_alias_name1) CROSS JOIN UNNEST(source_expression2) AS table_alias_name2(column_alias_name2), ...

Do not use:

SELECT column_alias_name FROM datasource, UNNEST(source_expression1) AS table_alias_name1(column_alias_name1), UNNEST(source_expression2) AS table_alias_name2(column_alias_name2), ...

# Dynamic parameters

The Apache Calcite version has been upgraded from 1.21 to 1.35. As part of the Calcite upgrade, the behavior of type inference for dynamic parameters has changed. To avoid any type inference issues, explicitly CAST all dynamic parameters as a specific data type in SQL queries. For example, use:

SELECT (1 * CAST (? as DOUBLE))/2 as tmp

Do not use:

SELECT (1 * ?)/2 as tmp

# Nested column format

json type columns created with Druid 28.0.0 are not backwards compatible with Druid versions older than 26.0.0.
If you are upgrading from a version prior to Druid 26.0.0 and you use json columns, upgrade to Druid 26.0.0 before you upgrade to Druid 28.0.0.
Additionally, to downgrade to a version older than Druid 26.0.0, any new segments created in Druid 28.0.0 should be re-ingested using Druid 26.0.0 or 27.0.0 prior to further downgrading.

When upgrading from a previous version, you can continue to write nested columns in a backwards compatible format (version 4).

In a classic batch ingestion job, include formatVersion in the dimensions list of the dimensionsSpec property. For example:

      "dimensionsSpec": {
        "dimensions": [
          "product",
          "department",
          {
            "type": "json",
            "name": "shipTo",
            "formatVersion": 4
          }
        ]
      },

To set the default nested column version, set the desired format version in the common runtime properties. For example:

druid.indexing.formats.nestedColumnFormatVersion=4

# SQL compatibility

Starting with Druid 28.0.0, the default way Druid treats nulls and booleans has changed.

For nulls, Druid now differentiates between an empty string and a record with no data as well as between an empty numerical record and 0.
You can revert to the previous behavior by setting druid.generic.useDefaultValueForNull to true.

This property affects both storage and querying, and must be set on all Druid service types to be available at both ingestion time and query time. Reverting this setting to the old value restores the previous behavior without reingestion.

For booleans, Druid now strictly uses 1 (true) or 0 (false). Previously, booleans could be represented either as true and false or as 1 and 0, respectively. In addition, Druid now returns a null value for boolean comparisons like True && NULL.

You can revert to the previous behavior by setting druid.expressions.useStrictBooleans to false.
This property affects both storage and querying, and must be set on all Druid service types to be available at both ingestion time and query time. Reverting this setting to the old value restores the previous behavior without reingestion.

The following table illustrates some example scenarios and the impact of the changes.

| Query | Druid 27.0.0 and earlier | Druid 28.0.0 and later |
|---|---|---|
| Query empty string | Empty string ('') or null | Empty string ('') |
| Query null string | Null or empty | Null |
| COUNT(*) | All rows, including nulls | All rows, including nulls |
| COUNT(column) | All rows excluding empty strings | All rows including empty strings but excluding nulls |
| Expression 100 && 11 | 11 | 1 |
| Expression 100 \|\| 11 | 100 | 1 |
| Null FLOAT/DOUBLE column | 0.0 | Null |
| Null LONG column | 0 | Null |
| Null __time column | 0, meaning 1970-01-01 00:00:00 UTC | 1970-01-01 00:00:00 UTC |
| Null MVD column | '' | Null |
| ARRAY | Null | Null |
| COMPLEX | none | Null |

Before upgrading to Druid 28.0.0, update your queries to account for the changed behavior as described in the following sections.

# NULL filters

If your queries use NULL in the filter condition to match both nulls and empty strings, you should add an explicit filter clause for empty strings. For example, update s IS NULL to s IS NULL OR s = ''.

# COUNT functions

COUNT(column) now counts empty strings. If you want to continue excluding empty strings from the count, replace COUNT(column) with COUNT(column) FILTER(WHERE column <> '').

# GroupBy queries

GroupBy queries on columns containing null values can now have additional entries as nulls can co-exist with empty strings.

# Stop Supervisors that ingest from multiple Kafka topics before downgrading

If you have added supervisors that ingest from multiple Kafka topics in Druid 28.0.0 or later, stop those supervisors before downgrading, because they will fail in versions prior to Druid 28.0.0.

# lenientAggregatorMerge deprecated

The lenientAggregatorMerge property in segment metadata queries has been deprecated and will be removed in future releases.
Use aggregatorMergeStrategy instead. In addition to the strict and lenient strategies from lenientAggregatorMerge, aggregatorMergeStrategy also supports the latest and earliest strategies.

#14560
#14598

# Broker parallel merge config options

The paths for druid.processing.merge.pool.* and druid.processing.merge.task.* have been flattened to use druid.processing.merge.* instead. The legacy paths for the configs are now deprecated and will be removed in a future release. Migrate your settings to use the new paths because the old paths will be ignored in the future.

#14695

# Ingestion options for ARRAY typed columns

Starting with Druid 28.0.0, the MSQ task engine can detect and ingest arrays as ARRAY typed columns when you set the query context parameter arrayIngestMode to array.
The arrayIngestMode context parameter controls how ARRAY type values are stored in Druid segments.

When you set arrayIngestMode to array (recommended for SQL compliance), the MSQ task engine stores all ARRAY typed values in ARRAY typed columns and supports storing both VARCHAR and numeric typed arrays.

For backwards compatibility, arrayIngestMode defaults to mvd. When "arrayIngestMode":"mvd", Druid only supports VARCHAR typed arrays and stores them as multi-value string columns.

When you set arrayIngestMode to none, Druid throws an exception when trying to store any type of arrays.
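
For example, a minimal query context sketch that opts in to ARRAY typed storage (any other context parameters are omitted):

{
  "arrayIngestMode": "array"
}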

For more information on how to ingest ARRAY typed columns with SQL-based ingestion, see SQL data types and Array columns.

# Incompatible changes

# Removed Hadoop 2

Support for Hadoop 2 has been removed.
Migrate to SQL-based ingestion or JSON-based batch ingestion if you are using Hadoop 2.x for ingestion today.
If migrating to Druid's built-in ingestion is not possible, you must upgrade your Hadoop infrastructure to 3.x+ before upgrading to Druid 28.0.0.

#14763

# Removed GroupBy v1

The GroupBy v1 engine has been removed. Use the GroupBy v2 engine instead, which has been the default GroupBy engine for several releases.
There should be no impact on your queries.

Additionally, AggregatorFactory.getRequiredColumns has been deprecated and will be removed in a future release. If you have an extension that implements AggregatorFactory, then this method should be removed from your implementation.

#14866

# Removed Coordinator dynamic configs

The decommissioningMaxPercentOfMaxSegmentsToMove config has been removed.
The use case for this config is handled by smart segment loading now, which is enabled by default.

#14923

# Removed cachingCost strategy

The cachingCost strategy for segment loading has been removed.
Use cost instead, which has the same benefits as cachingCost.

If you have cachingCost set, the system ignores this setting and automatically uses cost.

#14798

# Removed InsertCannotOrderByDescending

The deprecated MSQ fault InsertCannotOrderByDescending has been removed.

#14588

# Removed the backward compatibility code for the Handoff API

The backward compatibility code for the Handoff API in CoordinatorBasedSegmentHandoffNotifier has been removed.
If you are upgrading from a Druid version older than 0.14.0, upgrade to a newer version of Druid before upgrading to Druid 28.0.0.

#14652

# Developer notes

# Dependency updates

The following dependencies have had their versions bumped:

  • Guava to 31.1-jre. If you use an extension that has a transitive Guava dependency from Druid, it may be impacted #14767
  • Google Client APIs have been upgraded from 1.26.0 to 2.0.0 #14414
  • Apache Kafka has been upgraded to 3.5.1 #14721
  • Calcite has been upgraded to 1.35 #14510
  • RoaringBitmap has been upgraded from 0.9.0 to 0.9.49 #15006
  • qs has been upgraded to 6.5.3 #13510
  • api-util has been upgraded to 2.1.3 #14852
  • commons-cli has been upgraded from 1.3.1 to 1.5.0 #14837
  • tukaani:xz has been upgraded from 1.8 to 1.9 #14839
  • commons-compress has been upgraded from 1.21 to 1.23.0 #14820
  • protobuf.version has been upgraded from 3.21.7 to 3.24.0 #14823
  • dropwizard.metrics.version has been upgraded from 4.0.0 to 4.2.19 #14824
  • assertj-core has been upgraded from 3.19.0 to 3.24.2 #14815
  • maven-source-plugin has been upgraded from 2.2.1 to 3.3.0 #14812
  • scala-library has been upgraded from 2.13.9 to 2.13.11 #14826
  • oshi-core has been upgraded from 6.4.2 to 6.4.4 #14814
  • maven-surefire-plugin has been upgraded from 3.0.0-M7 to 3.1.2 #14813
  • apache-rat-plugin has been upgraded from 0.12 to 0.15 #14817
  • jclouds.version has been upgraded from 1.9.1 to 2.0.3 #14746
  • dropwizard.metrics:metrics-graphite has been upgraded from 3.1.2 to 4.2.19 #14842
  • postgresql has been upgraded from 42.4.1 to 42.6.0 #13959
  • org.mozilla:rhino has been upgraded #14765
  • apache.curator.version has been upgraded from 5.4.0 to 5.5.0 #14843
  • jackson-databind has been upgraded to 2.12.7 #14770
  • icu4j has been upgraded from 55.1 to 73.2 #14853
  • joda-time has been upgraded from 2.12.4 to 2.12.5 #14855
  • tough-cookie has been upgraded from 4.0.0 to 4.1.3 #14557
  • word-wrap has been upgraded from 1.2.3 to 1.2.4 #14613
  • decode-uri-component has been upgraded from 0.2.0 to 0.2.2 #13481
  • snappy-java has been upgraded from 1.1.10.1 to 1.1.10.3 #14641
  • Hibernate validator version has been upgraded #14757
  • The Dependabot PR limit for Java dependencies has been increased #14804
  • jetty has been upgraded from 9.4.51.v20230217 to 9.4.53.v20231009 #15129
  • netty4 has been upgraded from 4.1.94.Final to 4.1.100.Final #15129

# Credits

@2bethere
@317brian
@a2l007
@abhishekagarwal87
@abhishekrb19
@adarshsanjeev
@aho135
@AlexanderSaydakov
@AmatyaAvadhanula
@asdf2014
@benkrug
@capistrant
@clintropolis
@cristian-popa
@cryptoe
@demo-kratia
@dependabot[bot]
@ektravel
@findingrish
@gargvishesh
@georgew5656
@gianm
@giuliotal
@hardikbajaj
@hqx871
@imply-cheddar
@Jaehui-Lee
@jasonk000
@jon-wei
@kaisun2000
@kfaraz
@kgyrtkirk
@LakshSingla
@lorem--ipsum
@maytasm
@pagrawal10
@panhongan
@petermarshallio
@pranavbhole
@rash67
@rohangarg
@SamWheating
@sergioferragut
@slfan1989
@somu-imply
@suneet-s
@techdocsmith
@tejaswini-imply
@TSFenwick
@vogievetsky
@vtlim
@writer-jill
@xvrl
@yianni
@YongGang
@yuanlihan
@zachjsh

druid - Druid 27.0.0

Published by AmatyaAvadhanula about 1 year ago

Apache Druid 27.0.0 contains over 316 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 50 contributors.

See the complete set of changes for additional details, including bug fixes.

Review the upgrade notes and incompatible changes before you upgrade to Druid 27.0.0.

# Highlights

# New Explore view in the web console (experimental)

The Explore view is a simple, stateless, SQL-backed data exploration view in the web console. It lets users explore data in Druid with point-and-click interaction and visualizations (instead of writing SQL and looking at a table). This can provide faster time-to-value for a user new to Druid and can allow a Druid veteran to quickly chart some data that they care about.

The Explore view is accessible from the More (...) menu in the header.

#14540

# Query from deep storage (experimental)

Druid now supports querying segments that are stored only in deep storage. When you query from deep storage, you can query more data without necessarily having to scale your Historical processes to accommodate it. To take advantage of the potential storage savings, make sure you configure your load rules to not load all your segments onto Historical processes.

Note that at least one segment of a datasource must be loaded onto a Historical process so that the Broker can plan the query. It can be any segment though.

For more information, see the following:

#14416 #14512 #14527

# Schema auto-discovery and array column types

Type-aware schema auto-discovery is now generally available. Druid can determine the schema for the data you ingest rather than you having to manually define the schema.

As part of the type-aware schema discovery improvements, array column types are now generally available. Druid can determine the column types for your schema and assign them to these array column types when you ingest data using type-aware schema auto-discovery with the auto column type.

For more information about this feature, see the Druid documentation.

# Smart segment loading

The Coordinator is now much more stable and user-friendly. In the new smartSegmentLoading mode, it dynamically computes values for several configs which maximize performance.

The Coordinator can now prioritize load of more recent segments and segments that are completely unavailable over load of segments that already have some replicas loaded in the cluster. It can also re-evaluate decisions taken in previous runs and cancel operations that are not needed anymore. Moreover, move operations started by segment balancing do not compete with the load of unavailable segments, thus reducing the reaction time for changes in the cluster and speeding up segment assignment decisions.

Additionally, leadership changes have less impact now, and the Coordinator doesn't get stuck even if re-election happens while a Coordinator run is in progress.

Lastly, the cost balancer strategy performs much better now and is capable of moving more segments in a single Coordinator run. These improvements were made by borrowing ideas from the cachingCost strategy. We recommend using cost instead of cachingCost since cachingCost is now deprecated.

For more information, see the following:

#13197 #14385 #14484

# New query filters

Druid now supports the following filters:

  • Equality: Use in place of the selector filter. It never matches null values.
  • Null: Match null values. Use in place of the selector filter.
  • Range: Filter on ranges of dimension values. Use in place of the bound filter. It never matches null values

Note that Druid's SQL planner uses these new filters in place of their older counterparts by default whenever druid.generic.useDefaultValueForNull=false or if sqlUseBoundAndSelectors is set to false on the SQL query context.

Unlike the previous selector and bound filters, which only work on strings, you can use these new filters for equality and range filtering on ARRAY columns.

For more information, see Query filters.

#14542

# Guardrail for subquery results

Users can now add a guardrail to prevent a subquery's results from exceeding a set number of bytes by setting druid.server.http.maxSubqueryBytes in the Broker's config or maxSubqueryBytes in the query context. This guardrail is recommended over row-based limiting.

This feature is experimental for now and defaults back to row-based limiting in case it fails to get the accurate size of the results consumed by the query.

#13952

# Added a new OSHI system monitor

Added a new OSHI system monitor (OshiSysMonitor) to replace SysMonitor. The new monitor has wider support for different machine architectures, including ARM instances. We recommend switching to the new monitor. SysMonitor is now deprecated and will be removed in future releases.

#14359

# Java 17 support

Druid now fully supports Java 17.

#14384

# Hadoop 2 deprecated

Support for Hadoop 2 is now deprecated. It will be removed in a future release.

For more information, see the upgrade notes.

# Additional features and improvements

# SQL-based ingestion

# Improved query planning behavior

Druid now fails query planning if a CLUSTERED BY column contains descending order.
Previously, queries would successfully plan if any CLUSTERED BY columns contained descending order.

The MSQ fault, InsertCannotOrderByDescending, is deprecated. An INSERT or REPLACE query containing a CLUSTERED BY expression cannot be in descending order. Druid's segment generation code only supports ascending order. Instead of the fault, Druid now throws a query ValidationException.

#14436 #14370

# Improved segment sizes

The default clusterStatisticsMergeMode is now SEQUENTIAL, which provides more accurate segment sizes.

#14310

# Other SQL-based ingestion improvements

  • The same aggregator can now have two output names #14367
  • Enabled using functions as inputs for index and length parameters #14480
  • Improved parse exceptions #14398

# Ingestion

# Ingestion improvements

  • If the Overlord fails to insert a task into the metadata because of a payload that exceeds the max_allowed_packet limit, the response now returns 400 Bad request. This prevents an index_parallel task from retrying the insertion of a bad sub-task indefinitely and causes it to fail immediately. #14271
  • A negative streaming ingestion lag is no longer emitted as a result of stale offsets. #14292
  • Removed double synchronization on simple map operations in Kubernetes task runner. #14435
  • Kubernetes overlord extension now cleans up the job if the task pod fails to come up in time. #14425

# MSQ task engine querying

In addition to the new query from deep storage feature, SELECT queries using the MSQ task engine have been improved.

# Support for querying lookup and inline data directly

You can now query lookup tables directly, such as SELECT * FROM lookup.xyz, when using the MSQ task engine.

#14048

# Truncated query results

SELECT queries executed using MSQ generate only a subset of the results in the query reports.
To fetch the complete result set, run the query using the native engine.

#14370

# New context parameter for query results

Added a query context parameter MultiStageQueryContext to determine whether the result of an MSQ SELECT query is limited.

#14476

# Query results directory

Druid now supports a query-results directory in durable storage to store query results after the task finishes. The auto cleaner removes this directory only when the task ID is no longer known to the Overlord.

#14446

# Querying

# New function for regular expression replacement

The new function REGEXP_REPLACE allows you to replace all instances of a pattern with a replacement string.

#14460

# HLL and Theta sketch estimates

You can now use HLL_SKETCH_ESTIMATE and THETA_SKETCH_ESTIMATE as expressions. These estimates work on sketch columns and have the same behavior as postAggs.

#14312

# EARLIEST_BY and LATEST_BY signatures

Updated EARLIEST_BY and LATEST_BY function signatures as follows:

  • Changed EARLIEST(expr, timeColumn) to EARLIEST_BY(expr, timeColumn)
  • Changed LATEST(expr, timeColumn) to LATEST_BY(expr, timeColumn)

#14352

# INFORMATION_SCHEMA.ROUTINES TABLE

Use the new INFORMATION_SCHEMA.ROUTINES to programmatically get information about the functions that Druid SQL supports.

For more information, such as the available columns, see ROUTINES table.

#14378

# New Broker configuration for SQL schema migrations

You can now better control how Druid reacts to schema changes between segments. This can make querying more resilient when newer segments introduce different types, such as if a column previously contained LONG values and newer segments contain STRING.

Use the new Broker configuration, druid.sql.planner.metadataColumnTypeMergePolicy to control how column types are computed for the SQL table schema when faced with differences between segments.

Set it to one of the following:

  • leastRestrictive: the schema only updates once all segments are reindexed to the new type.
  • latestInterval: the SQL schema gets updated as soon as the first job with the new schema publishes segments in the latest time interval of the data.

leastRestrictive can have better query time behavior and eliminates some query time errors that can occur when using latestInterval.
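
For example, to opt in to the least restrictive merge behavior, set the following in the Broker runtime properties:

druid.sql.planner.metadataColumnTypeMergePolicy=leastRestrictive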

#14319

# EXPLAIN PLAN improvements

The EXPLAIN PLAN result includes a new column ATTRIBUTES that describes the attributes of a query.

For more information, see SQL translation

#14391

# Metrics and monitoring

# Monitor for Overlord and Coordinator service health

Added a new monitor ServiceStatusMonitor to monitor the service health of the Overlord and Coordinator.

#14443

# New Broker metrics

The following metrics are now available for Brokers:

| Metric | Description | Dimensions |
|---|---|---|
| segment/metadatacache/refresh/count | Number of segments to refresh in broker segment metadata cache. Emitted once per refresh per datasource. | dataSource |
| segment/metadatacache/refresh/time | Time taken to refresh segments in broker segment metadata cache. Emitted once per refresh per datasource. | dataSource |

#14453

# New Coordinator metrics

| Metric | Description | Dimensions | Normal value |
|---|---|---|---|
| segment/loadQueue/assigned | Number of segments assigned for load or drop to the load queue of a server. | dataSource, server | Varies |
| segment/loadQueue/success | Number of segment assignments that completed successfully. | dataSource, server | Varies |
| segment/loadQueue/cancelled | Number of segment assignments that were canceled before completion. | dataSource, server | 0 |
| segment/loadQueue/failed | Number of segment assignments that failed to complete. | dataSource, server | 0 |

#13197

# New metrics for task completion updates

| Metric | Description | Normal value |
|---|---|---|
| task/status/queue/count | Monitors the number of queued items | Varies |
| task/status/updated/count | Monitors the number of processed items | Varies |

#14533

# Added groupId to Overlord task metrics

Added groupId to task metrics emitted by the Overlord. This is helpful for grouping metrics like task/run/time by a single task group, such as a single compaction task or a single MSQ query.

#14402

# New metrics for monitoring sync status of HttpServerInventoryView

| Metric | Description | Dimensions | Normal value |
|---|---|---|---|
| serverview/sync/healthy | Sync status of the Coordinator/Broker with a segment-loading server such as a Historical or Peon. Emitted by the Coordinator and Brokers only when HTTP-based server view is enabled. This metric can be used in conjunction with serverview/sync/unstableTime to debug slow startup of the Coordinator. | server, tier | 1 for fully synced servers, 0 otherwise |
| serverview/sync/unstableTime | Time in milliseconds for which the Coordinator/Broker has been failing to sync with a segment-loading server. Emitted by the Coordinator and Brokers only when HTTP-based server view is enabled. | server, tier | Not emitted for synced servers. |

# Cluster management

# New property for task completion updates

The new property druid.indexer.queue.taskCompleteHandlerNumThreads controls the number of threads used by the Overlord TaskQueue to handle task completion updates received from the workers.
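
For example, to allocate more threads for completion updates on a busy cluster, you might set the following (the value here is illustrative):

druid.indexer.queue.taskCompleteHandlerNumThreads=8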

For the related metrics, see new metrics for task completion updates.

#14533

# Enabled empty tiered replicants for load rules

Druid now allows empty tiered replicants in load rules. Use this feature along with query from deep storage to increase the amount of data you can query without needing to scale your Historical processes.

#14432

# Stabilized initialization of HttpServerInventoryView

The initialization of HttpServerInventoryView maintained by Brokers and Coordinator is now resilient to Historicals and Peons crashing. The crashed servers are marked as stopped and not waited upon during the initialization.

New metrics are available to monitor the sync status of HttpServerInventoryView with different servers.

#14517

# Web console

# Console now uses the new statements API for all MSQ interaction

The console uses the new async statements API for all sql-msq-task engine queries.
While this has relatively little impact on the UX of the query view, you are invited to peek under the hood and check out the new network requests being sent as working examples of the new API.

You can now specify durableStorage as the result destination for SELECT queries (when durable storage is configured):

[Image: Choose to write the results for SELECT queries to durable storage]

After running a SELECT query that wrote its results to durableStorage, you can download the full, unlimited result set directly from the Broker.

#14540 #14669 #14712

# Added UI around data source with zero replica segments

This release of Druid supports having datasources with segments that are not replicated on any Historicals. These datasources appear in the web console.

#14540

# Added a dialog for viewing and setting the dynamic compaction config

There's now a dialog for managing your dynamic compaction config.

# Other web console improvements

  • Replaced the Ingestion view with two views: Supervisors and Tasks. #14395
  • Added a new virtual column replication_factor to the sys.segments table. This returns the total number of replicants of the segment across all tiers. The column is set to -1 if the information is not available. #14403
  • Added stateful filter URLs for all views. #14395

# Extensions

# Improved segment metadata for Kafka emitter extension

The Kafka emitter extension has been improved. You can now publish events related to segments and their metadata to Kafka.
You can set the new properties as shown in the following example:

druid.emitter.kafka.event.types=["metrics", "alerts", "segment_metadata"]
druid.emitter.kafka.segmentMetadata.topic=foo

#14281

# Contributor extensions

# Apache® Iceberg integration

You can now ingest data stored in Iceberg and query that data directly by querying from deep storage. Support for Iceberg is available through the new community extension.

For more information, see Iceberg extension.

#14329

# Dependency updates

The following dependencies have had their versions bumped:

  • Apache DataSketches has been upgraded to 4.1.0. Additionally, the datasketches-memory component has been upgraded to version 2.2.0. #14430
  • Hadoop has been upgraded to version 3.3.6. #14489
  • Avro has been upgraded to version 1.11.1. #14440

# Developer notes

Introduced a new unified exception, DruidException, for surfacing errors. It is partially compatible with the old way of reporting error messages. Response codes remain the same, and all fields that previously existed on the response continue to exist and be populated, including errorMessage. Some error messages have changed to be more consumable by humans, and some cases have the message restructured. There should be no impact to the response codes.

org.apache.druid.common.exception.DruidException is deprecated in favor of the more comprehensive org.apache.druid.error.DruidException.

org.apache.druid.metadata.EntryExistsException is deprecated and will be removed in a future release.

#14004 #14554

# Upgrade notes and incompatible changes

# Upgrade notes

# Worker input bytes for SQL-based ingestion

The maximum input bytes for each worker for SQL-based ingestion is now 512 MiB (previously 10 GiB).

#14307

# Parameter execution changes for Kafka

When using the built-in FileConfigProvider for Kafka, interpolations are now intercepted by the JsonConfigurator instead of being passed down to the Kafka provider. This breaks existing deployments.

For more information, see KIP-297.

#13023

# Hadoop 2 deprecated

Many of the important dependent libraries that Druid uses no longer support Hadoop 2. In order for Druid to stay current and have pathways to mitigate security vulnerabilities, the community has decided to deprecate support for Hadoop 2.x releases starting this release. Starting with Druid 28.x, Hadoop 3.x is the only supported Hadoop version.

Consider migrating to SQL-based ingestion or native ingestion if you are using Hadoop 2.x for ingestion today. If migrating to Druid ingestion is not possible, plan to upgrade your Hadoop infrastructure before upgrading to the next Druid release.

# GroupBy v1 deprecated

GroupBy queries using the v1 legacy engine have been deprecated. The v1 engine will be removed in future releases. Use v2 instead. Note that v2 has been the default GroupBy engine.

For more information, see GroupBy queries.

# Push-based real-time ingestion deprecated

Support for push-based real-time ingestion has been deprecated. It will be removed in future releases.

# cachingCost segment balancing strategy deprecated

The cachingCost strategy has been deprecated and will be removed in future releases. Use an alternate segment balancing strategy instead, such as cost.

# Segment loading config changes

The following segment related configs are now deprecated and will be removed in future releases:

  • maxSegmentsInNodeLoadingQueue
  • maxSegmentsToMove
  • replicationThrottleLimit
  • useRoundRobinSegmentAssignment
  • replicantLifetime
  • maxNonPrimaryReplicantsToLoad
  • decommissioningMaxPercentOfMaxSegmentsToMove

Use smartSegmentLoading mode instead, which calculates values for these variables automatically.

Additionally, the defaults for the following Coordinator dynamic configs have changed:

  • maxSegmentsInNodeLoadingQueue: 500, previously 100
  • maxSegmentsToMove: 100, previously 5
  • replicationThrottleLimit: 500, previously 10

These new defaults can improve performance for most use cases.

#13197
#14269

# SysMonitor support deprecated

Switch to OshiSysMonitor as SysMonitor is now deprecated and will be removed in future releases.

# Incompatible changes

# Removed property for setting max bytes for dimension lookup cache

druid.processing.columnCache.sizeBytes has been removed since it provided limited utility after a number of internal changes. Leaving this config is harmless, but it does nothing.

#14500

# Removed Coordinator dynamic configs

The following Coordinator dynamic configs have been removed:

  • emitBalancingStats: Stats for errors encountered while balancing will always be emitted. Other debugging stats will not be emitted but can be logged by setting the appropriate debugDimensions.
  • useBatchedSegmentSampler and percentOfSegmentsToConsiderPerMove: Batched segment sampling is now the standard and will always be on.

Use the new smart segment loading mode instead.

#14524

# Credits

Thanks to everyone who contributed to this release!
@317brian
@a2l007
@abhishek-chouhan
@abhishekagarwal87
@abhishekrb19
@adarshsanjeev
@AlexanderSaydakov
@amaechler
@AmatyaAvadhanula
@asdf2014
@churromorales
@clintropolis
@cryptoe
@demo-kratia
@ektravel
@findingrish
@georgew5656
@gianm
@hardikbajaj
@harinirajendran
@imply-cheddar
@jakubmatyszewski
@janjwerner-confluent
@jgoz
@jon-wei
@kfaraz
@knorth55
@LakshSingla
@maytasm
@nlippis
@panhongan
@paul-rogers
@petermarshallio
@pjain1
@PramodSSImmaneni
@pranavbhole
@robo220
@rohangarg
@sergioferragut
@skytin1004
@somu-imply
@suneet-s
@techdocsmith
@tejaswini-imply
@TSFenwick
@vogievetsky
@vtlim
@writer-jill
@YongGang
@zachjsh

druid - Druid 26.0.0

Published by clintropolis over 1 year ago

Apache Druid 26.0.0 contains over 390 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 65 contributors.

See the complete set of changes for additional details.

Review the upgrade notes and incompatible changes before you upgrade to Druid 26.0.0.

# Highlights

# Auto type column schema (experimental)

A new "auto" type column schema and indexer has been added to native ingestion as the next logical iteration of the nested column functionality. This automatic type column indexer that produces the most appropriate column for the given inputs, producing either STRING, ARRAY<STRING>, LONG, ARRAY<LONG>, DOUBLE, ARRAY<DOUBLE>, or COMPLEX<json> columns, all sharing a common 'nested' format.

All columns produced by 'auto' have indexes to aid in fast filtering (unlike classic LONG and DOUBLE columns) and use cardinality based thresholds to attempt to only utilize these indexes when it is likely to actually speed up the query (unlike classic STRING columns).

COMPLEX<json> columns produced by this 'auto' indexer store arrays of simple scalar types differently than their 'json' (v4) counterparts, storing them as ARRAY typed columns. This means that the JSON_VALUE function can now extract entire arrays, for example JSON_VALUE(nested, '$.array' RETURNING BIGINT ARRAY). There is no change with how arrays of complex objects are stored at this time.

This improvement also adds completely new functionality to Druid: ARRAY typed columns, which unlike classic multi-value STRING columns behave with ARRAY semantics. These columns can currently only be created via the 'auto' type indexer when all values are arrays with the same type of elements.

An array data type is a data type that allows you to store multiple values in a single column of a database table. Arrays are typically used to store sets of related data that can be easily accessed and manipulated as a group.

This release adds support for storing arrays of primitive values such as ARRAY<STRING>, ARRAY<LONG>, and ARRAY<DOUBLE> as specialized nested columns instead of breaking them into separate element columns.

#14014 #13803

These changes affect two additional new features available in 26.0: schema auto-discovery and unnest.

# Schema auto-discovery (experimental)

We’re adding schema-auto discovery with type inference to Druid. With this feature, the data type of each incoming field is detected when schema is available. For incoming data which may contain added, dropped, or changed fields, you can choose to reject the nonconforming data (“the database is always correct - rejecting bad data!”), or you can let schema auto-discovery alter the datasource to match the incoming data (“the data is always right - change the database!”).

Schema auto-discovery is recommended for new use cases and ingestions. For existing use cases, be careful switching to schema auto-discovery because Druid will ingest array-like values (e.g. ["tag1", "tag2"]) as ARRAY<STRING> type columns instead of multi-value (MV) strings, which could cause issues in downstream apps relying on MV behavior. Hold off switching until an official migration path is available.

To use this feature, set spec.dataSchema.dimensionsSpec.useSchemaDiscovery to true in your task or supervisor spec or, if using the data loader in the console, uncheck the Explicitly define schema toggle on the Configure schema step. Druid can infer the entire schema or some of it if you explicitly list dimensions in your dimensions list.
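
For example, a dimensionsSpec sketch that enables full schema auto-discovery; leaving the dimensions list empty lets Druid infer the entire schema:

"dimensionsSpec": {
  "useSchemaDiscovery": true,
  "dimensions": []
}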

Schema auto-discovery is available for native batch and streaming ingestion.

#13653 #13672 #14076

# UNNEST arrays (experimental)

Part of what’s cool about UNNEST is how it allows a wider range of operations that weren’t possible on Array data types. You can unnest arrays with either the UNNEST function (SQL) or the unnest datasource (native).

Unnest converts nested arrays or tables into individual rows. The UNNEST function is particularly useful when working with complex data types that contain nested arrays, such as JSON.

For example, suppose you have a table called "orders" with a column called "items" that contains an array of products for each order. You can use unnest to extract the individual products ("each_item") like in the following SQL example:

SELECT order_id, each_item FROM orders, UNNEST(items) as unnested(each_item)

This produces a result set with one row for each item in each order, with columns for the order ID and the individual item.

Note the comma after the left table/datasource (orders in the example). It is required.

#13268 #13943 #13934 #13922 #13892 #13576 #13554 #13085

# Sort-merge join and hash shuffle join for MSQ

We can now perform shuffle joins by setting the context parameter sqlJoinAlgorithm to sortMerge for the sort-merge algorithm, or omitting it to perform broadcast joins (the default).

Multi-stage queries can use a sort-merge join algorithm. With this algorithm, each pairwise join is planned into its own stage with two inputs. This approach is generally less performant but more scalable than broadcast.

Set the context parameter sqlJoinAlgorithm to sortMerge to use this method.
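
For example, a minimal query context sketch:

{
  "sqlJoinAlgorithm": "sortMerge"
}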

Broadcast hash joins are similar to how native join queries are executed.

#13506

# Storage improvements on dictionary compression

Switching to front-coded dictionary compression (experimental) can save up to 30% on storage with little to no impact on query performance.

This release further improves the frontCoded type of stringEncodingStrategy on indexSpec with a new segment format version, which typically has faster read speeds and reduced segment size. This improvement is backwards incompatible with Druid 25.0. Added a new formatVersion option, which defaults to the current version 0. Set formatVersion to 1 to start using the new version.

#13988 #13996

Additionally, overall storage size, particularly with using larger buckets, has been improved.

#13854

# Additional features and improvements

# MSQ task engine

# Array-valued parameters for SQL queries

Added support for array-valued parameters for SQL queries. You can now reuse the same SQL for every ingestion, only passing in a different set of input files as query parameters.

#13627

# EXTEND clause for the EXTERN functions

You can now use an EXTEND clause to provide a list of column definitions for your source data in standard SQL format.
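
For example, a sketch of an EXTERN call with an EXTEND column list; the input URI and column definitions here are illustrative:

SELECT *
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/data.json.gz"]}',
    '{"type": "json"}'
  )
) EXTEND ("timestamp" VARCHAR, "page" VARCHAR, "added" BIGINT)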

The web console now defaults to using the EXTEND clause syntax for all auto-generated queries. This means that SQL-based ingestion statements generated by the web console in Druid 26 (such as from the SQL based data loader) will not work in earlier versions of Druid.

#13627 #13985

# MSQ fault tolerance

Added the ability for the MSQ controller task to retry worker tasks in case of failures. To enable, pass faultTolerance:true in the query context.
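
For example, the query context entry might look like:

{
  "faultTolerance": true
}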

#13353

  • Connections to S3 for fault tolerance and durable shuffle storage are now more resilient. #13741

  • Improved S3 connector #13960

    • Added retries and max fetch size.
    • Implemented S3utils for interacting with APIs.

# Use tombstones when running REPLACE operations

REPLACE for SQL-based ingestion now generates tombstones instead of marking segments as unused.

If you downgrade Druid, you can only downgrade to a version that also supports tombstones.

#13706

# Better ingestion splits

The MSQ task engine now considers file size when determining splits. Previously, file size was ignored; all files were treated as equal weight when determining splits.

Also applies to native batch.

#13955

# Enabled composed storage for Supersorter intermediate data

Druid now supports composable storage for intermediate data. This allows the data to be stored on multiple storage systems through local disk and durable storage. Behavior is enabled when the runtime config druid.indexer.task.tmpStorageBytesPerTask is set and the query context parameter durableShuffleStorage is set to true.
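
As a sketch (the byte value is illustrative), set the runtime property on the task process and enable the query context flag:

druid.indexer.task.tmpStorageBytesPerTask=2000000000

and, in the query context:

{
  "durableShuffleStorage": true
}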

#13368 #14061

# Other MSQ improvements

  • Added a check to prevent the collector from downsampling the same bucket indefinitely. #13663
  • Druid now supports composable storage for SuperSorter intermediate data. This allows the data to be stored on multiple storage systems through fallbacks. #13368
  • When MSQ throws a NOT_ENOUGH_MEMORY_FAULT error, the error message now suggests a JVM Xmx setting to provide. #13846
  • Added a new fault "QueryRuntimeError" to the MSQ engine to capture native query errors. #13926
  • maxResultsSize has been removed from the S3OutputConfig and a default chunkSize of 100MiB is now present. This change primarily affects users who wish to use durable storage for MSQ jobs.

# Ingestion

# Indexing on multiple disks

You can now use multiple disks for indexing tasks. In the runtime properties for the MiddleManager/Indexer, use the following property to set the disks and directories:

  • druid.worker.baseTaskDirs=[\"PATH1\",\"PATH2\",...]

#13476 #14063

# Improved default fetch settings for Kinesis

Updated the following fetch settings for the Kinesis indexing service:

  • fetchThreads: Twice the number of processors available to the task.
  • fetchDelayMillis: 0 (no delay between fetches).
  • recordsPerFetch: 100 MB or an estimated 5% of available heap, whichever is smaller, divided by fetchThreads.
  • recordBufferSize: 100 MB or an estimated 10% of available heap, whichever is smaller.
  • maxRecordsPerPoll: 100 for regular records, 1 for aggregated records.

#13539

# Added fields in the sampler API response

The response from /druid/indexer/v1/sampler now includes the following:

  • logicalDimension: list of the most restrictive typed dimension schemas
  • physicalDimension: list of dimension schemas actually used to sample the data
  • logicalSegmentSchema: full resulting segment schema for the set of rows sampled

#13711

# Multi-dimensional range partitioning for Hadoop-based ingestion

Hadoop-based ingestion now supports multi-dimensional range partitioning. #13303

# Other ingestion improvements

  • Improved performance when ingesting JSON data. #13545
  • Added context map to HadoopIngestionSpec. You can set the context map directly in HadoopIngestionSpec using the command line (non-task) version or in the context map for HadoopIndexTask which is then automatically added to HadoopIngestionSpec. #13624

# Querying

Many of the querying improvements for Druid 26.0 are discussed in the highlights section. This section describes additional improvements to querying in Druid.

# New post aggregators for Tuple sketches

You can now do the following operations with Tuple sketches using post aggregators:

  • Get the sketch output as Base64 String.
  • Provide a constant Tuple sketch in a post aggregation step that can be used in set operations.
  • Estimate the sum of summary/metrics objects associated with Tuple sketches.

#13819

# Support for SQL functions on Tuple sketches

Added SQL functions for creating and operating on Tuple sketches.

#13887

# Improved nested column performance

Improve nested column performance by adding cardinality based thresholds for range and predicate indexes to choose to skip using bitmap indexes. #13977

# Improved logs for query errors

Logs for query errors now include more information about the exception that occurred, such as error code and class.

#13776

# Improve performance of SQL operators NVL and COALESCE

SQL operators NVL and COALESCE with 2 arguments now plan to a native NVL expression, which supports the vector engine. Multi-argument COALESCE still plans into a case_searched, which is not vectorized.

#13897

# Improved performance for composite key joins

Composite key joins are now faster.

#13516

# Other querying improvements

  • Improved exception logging of queries during planning. Previously, a class of QueryException would throw away the causes making it hard to determine what failed in the SQL planner. #13609
  • Added a function equivalent to Math.pow to support square, cube, and square root. #13704
  • Enabled merge-style operations that combine multiple streams. This means that query operators are now pausable. #13694
  • Various improvements to improve query performance and logic. #13902

# Metrics

# New server view metrics

The following metrics are now available for Brokers:

| Metric | Description | Normal value |
|---|---|---|
| init/serverview/time | Time taken to initialize the broker server view. Useful to detect if brokers are taking too long to start. | Depends on the number of segments. |
| init/metadatacache/time | Time taken to initialize the broker segment metadata cache. Useful to detect if brokers are taking too long to start. | Depends on the number of segments. |

The following metric is now available for Coordinators:

| Metric | Description | Normal value |
|---|---|---|
| init/serverview/time | Time taken to initialize the coordinator server view. | Depends on the number of segments. |

#13716

# Additional metadata for native ingestion metrics

You can now add additional metadata to the ingestion metrics emitted from the Druid cluster. Users can pass a map of metadata in the ingestion spec context parameters. These get added to the ingestion metrics. You can then tag these metrics with other metadata besides the existing tags like taskId. For more information, see General native ingestion metrics.

#13760

# Peon monitor override when using MiddleManager-less ingestion

You can now override druid.monitoring.monitors if you don't want to inherit monitors from the Overlord. Use the following property: druid.indexer.runner.peonMonitors.
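
For example, to have Peons emit only JVM metrics rather than inheriting the Overlord's monitors (the monitor class here is illustrative), you might set:

druid.indexer.runner.peonMonitors=["org.apache.druid.java.util.metrics.JvmMonitor"]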

#14028

# Cluster management

# Enabled round-robin segment assignment and batch segment allocation by default

Round-robin segment assignment greatly speeds up Coordinator run times and is hugely beneficial to all clusters. Batch segment allocation works extremely well when you have multiple concurrent real-time tasks for a single supervisor.

#13942

# Improved client change counter in HTTP Server View

The client change counter is now more efficient and resets in fewer situations.

#13010

# Enabled configuration of ZooKeeper connection retries

You can now override the default ZooKeeper connection retry count. In situations where the underlying k8s node loses network connectivity or is no longer able to talk to ZooKeeper, configuring a fast fail can trigger pod restarts which can then reassign the pod to a healthy k8s node.

#13913

# Improve memory usage on Historicals

Reduced segment heap footprint.

#14002

# MiddleManager-less extension

# Better sidecar support

The following property has been added to improve support for sidecars:

  • druid.indexer.runner.primaryContainerName=OVERLORD_CONTAINER_NAME: Set this to the name of your Druid container, such as druid-overlord. The default setting is the first container in the podSpec list.

Use this property when Druid is not the first container, such as when you're using Istio and the istio-proxy sidecar gets injected as the first container.

#13655

# Other improvements for MiddleManager-less extension

  • The druid-kubernetes-overlord-extensions can now be loaded in any Druid service. #13872
  • You can now add files to the common configuration directory when deploying on Kubernetes. #13795
  • You can now specify a Kubernetes pod spec per task type. #13896
  • You can now override druid.monitoring.monitors. If you don't want to inherit monitors from the Overlord, you can override the monitors with the following config: druid.indexer.runner.peonMonitors. #14028
  • Added live reports for KubernetesTaskRunner. #13986

# Compaction

# Added a new API for compaction configuration history

Added the API endpoint CoordinatorCompactionConfigsResource#getCompactionConfigHistory to return the history of changes to the automatic compaction configuration. If the datasource does not exist or has no compaction history, an empty list is returned.

#13699 #13730

# Security

# Support for the HTTP Strict-Transport-Security response header

Added support for the HTTP Strict-Transport-Security response header.
Druid does not include this header by default. You must enable it in runtime properties by setting druid.server.http.enableHSTS to true.
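
For example, add the following to your runtime properties:

druid.server.http.enableHSTS=true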

#13489

# Add JWT authenticator support for validating ID Tokens #13242

Expands the OIDC based auth in Druid by adding a JWT Authenticator that validates ID Tokens associated with a request. The existing pac4j authenticator works for authenticating web users while accessing the console, whereas this authenticator is for validating Druid API requests made by Direct clients. Services already supporting OIDC can attach their ID tokens to the Druid requests
under the Authorization request header.

#13242

# Allow custom scope when using pac4j

Updated OpenID Connect extension configuration with scope information.
Applications use druid.auth.pac4j.oidc.scope during authentication to authorize access to a user's details.

#13973

# Web console

# Kafka metadata is included by default when loading Kafka streams with the data loader

The streaming data loader in the console added support for the Kafka input format, which gives you access to the Kafka metadata fields like the key and the Kafka timestamp. This is used by default when you choose a Kafka stream as the data source.

#14017

# Overlord dynamic config

Added a form with JSON fallback to the Overlord dynamic config dialog.

https://github.com/apache/druid/pull/13993

# Other web console improvements

# Docs

# SQL tutorial using Jupyter notebook

Added a new tutorial to our collection of Jupyter Notebook-based Druid tutorials.
This interactive tutorial introduces you to the unique aspects of Druid SQL with the primary focus on the SELECT statement. For more information, see Learn the basics of Druid SQL.

#13465

# Python Druid API

Added a Python API for use in Jupyter notebooks.

#13787

# Updated Docker Compose

This release includes several improvements to the docker-compose.yml file that Druid tutorials reference:

  • Added configuration to the docker-compose.yml file to bind the Postgres instance to its default port (5432).
  • Updated the Broker, Historical, MiddleManager, and Router instances in the docker-compose.yml file to use Druid 24.0.1.
  • Removed trailing whitespace from the docker-compose.yml file.

#13623

# Bug fixes

Druid 26.0.0 contains 80 bug fixes. The complete list is available here.

# Dependency updates

The following dependencies have had their versions bumped:

  • Apache Kafka to version 3.4.0 #13802
  • Apache Zookeeper to version 3.5.10 #13715
  • Joda-Time to version 2.12.4 #13999
  • Kubernetes Java Client to 6.4.1 #14028

The full list is available here.

# Upgrade notes and incompatible changes

# Upgrade notes

# Real-time tasks

Optimized query performance by lowering the default maxRowsInMemory for real-time ingestion, which might lower overall ingestion throughput #13939

# Incompatible changes

# Firehose ingestion removed

The firehose/parser specification used by legacy Druid streaming formats is removed.
Firehose ingestion was deprecated in version 0.17, and support for this ingestion was removed in version 24.0.

#12852

# Information schema now uses numeric column types

The Druid system table (INFORMATION_SCHEMA) now uses SQL types instead of Druid types for columns. This change makes the INFORMATION_SCHEMA table behave more like standard SQL. To avoid unexpected results, you may need to update your queries if they depend on either of the following:

  • Numeric fields being treated as strings.
  • Column numbering starting at 0. Column numbering is now 1-based.

#13777

# frontCoded segment format change

The frontCoded type of stringEncodingStrategy on indexSpec now writes a new segment format version, which typically has faster read speeds and a reduced segment size. This improvement is backwards incompatible with Druid 25.0.

For more information, see the frontCoded string encoding strategy highlight.

# Developer notes

# Null value coercion moved out of expression processing engine

The Druid native expression processing engine no longer coerces null values it receives or creates into the type-appropriate 'default' value when druid.generic.useDefaultValueForNull=true. This should not impact existing behavior, since the coercion has been shifted onto the consumer and operators still use default values internally in this mode. However, there may be subtle behavior changes around the handling of null values. Direct callers can get default values by using the new valueOrDefault() method of ExprEval instead of value().

#13809

# Simplified dependencies

druid-core, extendedset, and druid-hll modules have been consolidated into druid-processing to simplify dependencies. Any extensions referencing these should be updated to use druid-processing instead. Existing extension binaries should continue to function normally when used with newer versions of Druid.

This change does not impact end users. It does impact anyone who develops extensions for Druid.

#13698

# Credits

Thanks to everyone who contributed to this release!

@317brian
@a2l007
@abhagraw
@abhishekagarwal87
@abhishekrb19
@adarshsanjeev
@AdheipSingh
@amaechler
@AmatyaAvadhanula
@anshu-makkar
@ApoorvGuptaAi
@asdf2014
@benkrug
@capistrant
@churromorales
@clintropolis
@cryptoe
@dependabot[bot]
@dongjoon-hyun
@drudi-at-coffee
@ektravel
@EylonLevy
@findingrish
@frankgrimes97
@g1y
@georgew5656
@gianm
@hqx871
@imply-cheddar
@imply-elliott
@isandeep41
@jaegwonseo
@jasonk000
@jgoz
@jwitko
@kaijianding
@kfaraz
@LakshSingla
@maytasm
@nlippis
@p-
@paul-rogers
@pen4
@raboof
@rohangarg
@sairamdevarashetty
@sergioferragut
@somu-imply
@soullkk
@suneet-s
@SurajKadam7
@techdocsmith
@tejasparbat
@tejaswini-imply
@tijoparacka
@TSFenwick
@varachit
@vogievetsky
@vtlim
@winminsoe
@writer-jill
@xvrl
@yurmix
@zachjsh
@zemin-piao

druid - Druid 25.0.0

Published by kfaraz almost 2 years ago

Apache Druid 25.0.0 contains over 300 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 51 contributors.

See the complete set of changes for additional details.

# Highlights

# MSQ task engine now production ready

The multi-stage query (MSQ) task engine used for SQL-based ingestion is now production ready. Use it for any supported workloads. For more information, see the SQL-based ingestion documentation.

# Simplified Druid deployments

The new start-druid script greatly simplifies deploying any combination of Druid services on a single server. It comes pre-packaged with the required configs and can be used to launch a fully functional Druid cluster simply by invoking ./start-druid. For experienced users, it also gives complete control over the runtime properties and JVM arguments, so you can run a cluster that exactly fits your needs.

The start-druid script deprecates the existing profiles such as start-micro-quickstart and start-nano-quickstart. These profiles may be removed in future releases. For more information, see Single server deployment.

# String dictionary compression (experimental)

Added support for front coded string dictionaries for smaller string columns, leading to reduced segment sizes with only minor performance penalties for most Druid queries.

This can be enabled by setting IndexSpec.stringDictionaryEncoding to {"type":"frontCoded", "bucketSize": 4}, where bucketSize is any power of 2 less than or equal to 128. Setting this property instructs indexing tasks to write segments using compressed dictionaries of the specified bucket size.

Any segment written using string dictionary compression is not readable by older versions of Druid.

For more information, see Front coding.

https://github.com/apache/druid/pull/12277
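
As a rough sketch of where the setting sits, a native batch tuningConfig enabling front coding might look like the following (the index_parallel task type is just an example; the same indexSpec block applies to other task types):

"tuningConfig": {
  "type": "index_parallel",
  "indexSpec": {
    "stringDictionaryEncoding": {
      "type": "frontCoded",
      "bucketSize": 4
    }
  }
}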

# Kubernetes-native tasks

Druid can now use Kubernetes to launch and manage tasks, eliminating the need for middle managers.

To use this feature, enable the druid-kubernetes-overlord-extensions in the extensions load list for your Overlord process.

https://github.com/apache/druid/pull/13156

# Hadoop-3 compatible binary

Druid now comes packaged as a dedicated binary for Hadoop-3 users, which contains Hadoop-3 compatible jars. If you do not use Hadoop-3 with your Druid cluster, you may continue using the classic binary.

# Multi-stage query (MSQ) task engine

# MSQ enabled for Docker

The MSQ task engine is now enabled for Docker by default.

https://github.com/apache/druid/pull/13069

# Query history

Multi-stage queries no longer show up in the Query history dialog. They are still available in the Recent query tasks panel.

# Limit on CLUSTERED BY columns

When using the MSQ task engine to ingest data, the number of columns that can be passed in the CLUSTERED BY clause is now limited to 1500.

https://github.com/apache/druid/pull/13352

# Support for string dictionary compression

The MSQ task engine supports front-coding of string dictionaries for better compression. You can enable it for INSERT or REPLACE statements by setting indexSpec to a valid JSON string in the query context.

https://github.com/apache/druid/pull/13275
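
For illustration, a query context enabling front coding for an MSQ INSERT or REPLACE might look like the following sketch, with the indexSpec passed as a JSON string as described above:

"context": {
  "indexSpec": "{\"stringDictionaryEncoding\": {\"type\": \"frontCoded\", \"bucketSize\": 4}}"
}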

# Sketch merging mode

Workers can now gather key statistics, used to generate partition boundaries, either sequentially or in parallel. Set clusterStatisticsMergeMode to PARALLEL, SEQUENTIAL or AUTO in the query context to use the corresponding sketch merging mode. For more information, see Sketch merging mode.

https://github.com/apache/druid/pull/13205
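
For example, a query context that forces sequential merging might look like this sketch:

"context": {
  "clusterStatisticsMergeMode": "SEQUENTIAL"
}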

# Performance and operational improvements

# Querying

# Async reads for JDBC

Prevented JDBC timeouts on long queries by returning empty batches when a batch fetch takes too long. Uses an async model to run the result fetch concurrently with JDBC requests.

https://github.com/apache/druid/pull/13196

# Improved algorithm to check values of an IN filter

To accommodate large value sets arising from large IN filters or from joins pushed down as IN filters, Druid now uses a sorted merge algorithm for merging the set and dictionary for larger values.

https://github.com/apache/druid/pull/13133

# Enhanced query context security

Added the following configuration properties that refine the query context security model controlled by druid.auth.authorizeQueryContextParams:

  • druid.auth.unsecuredContextKeys: A JSON list of query context keys that do not require a security check.
  • druid.auth.securedContextKeys: A JSON list of query context keys that do require a security check.

If both are set, unsecuredContextKeys acts as exceptions to securedContextKeys.

https://github.com/apache/druid/pull/13071

# HTTP response headers

The HTTP response for a SQL query now correctly sets response headers, same as a native query.

https://github.com/apache/druid/pull/13052

# Metrics

# New metrics

The following metrics have been newly added. For more details, see the complete list of Druid metrics.

# Batched segment allocation

These metrics pertain to batched segment allocation.

| Metric | Description | Dimensions |
|---|---|---|
| task/action/batch/runTime | Milliseconds taken to execute a batch of task actions. Currently only being emitted for batched segmentAllocate actions. | dataSource, taskActionType=segmentAllocate |
| task/action/batch/queueTime | Milliseconds spent by a batch of task actions in queue. Currently only being emitted for batched segmentAllocate actions. | dataSource, taskActionType=segmentAllocate |
| task/action/batch/size | Number of task actions in a batch that was executed during the emission period. Currently only being emitted for batched segmentAllocate actions. | dataSource, taskActionType=segmentAllocate |
| task/action/batch/attempts | Number of execution attempts for a single batch of task actions. Currently only being emitted for batched segmentAllocate actions. | dataSource, taskActionType=segmentAllocate |
| task/action/success/count | Number of task actions that were executed successfully during the emission period. Currently only being emitted for batched segmentAllocate actions. | dataSource, taskId, taskType, taskActionType=segmentAllocate |
| task/action/failed/count | Number of task actions that failed during the emission period. Currently only being emitted for batched segmentAllocate actions. | dataSource, taskId, taskType, taskActionType=segmentAllocate |

# Streaming ingestion

| Metric | Description | Dimensions |
|---|---|---|
| ingest/kafka/partitionLag | Partition-wise lag between the offsets consumed by the Kafka indexing tasks and the latest offsets in the Kafka brokers. Minimum emission period for this metric is a minute. | dataSource, stream, partition |
| ingest/kinesis/partitionLag/time | Partition-wise lag time in milliseconds between the current message sequence number consumed by the Kinesis indexing tasks and the latest sequence number in Kinesis. Minimum emission period for this metric is a minute. | dataSource, stream, partition |
| ingest/pause/time | Milliseconds spent by a task in a paused state without ingesting. | dataSource, taskId, taskType |
| ingest/handoff/time | Total time taken in milliseconds for handing off a given set of published segments. | dataSource, taskId, taskType |

https://github.com/apache/druid/pull/13238
https://github.com/apache/druid/pull/13331
https://github.com/apache/druid/pull/13313

# Other improvements

# Nested columns

# Nested columns performance improvement

Improved NestedDataColumnSerializer to no longer explicitly write null values to the field writers for the missing values of every row. Instead, the row counter is now passed to the field writers so that they can backfill null values in bulk.

https://github.com/apache/druid/pull/13101

# Support for more formats

Druid nested columns and the associated JSON transform functions now support Avro, ORC, and Parquet.

https://github.com/apache/druid/pull/13325
https://github.com/apache/druid/pull/13375

# Refactored a datasource before unnest

When data requires "flattening" during processing, the operator now takes in an array and flattens it into N rows, where N is the number of elements in the array and each row holds one of the values from the array.

https://github.com/apache/druid/pull/13085

# Ingestion

# Improved filtering for cloud objects

You can now stop at arbitrary subfolders using glob syntax in the ioConfig.inputSource.filter field for native batch ingestion from cloud storage, such as S3.

https://github.com/apache/druid/pull/13027
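
As a sketch, an S3 input source using a glob filter might look like the following; the bucket and prefix are hypothetical:

"ioConfig": {
  "type": "index_parallel",
  "inputSource": {
    "type": "s3",
    "prefixes": ["s3://example-bucket/events/"],
    "filter": "*.json"
  },
  "inputFormat": {
    "type": "json"
  }
}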

# Async task client for streaming ingestion

You can now enable asynchronous communication between the stream supervisor and indexing tasks by setting chatAsync to true in the tuningConfig. The async task client uses its own internal thread pool and thus ignores the chatThreads property.

https://github.com/apache/druid/pull/13354
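
A minimal sketch of a Kafka supervisor tuningConfig with async task communication enabled:

"tuningConfig": {
  "type": "kafka",
  "chatAsync": true
}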

# Improved handling of JSON data with streaming ingestion

You can now better control how Druid reads JSON data for streaming ingestion by setting the following fields in the input format specification:

  • assumeNewlineDelimited to parse lines of JSON independently.
  • useJsonNodeReader to retain valid JSON events when parsing multi-line JSON events when a parsing exception occurs.

The web console has been updated to include these options.

https://github.com/apache/druid/pull/13089
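
For example, a Kafka ioConfig whose JSON input format treats each line as a separate event might look like this sketch (the topic name is hypothetical):

"ioConfig": {
  "topic": "events",
  "inputFormat": {
    "type": "json",
    "assumeNewlineDelimited": true
  }
}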

# Ingesting from an idle Kafka stream

When a Kafka stream becomes inactive, the supervisor ingesting from it can be configured to stop creating new indexing tasks. The supervisor automatically resumes creating new indexing tasks once the stream becomes active again. To enable this behavior, set ioConfig.idleConfig.enabled to true in the respective supervisor spec, or set druid.supervisor.idleConfig.enabled on the Overlord. See the following for details:

https://github.com/apache/druid/pull/13144
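
A sketch of the relevant part of a Kafka supervisor ioConfig with idling enabled; the topic name is hypothetical and 600000 ms matches the default inactivity window described under Operations below:

"ioConfig": {
  "topic": "metrics",
  "idleConfig": {
    "enabled": true,
    "inactiveAfterMillis": 600000
  }
}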

# Kafka Consumer improvement

You can now configure the Kafka Consumer's custom deserializer after its instantiation.

https://github.com/apache/druid/pull/13097

# Kafka supervisor logging

Kafka supervisor logs are now less noisy. The supervisors now log events at the DEBUG level instead of INFO.

https://github.com/apache/druid/pull/13392

# Fixed Overlord leader election

Fixed a problem where Overlord leader election failed due to lock reacquisition issues. Druid now fails these tasks and clears all locks so that the Overlord leader election isn't blocked.

https://github.com/apache/druid/pull/13172

# Support for inline protobuf descriptor

Added a new inline type protoBytesDecoder that allows a user to pass inline the contents of a Protobuf descriptor file, encoded as a Base64 string.

https://github.com/apache/druid/pull/13192

# Duplicate notices

For streaming ingestion, notices that are identical to one already in the queue are no longer enqueued. This helps reduce the notice queue size.

https://github.com/apache/druid/pull/13334

# Sampling from stream input now respects the configured timeout

Fixed a problem where sampling from a stream input, such as Kafka or Kinesis, failed to respect the configured timeout when the stream had no records available. You can now set the maximum amount of time in which the entry iterator will return results.

https://github.com/apache/druid/pull/13296

# Streaming tasks resume on Overlord switch

Fixed a problem where streaming ingestion tasks continued to run until their duration elapsed after the Overlord leader had issued a pause to the tasks. Now, when the Overlord switch occurs right after it has issued a pause to the task, the task remains in a paused state even after the Overlord re-election.

https://github.com/apache/druid/pull/13223

# Fixed Parquet list conversion

Fixed an issue with Parquet list conversion, where lists of complex objects could unexpectedly be wrapped in an extra object, appearing as [{"element":<actual_list_element>},{"element":<another_one>}...] instead of the direct list. This changes the behavior of the parquet reader for lists of structured objects to be consistent with other parquet logical list conversions. The data is now fetched directly, more closely matching its expected structure.

https://github.com/apache/druid/pull/13294

# Introduced a tree type to flattenSpec

Introduced a tree type to flattenSpec. When only a simple hierarchical lookup is required, the tree type allows for faster JSON parsing than the jq and path parsing types.

https://github.com/apache/druid/pull/12177
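
As an illustration, a flattenSpec using the tree type to pull a value out of a fixed nested path might look like the following sketch; the field name and node path are hypothetical:

"flattenSpec": {
  "useFieldDiscovery": true,
  "fields": [
    {
      "type": "tree",
      "name": "userCity",
      "nodes": ["user", "location", "city"]
    }
  ]
}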

# Operations

# Compaction

Compaction behavior has changed to reduce the time it takes and the disk space it requires:

  • When segments need to be fetched, Druid downloads them one at a time and deletes each one when it is done with it. This still takes time but minimizes the required disk space.
  • Druid doesn't fetch segments on the main compact task when they aren't needed. If the user provides a full granularitySpec, dimensionsSpec, and metricsSpec, Druid skips fetching segments.

For more information, see the documentation on Compaction and Automatic compaction.

https://github.com/apache/druid/pull/13280

# Idle configs for the Supervisor

You can now set the Supervisor to idle, which is useful for freeing up task slots so that autoscaling can be more effective.

To configure the idle behavior, use the following properties:

| Property | Description | Default |
|---|---|---|
| druid.supervisor.idleConfig.enabled | (Cluster wide) If true, a supervisor can become idle if there is no data on the input stream/topic for some time. | false |
| druid.supervisor.idleConfig.inactiveAfterMillis | (Cluster wide) A supervisor is marked as idle if all existing data has been read from the input topic and no new data has been published for inactiveAfterMillis milliseconds. | 600_000 |
| inactiveAfterMillis | (Individual supervisor) A supervisor is marked as idle if all existing data has been read from the input topic and no new data has been published for inactiveAfterMillis milliseconds. | no (default == 600_000) |

https://github.com/apache/druid/pull/13311

# Improved supervisor termination

Fixed issues with delayed supervisor termination during certain transient states.

https://github.com/apache/druid/pull/13072

# Backoff for HttpPostEmitter

The HttpPostEmitter option now has a backoff. This means that there should be less noise in the logs and lower CPU usage if you use this option for logging.

https://github.com/apache/druid/pull/12102

# DumpSegment tool for nested columns

The DumpSegment tool can now be used on nested columns with the --dump nested option.

For more information, see dump-segment tool.

https://github.com/apache/druid/pull/13356

# Segment loading and balancing

# Batched segment allocation

Segment allocation on the Overlord can take some time to finish, which can cause ingestion lag while a task waits for segments to be allocated. Performing segment allocation in batches can help improve performance.

There are two new properties that affect how Druid performs segment allocation:

| Property | Description | Default |
|---|---|---|
| druid.indexer.tasklock.batchSegmentAllocation | If set to true, Druid performs segment allocate actions in batches to improve throughput and reduce the average task/action/run/time. See batching segmentAllocate actions for details. | false |
| druid.indexer.tasklock.batchAllocationWaitTime | Number of milliseconds after Druid adds the first segment allocate action to a batch, until it executes the batch. Allows the batch to add more requests and improve the average segment allocation run time. This configuration takes effect only if batchSegmentAllocation is enabled. | 500 |

In addition to these properties, there are new metrics to track batch segment allocation. For more information, see New metrics for segment allocation.

For more information, see the following:

https://github.com/apache/druid/pull/13369
https://github.com/apache/druid/pull/13503

# Improved cachingCost balancer strategy

The cachingCost balancer strategy now behaves more similarly to cost strategy. When computing the cost of moving a segment to a server, the following calculations are performed:

  • Subtract the self cost of a segment if it is being served by the target server
  • Subtract the cost of segments that are marked to be dropped

https://github.com/apache/druid/pull/13321

# Faster segment assignment

You can now use a round-robin segment assignment strategy to speed up initial segment assignments. Set useRoundRobinSegmentAssignment to true in the Coordinator dynamic config to enable this feature.

https://github.com/apache/druid/pull/13367
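
For example, the dynamic config field looks like this sketch; in practice you would typically fetch the current dynamic config, add the field, and POST the whole object back to /druid/coordinator/v1/config:

{
  "useRoundRobinSegmentAssignment": true
}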

# Default to batch sampling for balancing segments

Batch sampling is now the default method for sampling segments during balancing as it performs significantly better than the alternative when there is a large number of used segments in the cluster.

As part of this change, the following have been deprecated and will be removed in future releases:

  • coordinator dynamic config useBatchedSegmentSampler
  • coordinator dynamic config percentOfSegmentsToConsiderPerMove
  • old non-batch method of sampling segments

# Remove unused property

The unused coordinator property druid.coordinator.loadqueuepeon.repeatDelay has been removed. Use only druid.coordinator.loadqueuepeon.http.repeatDelay to configure repeat delay for the HTTP-based segment loading queue.

https://github.com/apache/druid/pull/13391

# Avoid segment over-replication

Improved the process of checking server inventory to prevent over-replication of segments during segment balancing.

https://github.com/apache/druid/pull/13114

# Provided service specific log4j overrides in containerized deployments

Provided an option to override the log4j configs set up at the service-level directories so that they work with Druid operator-based deployments.

https://github.com/apache/druid/pull/13020

# Various Docker improvements

  • Updated Docker to run with JRE 11 by default.
  • Updated Docker to use gcr.io/distroless/java11-debian11 image as base by default.
  • Enabled Docker buildkit cache to speed up building.
  • Downloaded bash-static to the Docker image so that scripts that require bash can be executed.
  • Bumped builder image from 3.8.4-jdk-11-slim to 3.8.6-jdk-11-slim.
  • Switched busybox from amd64/busybox:1.30.0-glibc to busybox:1.35.0-glibc.
  • Added support to build arm64-based image.

https://github.com/apache/druid/pull/13059

# Enabled cleaner JSON for various input sources and formats

Added JsonInclude to various properties to avoid populating default values in serialized JSON.

https://github.com/apache/druid/pull/13064

# Improved direct memory check on startup

Improved direct memory check on startup by providing better support for Java 9+ in RuntimeInfo, and clearer log messages where validation fails.

https://github.com/apache/druid/pull/13207

# Improved the run time of the MarkAsUnusedOvershadowedSegments duty

Improved the run time of the MarkAsUnusedOvershadowedSegments duty by iterating over all overshadowed segments and marking segments as unused in batches.

https://github.com/apache/druid/pull/13287

# Web console

# Delete an interval

You can now pick an interval to delete from a dropdown in the kill task dialog.

https://github.com/apache/druid/pull/13431

# Removed the old query view

The old query view is removed. Use the new query view with tabs.
For more information, see Web console.

https://github.com/apache/druid/pull/13169

# Filter column values in query results

The web console now allows you to add to existing filters for a selected column.

https://github.com/apache/druid/pull/13169

# Support for Kafka lookups in the web-console

Added support for Kafka-based lookups rendering and input in the web console.

https://github.com/apache/druid/pull/13098

# Query task status information

The web console now exposes a textual indication about running and pending tasks when a query is stuck due to lack of task slots.

https://github.com/apache/druid/pull/13291

# Extensions

# Extension optimization

Optimized the compareTo function in CompressedBigDecimal.

https://github.com/apache/druid/pull/13086

# CompressedBigDecimal cleanup and extension

Removed unnecessary generic type from CompressedBigDecimal, added support for number input types, added support for reading aggregator input types directly (uningested data), and fixed scaling bug in buffer aggregator.

https://github.com/apache/druid/pull/13048

# Support for Kubernetes discovery

Added POD_NAME and POD_NAMESPACE env variables to all Kubernetes Deployments and StatefulSets.
Helm deployment is now compatible with druid-kubernetes-extension.

https://github.com/apache/druid/pull/13262

# Docs

# Jupyter Notebook tutorials

We released our first Jupyter Notebook-based tutorial to learn the basics of the Druid API. Download the notebook and follow along with the tutorial to learn how to get basic cluster information, ingest data, and query data.
For more information, see Jupyter Notebook tutorials.

https://github.com/apache/druid/pull/13342
https://github.com/apache/druid/pull/13345

# Dependency updates

# Updated Kafka version

Updated the Apache Kafka core dependency to version 3.3.1.

https://github.com/apache/druid/pull/13176

# Docker improvements

Updated dependencies for the Druid image for Docker, including JRE 11. Docker BuildKit cache is enabled to speed up building.

https://github.com/apache/druid/pull/13059

# Upgrading to 25.0.0

Consider the following changes and updates when upgrading from Druid 24.0.x to 25.0.0. If you're updating from an earlier version, see the release notes of the relevant intermediate versions.

# Default HTTP-based segment discovery and task management

The default segment discovery method now uses HTTP instead of ZooKeeper.

This update changes the defaults for the following properties:

| Property | New default | Previous default |
|---|---|---|
| druid.serverview.type for segment management | http | batch |
| druid.coordinator.loadqueuepeon.type for segment management | http | curator |
| druid.indexer.runner.type for the Overlord | httpRemote | local |

To use ZooKeeper instead of HTTP, change the values for the properties back to the previous defaults. ZooKeeper-based implementations for these properties are deprecated and will be removed in a subsequent release.

https://github.com/apache/druid/pull/13092

# Finalizing HLL and quantiles sketch aggregates

Previously, the aggregation functions for HLL and quantiles sketches returned either sketches or numbers when finalized, depending on where they were in the native query plan.

Druid no longer finalizes aggregators in the following two cases:

  • aggregators appear in the outer level of a query
  • aggregators are used as input to an expression or finalizing-field-access post-aggregator

This change aligns the behavior of HLL and quantiles sketches with theta sketches.

To restore old behaviour, you can set sqlFinalizeOuterSketches=true in the query context.

https://github.com/apache/druid/pull/13247
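
For illustration, a SQL API payload that restores the old finalizing behavior might look like the following sketch; the datasource and columns are from the tutorial wikipedia dataset and are only an example, and DS_HLL requires the druid-datasketches extension:

{
  "query": "SELECT channel, DS_HLL(\"user\") AS unique_users FROM wikipedia GROUP BY channel",
  "context": {
    "sqlFinalizeOuterSketches": true
  }
}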

# Kill tasks mark segments as unused only if specified

When you issue a kill task, Druid marks the underlying segments as unused only if explicitly specified. For more information, see the API reference.

https://github.com/apache/druid/pull/13104

# Incompatible changes

# Upgrade curator to 5.3.0

Apache Curator has been upgraded to the latest version, 5.3.0. This version drops support for ZooKeeper 3.4, but Druid already officially dropped that support in 0.22. In 5.3.0, Curator has removed support for Exhibitor, so all related configurations and tests have been removed.

https://github.com/apache/druid/pull/12939

# Fixed Parquet list conversion

The behavior of the parquet reader for lists of structured objects has been changed to be consistent with other parquet logical list conversions. The data is now fetched directly, more closely matching its expected structure. See parquet list conversion for more details.

https://github.com/apache/druid/pull/13294

# Credits

Thanks to everyone who contributed to this release!

@317brian
@599166320
@a2l007
@abhagraw
@abhishekagarwal87
@adarshsanjeev
@adelcast
@AlexanderSaydakov
@amaechler
@AmatyaAvadhanula
@ApoorvGuptaAi
@arvindanugula
@asdf2014
@churromorales
@clintropolis
@cloventt
@cristian-popa
@cryptoe
@dampcake
@dependabot[bot]
@didip
@ektravel
@eshengit
@findingrish
@FrankChen021
@gianm
@hnakamor
@hosswald
@imply-cheddar
@jasonk000
@jon-wei
@Junge-401
@kfaraz
@LakshSingla
@mcbrewster
@paul-rogers
@petermarshallio
@rash67
@rohangarg
@sachidananda007
@santosh-d3vpl3x
@senthilkv
@somu-imply
@techdocsmith
@tejaswini-imply
@vogievetsky
@vtlim
@wcc526
@writer-jill
@xvrl
@zachjsh

druid - Druid 24.0.2

Published by kfaraz almost 2 years ago

Apache Druid 24.0.2 is a bug fix release that fixes some issues in the 24.0.1 release.
See the complete set of changes for additional details.

# Bug fixes

https://github.com/apache/druid/pull/13138 to fix dependency errors while launching a Hadoop task.

# Credits

@kfaraz
@LakshSingla

druid - Druid 24.0.1

Published by kfaraz almost 2 years ago

Apache Druid 24.0.1 is a bug fix release that fixes some issues in the 24.0 release.
See the complete set of changes for additional details.

# Notable Bug fixes

https://github.com/apache/druid/pull/13214 to fix SQL planning when using the JSON_VALUE function.
https://github.com/apache/druid/pull/13297 to fix values that match a range filter on nested columns.
https://github.com/apache/druid/pull/13077 to fix detection of nested objects while generating an MSQ SQL in the web-console.
https://github.com/apache/druid/pull/13172 to correctly handle overlord leader election even when tasks cannot be reacquired.
https://github.com/apache/druid/pull/13259 to fix memory leaks from SQL statement objects.
https://github.com/apache/druid/pull/13273 to fix overlord API failures by de-duplicating task entries in memory.
https://github.com/apache/druid/pull/13049 to fix a race condition while processing query context.
https://github.com/apache/druid/pull/13151 to fix assertion error in SQL planning.

# Credits

Thanks to everyone who contributed to this release!

@abhishekagarwal87
@AmatyaAvadhanula
@clintropolis
@gianm
@kfaraz
@LakshSingla
@paul-rogers
@vogievetsky

# Known issues

druid - Druid 24.0.0

Published by abhishekagarwal87 about 2 years ago

Apache Druid 24.0.0 contains over 300 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 67 contributors. See the complete set of changes for additional details.

# Major version bump

Starting with this release, we have dropped the leading 0 from the release version and promoted all other digits one place to the left. Druid is now at major version 24, a jump up from the prior 0.23.0 release. In terms of backward-compatibility or breaking changes, this release is not significantly different than other previous major releases such as 0.23.0 or 0.22.0. We are continuing with the same policy as we have used in prior releases: minimizing the number of changes that require special attention when upgrading, and calling out any that do exist in the release notes. For this release, please refer to the Upgrading to 24.0.0 section for a list of backward-incompatible changes in this release.

# New Features

# Multi-stage query task engine

SQL-based ingestion for Apache Druid uses a distributed multi-stage query architecture, which includes a query engine called the multi-stage query task engine (MSQ task engine). The MSQ task engine extends Druid's query capabilities, so you can write queries that reference external data as well as perform ingestion with SQL INSERT and REPLACE. Essentially, you can perform SQL-based ingestion instead of using JSON ingestion specs that Druid's native ingestion uses. In addition to the easy-to-use syntax, the SQL interface lets you perform transformations that involve multiple shuffles of data.

SQL-based ingestion using the multi-stage query task engine is recommended for batch ingestion starting in Druid 24.0.0. Native batch and Hadoop-based ingestion continue to be supported as well. We recommend you review the known issues and test the feature in a staging environment before rolling out in production. Using the multi-stage query task engine with plain SELECT statements (not INSERT ... SELECT or REPLACE ... SELECT) is experimental.

If you're upgrading from an earlier version of Druid or you're using Docker, you'll need to add the druid-multi-stage-query extension to druid.extensions.loadlist in your common.runtime.properties file.

For more information, refer to the Overview documentation for SQL-based ingestion.

#12524
#12386
#12523
#12589

# Nested columns

Druid now supports directly storing nested data structures in a newly added COMPLEX<json> column type. COMPLEX<json> columns store a copy of the structured data in JSON format as well as specialized internal columns and indexes for nested literal values—STRING, LONG, and DOUBLE types. An optimized virtual column allows Druid to read and filter these values at speeds consistent with standard Druid LONG, DOUBLE, and STRING columns.

Newly added Druid SQL and native JSON functions and an optimized virtual column allow you to extract, transform, and create COMPLEX<json> values at query time. You can also use the JSON functions in INSERT and REPLACE statements in SQL-based ingestion, or in a transformSpec in native ingestion as an alternative to using a flattenSpec object to "flatten" nested data for ingestion.

See SQL JSON functions, native JSON functions, Nested columns, virtual columns, and the feature summary for more detail.

#12753
#12714
#12920

# Updated Java support

Java 11 is fully supported and is no longer experimental. Java 17 support is improved.

#12839

# Query engine updates

# Updated column indexes and query processing of filters

Reworked column indexes to be extraordinarily flexible, which will eventually allow us to model a wide range of index types. Added machinery to build the filters that use the updated indexes, while also allowing other column implementations to implement the built-in index types and provide adapters so that the current set of filters Druid provides can make use of their indexes.

#12388

# Time filter operator

You can now use the Druid SQL operator TIME_IN_INTERVAL to filter query results based on time. Prefer TIME_IN_INTERVAL over the SQL BETWEEN operator to filter on time. For more information, see Date and time functions.

#12662
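
For example, a SQL API payload using the operator might look like this sketch (the datasource and interval are hypothetical):

{
  "query": "SELECT COUNT(*) AS edits FROM wikipedia WHERE TIME_IN_INTERVAL(__time, '2016-06-27/P1D')"
}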

# Null values and the "in" filter

If a values array contains null, the "in" filter matches null values. This differs from the SQL IN filter, which does not match null values.

For more information, see Query filters and SQL data types.
#12863

# Virtual columns in search queries

Previously, a search query could only search on dimensions that existed in the data source. Search queries now support virtual columns as a parameter in the query.

#12720

# Optimize simple MIN / MAX SQL queries on __time

Simple queries like select max(__time) from ds now run as timeBoundary queries to take advantage of the time dimension sorting in a segment. You can set a feature flag to enable this feature.

#12472
#12491

# String aggregation results

The first/last string aggregator now only compares based on values. Previously, the first/last string aggregator's values were compared based on the __time column first and then on values.

If you have existing queries and want to continue using both the __time column and values, update your queries to use ORDER BY MAX(timeCol).

#12773

# Reduced allocations due to Jackson serialization

Introduced and implemented new helper functions in JacksonUtils to enable reuse of SerializerProvider objects.

Additionally, disabled backwards compatibility for map-based rows in the GroupByQueryToolChest by default, which eliminates the need to copy the heavyweight ObjectMapper. Introduced a configuration option to allow administrators to explicitly enable backwards compatibility.

#12468

# Updated IPAddress Java library

Added a new IPAddress Java library dependency to handle IP addresses. The library includes IPv6 support. Additionally, migrated IPv4 functions to use the new library.

#11634

# Query performance improvements

Optimized SQL operations and functions as follows:

  • Vectorized numeric latest aggregators (#12439)
  • Optimized isEmpty() and equals() on RangeSets (#12477)
  • Optimized reuse of Yielder objects (#12475)
  • Operations on numeric columns with indexes are now faster (#12830)
  • Optimized GroupBy by reducing allocations. Reduced allocations by reusing entry and key holders (#12474)
  • Added a vectorized version of string last aggregator (#12493)
  • Added Direct UTF-8 access for IN filters (#12517)
  • Enabled virtual columns to cache their outputs in case Druid calls them multiple times on the same underlying row (#12577)
  • Druid now rewrites a join as a filter when possible in IN joins (#12225)
  • Added automatic sizing for GroupBy dictionaries (#12763)
  • Druid now distributes JDBC connections more evenly amongst brokers (#12817)

# Streaming ingestion

# Kafka consumers

Previously, consumers that were registered and used for ingestion persisted until Kafka deleted them. They were only used to make sure that an entire topic was consumed. There are no longer consumer groups that linger.

#12842

# Kinesis ingestion

You can now perform Kinesis ingestion even if there are empty shards. Previously, all shards had to have at least one record.

#12792

# Batch ingestion

# Batch ingestion from S3

You can now ingest data from endpoints that are different from your default S3 endpoint and signing region.
For more information, see S3 config.
#11798

# Improvements to ingestion in general

This release includes the following improvements for ingestion in general.

# Increased robustness for task management

Added setNumProcessorsPerTask to prevent various automatically-sized thread pools from becoming unreasonably large. It isn't ideal for each task to size its pools as if it is the only process on the entire machine. On large machines, this solves a common cause of OutOfMemoryError due to "unable to create native thread".

#12592

# Avatica JDBC driver

The JDBC driver now follows the JDBC standard and uses two kinds of statements, Statement and PreparedStatement.

#12709

# Eight hour granularity

Druid now accepts the EIGHT_HOUR granularity. You can segment incoming data into EIGHT_HOUR buckets as well as group query results by eight-hour granularity.
#12717
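
A sketch of an ingestion granularitySpec using the new bucket size, spelled here in the lowercase form used for other granularity names such as six_hour:

"granularitySpec": {
  "type": "uniform",
  "segmentGranularity": "eight_hour",
  "queryGranularity": "none"
}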

# Ingestion general

# Updated Avro extension

The previous Avro extension leaked objects from the parser. If these objects leaked into your ingestion, you had objects being stored as a string column with the value as the .toString(). This string column will remain after you upgrade but will return Map.toString() instead of GenericRecord.toString. If you relied on the previous behavior, you can use the Avro extension from an earlier release.

#12828

# Sampler API

The sampler API has additional limits: maxBytesInMemory and maxClientResponseBytes. These options augment the existing options numRows and timeoutMs. maxBytesInMemory can be used to control the memory usage on the Overlord while sampling. maxClientResponseBytes can be used by clients to specify the maximum size of response they would prefer to handle.

#12947

# SQL

# Column order

The DruidSchema and SegmentMetadataQuery properties now preserve column order instead of ordering columns alphabetically. This means that query order better matches ingestion order.

#12754

# Converting JOINs to filter

You can improve performance by pushing JOINs partially or fully to the base table as a filter at runtime by setting the enableRewriteJoinToFilter context parameter to true for a query.

Druid now pushes down join filters in case the query computing join references any columns from the right side.

#12749
#12868

# Add is_active to sys.segments

Added is_active as shorthand for (is_published = 1 AND is_overshadowed = 0) OR is_realtime = 1. This represents "all the segments that should be queryable, whether or not they actually are right now".

#11550
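
For example, the following SQL API payload sketch counts active segments per datasource using the new column:

{
  "query": "SELECT datasource, COUNT(*) AS active_segments FROM sys.segments WHERE is_active = 1 GROUP BY datasource"
}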

# useNativeQueryExplain now defaults to true

The useNativeQueryExplain property now defaults to true. This means that EXPLAIN PLAN FOR returns the explain plan as a JSON representation of equivalent native query(s) by default. For more information, see Broker Generated Query Configuration Supplementation.

#12936

# Running queries with inline data using druid query engine

Some queries that do not refer to any table, such as select 1, are now always translated to a native Druid query with InlineDataSource before execution. If translation is not possible, for queries such as SELECT (1, 2), then an error occurs. In earlier versions, this query would still run.

#12897

# Coordinator/Overlord

# You can configure the Coordinator to kill segments in the future

You can now set druid.coordinator.kill.durationToRetain to a negative period to configure the Druid cluster to kill segments whose interval_end is a date in the future. For example, PT-24H would allow segments to be killed if their interval_end date was 24 hours or less into the future at the time that the kill task is generated by the system.
A cluster operator can also disregard the druid.coordinator.kill.durationToRetain entirely by setting a new configuration, druid.coordinator.kill.ignoreDurationToRetain=true. This ignores interval_end date when looking for segments to kill, and can instead kill any segment marked unused. This new configuration is turned off by default, and a cluster operator should fully understand and accept the risks before enabling it.

# Improved Overlord stability

Reduced contention between the management thread and the reception of status updates from the cluster. This improves the stability of Overlord and all tasks in a cluster when there are large (1000+) task counts.

#12099

# Improved Coordinator segment logging

Updated Coordinator load rule logging to include current replication levels. Added missing segment ID and tier information from some of the log messages.

#12511

# Optimized overlord GET tasks memory usage

Addressed the significant memory overhead caused by the web-console indirectly calling the Overlord’s GET tasks API. This could cause unresponsiveness or Overlord failure when the ingestion tab was opened multiple times.

#12404

# Reduced time to create intervals

To optimize segment cost computation time, Druid now stores the segment interval instead of creating it each time from primitives, and reduces the memory overhead of storing intervals by interning them. The set of intervals for segments is low in cardinality.

#12670

# Brokers/Overlord

Brokers now have a default limit of 25 MB queued per query. Previously, there was no default limit. Depending on your use case, you may need to increase the value, especially if you have large result sets or large amounts of intermediate data. To adjust the maximum memory available, use the druid.broker.http.maxQueuedBytes property.
For more information, see Configuration reference.

# Web console

Prepare to have your Web Console experience elevated! - @vogievetsky

# New query view (WorkbenchView) with tabs and long running query support

You can use the new query view to execute multi-stage, task based, queries with the /druid/v2/sql/task and /druid/indexer/v1/task/* APIs as well as native and sql-native queries just like the old Query view. A key point of the sql-msq-task based queries is that they may run for a long time. This inspired / necessitated many UX changes including, but not limited to the following:

# Tabs

You can now have many queries stored and running at the same time, significantly improving the query view UX.

You can open several tabs, duplicate them, and copy them as text to paste into any console and reopen there.

# Progress reports (counter reports)

Queries run with the multi-stage query task engine have detailed progress reports shown in the summary progress bar and in the detailed execution table that provides summaries of the counters for every step.

# Error and warning reports

Queries run with the multi-stage query task engine present user friendly warnings and errors should anything go wrong.
The new query view has components to visualize these with their full detail including a stack-trace.

# Recent query tasks panel

Queries run with the multi-stage query task engine are tasks. This makes it possible to show queries that are executing currently and that have executed in the recent past.

For any query in the Recent query tasks panel you can view the execution details for it and you can also attach it as a new tab and continue iterating on the query. It is also possible to download the "query detail archive", a JSON file containing all the important details for a given query to use for troubleshooting.

# Connect external data flow

The Connect external data flow lets you use the sampler to sample your source data, determine its schema, and generate a fully formed SQL query that you can edit to fit your use case before you launch your ingestion job. This point-and-click flow saves you a lot of typing.

# Preview button

The Preview button appears when you type in an INSERT or REPLACE SQL query. Click the button to remove the INSERT or REPLACE clause and execute your query as an "inline" query with a limit. This gives you a sense of the shape of your data after Druid applies all your transformations from your SQL query.

# Results table

The query results table has been improved in style and function. It now shows you type icons for the column types and supports the ability to manipulate nested columns with ease.

# Helper queries

The Web Console now has some UI affordances for notebook and CTE users. You can reference helper queries, collapsible elements that hold a query, from the main query just as if they were defined with a WITH statement. When you are composing a complicated query, it is helpful to break it down into multiple queries and preview the parts individually.

# Additional Web Console tools

More tools are available from the ... menu:

  • Explain query - show the query plan for sql-native and multi-stage query task engine queries.
  • Convert ingestion spec to SQL - Helps you migrate your native batch and Hadoop based specs to the SQL-based format.
  • Open query detail archive - lets you open a query detail archive downloaded earlier.
  • Load demo queries - lets you load a set of pre-made queries to play around with multi-stage query task engine functionality.

# New SQL-based data loader

The data loader exists as a GUI wizard to help users craft a JSON ingestion spec using point and click and quick previews. The SQL data loader is the SQL-based ingestion analog of that.

Like the native based data loader, the SQL-based data loader stores all the state in the SQL query itself. You can opt to manipulate the query directly at any stage. See (#12919) for more information about how the data loader differs from the Connect external data workflow.

# Other changes and improvements

  • The query view has so much new functionality that it has moved to the far left as the first view available in the header.
  • You can now click on a datasource or segment to see a preview of the data within.
  • The task table now explicitly shows if a task has been canceled in a different color than a failed task.
  • The user experience when you view a JSON payload in the Druid console has been improved. There’s now syntax highlighting and a search.
  • The Druid console can now use the column order returned by a scan query to determine the column order for reindexing data.
  • The way errors are displayed in the Druid console has been improved. Errors no longer appear as a single long line.

See (#12919) for more details and other improvements

# Metrics

# Sysmonitor stats for Peons

Sysmonitor stats, like memory or swap, are no longer reported for Peons since Peons always run on the same host as MiddleManagers. This means that duplicate stats are no longer reported.

#12802

# Prometheus

You can now include the host and service as labels for Prometheus by setting the following properties to true:

  • druid.emitter.prometheus.addHostAsLabel
  • druid.emitter.prometheus.addServiceAsLabel

#12769

# Rows per segment

(Experimental) You can now see the average number of rows in a segment and the distribution of segments in predefined buckets with the following metrics: segment/rowCount/avg and segment/rowCount/range/count.
Enable the metrics by adding org.apache.druid.server.metrics.SegmentStatsMonitor to your druid.monitoring.monitors list.
#12730

# New sqlQuery/planningTimeMs metric

There’s a new sqlQuery/planningTimeMs metric for SQL queries that computes the time it takes to build a native query from a SQL query.

#12923

# StatsD metrics reporter

The StatsD metrics reporter extension now includes the following metrics:

  • coordinator/time
  • coordinator/global/time
  • tier/required/capacity
  • tier/total/capacity
  • tier/replication/factor
  • tier/historical/count
  • compact/task/count
  • compactTask/maxSlot/count
  • compactTask/availableSlot/count
  • segment/waitCompact/bytes
  • segment/waitCompact/count
  • interval/waitCompact/count
  • segment/skipCompact/bytes
  • segment/skipCompact/count
  • interval/skipCompact/count
  • segment/compacted/bytes
  • segment/compacted/count
  • interval/compacted/count

#12762

# New worker level task metrics

Added a new monitor, WorkerTaskCountStatsMonitor, that allows each MiddleManager worker to report metrics for successful/failed tasks and task slot usage.

#12446

# Improvements to the JvmMonitor

The JvmMonitor can now handle more generation and collector scenarios. The monitor is more robust and works properly for ZGC on both Java 11 and 15.

#12469

# Garbage collection

Garbage collection metrics now use MXBeans.

#12481

# Metric for task duration in the pending queue

Introduced the metric task/pending/time to measure how long a task stays in the pending queue.

#12492

# Emit metrics object for Scan, Timeseries, and GroupBy queries during cursor creation

Added a vectorized metric for Scan, Timeseries, and GroupBy queries.

#12484

# Emit state of replace and append for native batch tasks

Druid now emits metrics so you can monitor and assess the use of different types of batch ingestion, in particular replace and tombstone creation.

#12488
#12840

# KafkaEmitter emits queryType

The KafkaEmitter now properly emits the queryType property for native queries.

#12915

# Security

You can now hide properties that are sensitive in the API response from /status/properties, such as S3 access keys. Use the druid.server.hiddenProperties property in common.runtime.properties to specify the properties (case insensitive) you want to hide.

#12950

# Other changes

  • You can now configure the retention period for request logs stored on disk with the druid.request.logging.durationToRetain property. Set the retention period to be longer than P1D (#12559)
  • You can now specify liveness and readiness probe delays for the historical StatefulSet in your values.yaml file. The default is 60 seconds (#12805)
  • Improved exception message for native binary operators (#12335)
  • Improved error messages when URI points to a file that doesn't exist (#12490)
  • Improved build performance of modules (#12486)
  • Improved lookups made using the druid-kafka-extraction-namespace extension to handle records that have been deleted from a kafka topic (#12819)
  • Updated core Apache Kafka dependencies to 3.2.0 (#12538)
  • Updated ORC to 1.7.5 (#12667)
  • Updated Jetty to 9.4.41.v20210516 (#12629)
  • Added Zstandard compression library to CompressionStrategy (#12408)
  • Updated the default gzip buffer size to 8 KB for improved performance (#12579)
  • Updated the default inputSegmentSizeBytes in Compaction configuration to 100,000,000,000,000 (~100TB)

# Bug fixes

Druid 24.0 contains over 68 bug fixes. You can find the complete list here.

# Upgrading to 24.0

# Permissions for multi-stage query engine

To read external data using the multi-stage query task engine, you must have READ permissions for the EXTERNAL resource type. Users without the correct permission encounter a 403 error when trying to run SQL queries that include EXTERN.

The way you assign the permission depends on your authorizer. For example, with [basic security](/docs/development/extensions-core/druid-basic-security.md) in Druid, add the EXTERNAL READ permission by sending a POST request to the roles API.

The example adds permissions for users with the admin role using a basic authorizer named MyBasicMetadataAuthorizer. The following permissions are granted:

  • DATASOURCE READ
  • DATASOURCE WRITE
  • CONFIG READ
  • CONFIG WRITE
  • STATE READ
  • STATE WRITE
  • EXTERNAL READ
curl --location --request POST 'http://localhost:8081/druid-ext/basic-security/authorization/db/MyBasicMetadataAuthorizer/roles/admin/permissions' \
--header 'Content-Type: application/json' \
--data-raw '[
{
  "resource": {
    "name": ".*",
    "type": "DATASOURCE"
  },
  "action": "READ"
},
{
  "resource": {
    "name": ".*",
    "type": "DATASOURCE"
  },
  "action": "WRITE"
},
{
  "resource": {
    "name": ".*",
    "type": "CONFIG"
  },
  "action": "READ"
},
{
  "resource": {
    "name": ".*",
    "type": "CONFIG"
  },
  "action": "WRITE"
},
{
  "resource": {
    "name": ".*",
    "type": "STATE"
  },
  "action": "READ"
},
{
  "resource": {
    "name": ".*",
    "type": "STATE"
  },
  "action": "WRITE"
},
{
  "resource": {
    "name": "EXTERNAL",
    "type": "EXTERNAL"
  },
  "action": "READ"
}
]'

# Behavior for unused segments

Druid automatically retains any segments marked as unused. Previously, Druid permanently deleted unused segments from metadata store and deep storage after their duration to retain passed. This behavior was reverted from 0.23.0.
#12693

# Default for druid.processing.fifo

The default for druid.processing.fifo is now true. This means that tasks of equal priority are treated in a FIFO manner. For most use cases, this change can improve performance on heavily loaded clusters.

#12571

# Update to JDBC statement closure

In previous releases, Druid automatically closed the JDBC Statement when the ResultSet was closed. Druid closed the ResultSet on EOF. Druid closed the statement on any exception. This behavior is, however, non-standard.
In this release, Druid's JDBC driver follows the JDBC standards more closely:
The ResultSet closes automatically on EOF, but does not close the Statement or PreparedStatement. Your code must close these statements, perhaps by using a try-with-resources block.
The PreparedStatement can now be used multiple times with different parameters. (Previously this was not true since closing the ResultSet closed the PreparedStatement.)
If any call to a Statement or PreparedStatement raises an error, the client code must still explicitly close the statement. According to the JDBC standards, statements are not closed automatically on errors. This allows you to obtain information about a failed statement before closing it.
If you have code that depended on the old behavior, you may have to change your code to add the required close statement.

#12709

# Known issues

# Credits

@2bethere
@317brian
@a2l007
@abhagraw
@abhishekagarwal87
@abhishekrb19
@adarshsanjeev
@aggarwalakshay
@AmatyaAvadhanula
@BartMiki
@capistrant
@chenrui333
@churromorales
@clintropolis
@cloventt
@CodingParsley
@cryptoe
@dampcake
@dependabot[bot]
@dherg
@didip
@dongjoon-hyun
@ektravel
@EsoragotoSpirit
@exherb
@FrankChen021
@gianm
@hellmarbecker
@hwball
@iandr413
@imply-cheddar
@jarnoux
@jasonk000
@jihoonson
@jon-wei
@kfaraz
@LakshSingla
@liujianhuanzz
@liuxiaohui1221
@lmsurpre
@loquisgon
@machine424
@maytasm
@MC-JY
@Mihaylov93
@nishantmonu51
@paul-rogers
@petermarshallio
@pjfanning
@rockc2020
@rohangarg
@somu-imply
@suneet-s
@superivaj
@techdocsmith
@tejaswini-imply
@TSFenwick
@vimil-saju
@vogievetsky
@vtlim
@williamhyun
@wiquan
@writer-jill
@xvrl
@yuanlihan
@zachjsh
@zemin-piao

druid - Druid 0.23.0

Published by abhishekagarwal87 over 2 years ago

Apache Druid 0.23.0 contains over 450 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 81 contributors. See the complete set of changes for additional details.

# New Features

# Query engine

# Grouping on arrays without exploding the arrays

You can now group on a multi-value dimension as an array. For a datasource named "test":

{"timestamp": "2011-01-12T00:00:00.000Z", "tags": ["t1","t2","t3"]}  #row1
{"timestamp": "2011-01-13T00:00:00.000Z", "tags": ["t3","t4","t5"]}  #row2
{"timestamp": "2011-01-14T00:00:00.000Z", "tags": ["t5","t6","t7"]}  #row3
{"timestamp": "2011-01-14T00:00:00.000Z", "tags": []}                #row4

The following query:

{
  "queryType": "groupBy",
  "dataSource": "test",
  "intervals": [
    "1970-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"
  ],
  "granularity": {
    "type": "all"
  },
  "virtualColumns" : [ {
    "type" : "expression",
    "name" : "v0",
    "expression" : "mv_to_array(\"tags\")",
    "outputType" : "ARRAY<STRING>"
  } ],
  "dimensions": [
    {
      "type": "default",
      "dimension": "v0",
      "outputName": "tags"
      "outputType":"ARRAY<STRING>"
    }
  ],
  "aggregations": [
    {
      "type": "count",
      "name": "count"
    }
  ]
}

Returns the following:

[
  {
    "timestamp": "1970-01-01T00:00:00.000Z",
    "event": {
      "count": 1,
      "tags": []
    }
  },
  {
    "timestamp": "1970-01-01T00:00:00.000Z",
    "event": {
      "count": 1,
      "tags": ["t1","t2","t3"]
    }
  },
  {
    "timestamp": "1970-01-01T00:00:00.000Z",
    "event": {
      "count": 1,
      "tags": ["t3","t4","t5"]
    }
  },
  {
    "timestamp": "1970-01-01T00:00:00.000Z",
    "event": {
      "count": 2,
      "tags": ["t5","t6","t7"]
    }
  }
]

(#12078)
(#12253)

# Specify a column other than __time column for row comparison in first/last aggregators

You can now specify the time column for *first/*last aggregators by using the LATEST_BY and EARLIEST_BY SQL functions. This supports cases where the time is stored in a column other than "__time", letting you use another logical time column for row comparison. See the sketch below.
(#11949)
(#12145)
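
For illustration, a hedged SQL sketch using a hypothetical orders datasource with an updated_at timestamp column and a numeric amount column (all names are placeholders):

SELECT
  "customer_id",
  EARLIEST_BY("amount", "updated_at") AS first_amount,
  LATEST_BY("amount", "updated_at") AS latest_amount
FROM "orders"
GROUP BY "customer_id"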

# Improvements to querying user experience

This release includes several improvements for querying:

  • Added the SQL query ID to response header for failed SQL query to aid in locating the error messages (#11756)
  • Added input type validation for DataSketches HLL (#12131)
  • Improved JDBC logging (#11676)
  • Added SQL functions MV_FILTER_ONLY and MV_FILTER_NONE to filter rows of multi-value string dimensions to include only the supplied list of values or none of them respectively (#11650)
  • Added ARRAY_CONCAT_AGG to aggregate array inputs together into a single array (#12226)
  • Added the ability to authorize the usage of query context parameters (#12396)
  • Improved query IDs to make it easier to link queries and sub-queries for end-to-end query visibility (#11809)
  • Added a safe divide function to protect against division by 0 (#11904)
  • You can now add a query context to internally generated SegmentMetadata query (#11429)
  • Added support for Druid complex types to the native expression processing system to make all Druid data usable within expressions (#11853, #12016)
  • You can control the size of the on-heap segment-level dictionary via druid.query.groupBy.maxSelectorDictionarySize when grouping on string or array-valued expressions that do not have pre-existing dictionaries.
  • You have better protection against filter explosion during CNF conversion (#12314) (#12324)
  • You can get the complete native query on explaining the SQL query by setting useNativeQueryExplain to true in query context (#11908)
  • You can now have the Broker ignore realtime nodes or specific Historical tiers. (#11766) (#11732)

# Streaming Ingestion

# Kafka input format for parsing headers and key

We've introduced a Kafka input format so you can ingest header data in addition to the message contents. You can now ingest:

  • the event key field
  • event headers
  • the Kafka event timestamp
  • the Kafka event value that stores the payload.

(#11630)

# Kinesis ingestion - Improvements

We have made the following improvements to Kinesis ingestion (a combined tuning config sketch follows this list):

  • Re-sharding can affect and slow down ingestion because many intermediate empty shards are created. These shards get assigned to tasks, causing an imbalance in load assignment. You can set skipIgnorableShards to true in the Kinesis ingestion tuning config to ignore such shards. (#12235)
  • Kinesis ingestion uses DescribeStream to fetch the list of shards by default. This call is deprecated and slower. In this release, you can switch to the newer ListShards API by setting useListShards to true in the Kinesis ingestion tuning config. (#12161)
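
A minimal sketch of the corresponding supervisor tuningConfig fragment (other tuning fields omitted):

"tuningConfig": {
  "type": "kinesis",
  "skipIgnorableShards": true,
  "useListShards": true
}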

# Native Batch Ingestion

# Multi-dimension range partitioning

Multi-dimension range partitioning allows users to partition their data on the ranges of any number of dimensions. It builds on the concepts behind "single-dim" partitioning and is now arguably the preferred secondary partitioning scheme, both for query performance and storage efficiency.
(#11848)
(#11973)

# Improved replace data behavior

In previous versions of Druid, if you ingested data with the dropExisting flag set in order to replace data, Druid would retain the existing data for a time chunk if there was no new data to replace it. Now, if you set dropExisting to true in your ioConfig and ingest data for a time range that includes a time chunk with no data, Druid uses a tombstone to overshadow the existing data in the empty time chunk.
(#12137)
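
A hedged sketch of an ioConfig fragment for a native batch task using this behavior (the local input source and paths are illustrative only):

"ioConfig": {
  "type": "index_parallel",
  "inputSource": {
    "type": "local",
    "baseDir": "/tmp/data",
    "filter": "*.json"
  },
  "inputFormat": { "type": "json" },
  "appendToExisting": false,
  "dropExisting": true
}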

This release includes several improvements for native batch ingestion:

  • Druid now emits a new metric when a batch task finishes waiting for segment availability. (#11090)
  • Added segmentAvailabilityWaitTimeMs, the duration in milliseconds that a task waited for its segments to be handed off to Historical nodes, to IngestionStatsAndErrorsTaskReportData (#11090)
  • Added functionality to preserve existing metrics during ingestion (#12185)
  • Parallel native batch task can now provide task reports for the sequential and single phase mode (e.g., used with dynamic partitioning) as well as single phase mode subtasks (#11688)
  • Added support for RowStats in druid/indexer/v1/task/{task_id}/reports API for multi-phase parallel indexing task (#12280)
  • Fixed the OOM failures in the dimension distribution phase of parallel indexing (#12331)
  • Added support to handle null dimension values while creating partition boundaries (#11973)

# Improvements to ingestion in general

This release includes several improvements for ingestion in general:

  • Removed the template modifier from IncrementalIndex<AggregatorType> because it is no longer required
  • You can now use JsonPath functions in JsonPath expressions during ingestion (#11722)
  • Druid no longer creates a materialized list of segment files and no longer loops over the files, reducing OOM issues (#11903)
  • Added an intermediate-persist IndexSpec to the main "merge" method in IndexMerger (#11940)
  • Granularity.granularitiesFinerThan now returns ALL if you pass in ALL (#12003)
  • Added a configuration parameter for appending tasks to allow them to use a SHARED lock (#12041)
  • SchemaRegistryBasedAvroBytesDecoder now throws a ParseException instead of RE when it fails to retrieve a schema (#12080)
  • Added includeAllDimensions to dimensionsSpec to put all explicit dimensions first in InputRow and subsequently any other dimensions found in input data (#12276)
  • Added the ability to store null columns in segments (#12279)

# Compaction

This release includes several improvements for compaction:

  • Automatic compaction now supports complex dimensions (#11924)
  • Automatic compaction now supports overlapping segment intervals (#12062)
  • You can now configure automatic compaction to calculate the ratio of slots available for compaction tasks from maximum slots, including autoscaler maximum worker nodes (#12263)
  • You can now configure the Coordinator auto compaction duty period separately from other indexing duties (#12263)
  • The default inputSegmentSizeBytes has been changed to approximately 100 TB (#12534)
  • You can change query granularity, change dimension schema, filter data, add metrics through auto-compaction (#11856) (#11874) (#11922) (#12125)
  • You can control roll-up as well for auto and manual compaction (#11850)

# SQL

# Human-readable and actionable SQL error messages

Until version 0.22.1, if you issued an unsupported SQL query, Druid would throw very cryptic and unhelpful error messages. With this change, error messages include exactly the part of the SQL query that is not supported in Druid, for example, when you run a scan query that is ordered on a dimension other than the time column.

(#11911)

# Cancel API for SQL queries

We've added a new API to cancel SQL queries, so you can now cancel SQL queries just like you can cancel native queries. You can use the API from the web console. In previous versions, cancellation from the console only closed the client connection while the SQL query kept running on Druid.

(#11643)
(#11738)
(#11710)

# Improved SQL compatibility

We have made changes to expressions that make expression evaluation more SQL compliant. This new behavior is disabled by default. It can be enabled by setting druid.expressions.useStrictBooleans to true, as shown below. We recommend enabling this behavior since it is also more performant in some cases.

(#11184)
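
A minimal sketch of the opt-in, placed in the common runtime properties:

druid.expressions.useStrictBooleans=true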

# Improvements to SQL user experience

This release includes several additional improvements for SQL:

  • You no longer need to include a trailing slash / for JDBC connections to Druid (#11737)
  • You can now use scans as outer queries (#11831)
  • Added a class to sanitize JDBC exceptions and to log them (#11843)
  • Added type headers to response format to make it easier for clients to interpret the results of SQL queries (#11914)
  • Improved the way the DruidRexExecutor handles numeric arrays (#11968)
  • Druid now returns an empty result after optimizing a GROUP BY query to a time series query (#12065)
  • As an administrator, you can now configure the implementation for APPROX_COUNT_DISTINCT and COUNT(DISTINCT expr) in approximate mode (#11181)

# Coordinator/Overlord

  • The Coordinator can be overwhelmed by connections from other Druid services, especially when TLS is enabled. You can mitigate this by setting druid.global.http.eagerInitialization to false in the common runtime properties, as shown below.
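
A minimal sketch of the mitigation in the common runtime properties:

druid.global.http.eagerInitialization=false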

# Web console

  • Query view can now cancel all queries issued from it (#11738)
  • The auto refresh functions now run in the foreground only (#11750). This prevents forgotten background console tabs from putting load on the cluster.
  • Add a Segment size (in bytes) column to the Datasources view (#11797)
  • Format numbers with commas in the query view (#12031)
  • Add a JSON Diff view for supervisor specs (#12085)

image

  • Improve the formatting and info contents of code auto suggestion docs (#12085)

image

  • Add shard detail column to segments view (#12212)

image

  • Avoid refreshing tables if a menu is open (#12435)
  • Misc other bug fixes and usability improvements

# Metrics

# Vectorized dimension in query metrics

Query metrics now also set the vectorized dimension by default. This can be helpful in understanding the performance profile of queries.

#12464

# Auto-compaction duty metrics

Auto-compaction duties now also report duty metrics. A dimension to indicate the duty group has been added as well.

#12352

This release includes several additional improvements for metrics:

  • Druid includes the Prometheus emitter by default (#11812)
  • Fixed the missing conversionFactor in Prometheus emitter (#12338)
  • Fixed an issue with the ingest/events/messageGap metric (#12337)
  • Added metrics for Shenandoah GC (#12369)
  • Added metrics as follows: Cpu and CpuSet to java.util.metrics.cgroups, ProcFsUtil for procfs info, and CgroupCpuMonitor and CgroupCpuSetMonitor (#11763)
  • Added support to route data through an HTTP proxy (#11891)
  • Added more metrics for Jetty server thread pool usage (#11113)
  • Added worker category as a dimension to the TaskSlot metrics of the indexing service (#11554)
  • Added partitioningType dimension to segment/added/bytes metric to track usage of different partitioning schemes (#11902)
  • Added query laning metrics to visualize lane assignment (#12111)

# Cloud integrations

# Allow authenticating via shared access resource for Azure storage

#12266

# Other changes

  • Druid now processes lookup load failures more quickly (#12397)
  • BalanceSegments#balanceServers now exits early when there is no balancing work to do (#11768)
  • DimensionHandler now allows you to define a DimensionSpec appropriate for the type of dimension to handle (#11873)
  • Added an interface for external schema providers to Druid SQL (#12043)

# Security fixes

# Support for access control on setting query contexts

Previously, users were allowed to set any query context parameters. This can cause 1) a bad user experience if a context parameter is not yet mature, or 2) even query failures or system faults in the worst case if a sensitive parameter, such as maxSubqueryRows, is abused. Druid now has the ability to limit context parameters per user role. This means a query fails if it sets a context parameter that the user is not authorized to use.

Context parameter authorization can be enabled using druid.auth.authorizeQueryContextParams, as shown below. This is disabled by default to enable a smoother upgrade experience.

(#12396)
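
A minimal sketch of enabling it in the common runtime properties:

druid.auth.authorizeQueryContextParams=true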

# Other security improvements

This release includes several additional improvements for security:

  • You can now optionally enable authorization on Druid system tables (#11720)
  • Log4j2 has been upgraded to 2.17.1 (#12106)

# Performance improvements

# Ingestion

  • More accurate memory estimations while building an on-heap incremental index. Rather than using the maximum possible aggregated row size, Druid can now use (based on a task context flag) a closer estimate of the actual heap footprint of an aggregated row. This enables the indexer to fit more rows in memory before performing an intermediate persist. (#12073)

# SQL

  • Vectorized virtual column processing is enabled by default. It improves performance for the majority of queries. (#12520)
  • Improved performance for SQL queries with large IN filters. You can achieve better performance by reducing inSubQueryThreshold in the SQL query context. (#12357)
  • time_shift is now vectorized (#12254)

# Bug fixes

Druid 0.23.0 contains over 68 bug fixes. You can find the complete list here

# Upgrading to 0.23.0

Consider the following changes and updates when upgrading from Druid 0.22.x to 0.23.0. If you're updating from an earlier version than 0.22.1, see the release notes of the relevant intermediate versions.

# Auto-killing of segments

In 0.23.0, auto-killing of segments is enabled by default (#12187). The new defaults kill all unused segments older than 90 days. If you do not want this behavior after an upgrade, you should explicitly disable it. This is a risky change since, depending on the interval, segments may be killed immediately after being marked unused. This behavior will be reverted or changed in the next Druid release. Please see (#12693) for more details.
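
As a sketch, assuming the standard Coordinator kill configuration key (verify the exact property names against the configuration reference for your version), disabling the automatic kill in the common runtime properties could look like:

druid.coordinator.kill.on=false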

# Other changes

  • Kinesis ingestion requires listShards API access on the stream.
  • Kafka clients libraries have been upgraded to 3.0.0 (#11735)
  • The dynamic Coordinator config percentOfSegmentsToConsiderPerMove has been deprecated and will be removed in a future release of Druid. It is being replaced by a new segment-picking strategy introduced in (#11257). This new strategy is toggled off by default, but can be toggled on by setting the dynamic Coordinator config useBatchedSegmentSampler to true. Doing so disables the use of the deprecated percentOfSegmentsToConsiderPerMove. In a future release, useBatchedSegmentSampler will become permanently true. (#11960)

# Developer notices

# Updated airline dependency to 2.x

https://github.com/airlift/airline is no longer maintained, so Druid has upgraded to https://github.com/rvesse/airline (Airline 2) to use an actively
maintained version, while minimizing breaking changes.

This is a backwards incompatible change, and custom extensions relying on the CliCommandCreator extension point will also need to be updated.

#12270

# Return 404 instead of 400 for unknown supervisors or tasks

Previously, the supervisor and task endpoints returned 400 when a supervisor or task was not found. This status code is unfriendly and confusing for third-party systems, and according to the HTTP specification, 404 is the correct code for such cases. We have therefore changed the status code from 400 to 404 to eliminate the ambiguity. Any clients of these endpoints should update their response code handling accordingly.

#11724

# Return 400 instead of 500 when SQL query cannot be planned

Any SQL query that cannot be planned by Druid is now considered a bad request. For such queries, Druid returns 400 instead of 500. Developers using the SQL API should update their response code handling if needed.

#12033

# ResponseContext refactoring

0.23.0 changes the ResponseContext and its keys in a breaking way. The prior version of the response context suggested that keys be defined in an enum, then registered. This version suggests that keys be defined as objects, then registered. See the ResponseContext class itself for the details.

(#11828)

# Other changes

  • SingleServerInventoryView has been removed. (#11770)
  • LocalInputSource does not allow ingesting same file multiple times. (#11965)
  • getType() in PostAggregator is deprecated in favour of getType(ColumnInspector) (#11818)

# Known issues

For a full list of open issues, please see https://github.com/apache/druid/labels/Bug.

# Credits

Thanks to everyone who contributed to this release!

@2bethere
@317brian
@a2l007
@abhishekagarwal87
@adarshsanjeev
@aggarwalakshay
@AlexanderSaydakov
@AmatyaAvadhanula
@andreacyc
@ApoorvGuptaAi
@arunramani
@asdf2014
@AshishKapoor
@benkrug
@capistrant
@Caroline1000
@cheddar
@chenhuiyeh
@churromorales
@clintropolis
@cryptoe
@davidferlay
@dbardbar
@dependabot[bot]
@didip
@dkoepke
@dungdm93
@ektravel
@emirot
@FrankChen021
@gianm
@hqx871
@iMichka
@imply-cheddar
@isandeep41
@IvanVan
@jacobtolar
@jasonk000
@jgoz
@jihoonson
@jon-wei
@josephglanville
@joyking7
@kfaraz
@klarose
@LakshSingla
@liran-funaro
@lokesh-lingarajan
@loquisgon
@mark-imply
@maytasm
@mchades
@nikhil-ddu
@paul-rogers
@petermarshallio
@pjain1
@pjfanning
@rohangarg
@samarthjain
@sergioferragut
@shallada
@somu-imply
@sthetland
@suneet-s
@syacobovitz
@Tassatux
@techdocsmith
@tejaswini-imply
@themarcelor
@TSFenwick
@uschindler
@v-vishwa
@Vespira
@vogievetsky
@vtlim
@wangxiaobaidu11
@williamhyun
@wjhypo
@xvrl
@yuanlihan
@zachjsh

druid - druid-0.22.1

Published by jihoonson almost 3 years ago

Apache Druid 0.22.1 is a bug fix release that fixes some security issues. See the complete set of changes for additional details.

# Bug fixes

https://github.com/apache/druid/pull/12051 Update log4j to 2.15.0 to address CVE-2021-44228
https://github.com/apache/druid/pull/11787 JsonConfigurator no longer logs sensitive properties
https://github.com/apache/druid/pull/11786 Update axios to 0.21.4 to address CVE-2021-3749
https://github.com/apache/druid/pull/11844 Update netty4 to 4.1.68 to address CVE-2021-37136 and CVE-2021-37137

# Credits

Thanks to everyone who contributed to this release!

@abhishekagarwal87
@andreacyc
@clintropolis
@gianm
@jihoonson
@kfaraz
@xvrl

druid - druid-0.22.0

Published by clintropolis about 3 years ago

Apache Druid 0.22.0 contains over 400 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 73 contributors. See the complete set of changes for additional details.

# New features

# Query engine

# Support for multiple distinct aggregators in same query

Druid now can support multiple DISTINCT 'exact' counts using the grouping aggregator typically used with grouping sets. Note that this only applies to exact counts - when druid.sql.planner.useApproximateCountDistinct is false, and can be enabled by setting druid.sql.planner.useGroupingSetForExactDistinct to true.

https://github.com/apache/druid/pull/11014

# SQL ARRAY_AGG and STRING_AGG aggregator functions

The ARRAY_AGG aggregation function has been added, to allow accumulating values or distinct values of a column into a single array result. This release also adds STRING_AGG, which is similar to ARRAY_AGG, except it joins the array values into a single string with a supplied 'delimiter' and it ignores null values. Both of these functions accept a maximum size parameter to control maximum result size, and will fail if this value is exceeded. See SQL documentation for additional details.

https://github.com/apache/druid/pull/11157
https://github.com/apache/druid/pull/11241

# Bitwise math function expressions and aggregators

Several new SQL functions for performing 'bitwise' math have been added (along with corresponding native expressions), including BITWISE_AND, BITWISE_OR, BITWISE_XOR, and so on. Additionally, the aggregation functions BIT_AND, BIT_OR, and BIT_XOR have been added to accumulate values in a column with the corresponding bitwise function. For complete details see the SQL documentation.

https://github.com/apache/druid/pull/10605
https://github.com/apache/druid/pull/10823
https://github.com/apache/druid/pull/11280

# Human readable number format functions

Three new SQL and native expression number format functions have been added in Druid 0.22.0, HUMAN_READABLE_BINARY_BYTE_FORMAT, HUMAN_READABLE_DECIMAL_BYTE_FORMAT, and HUMAN_READABLE_DECIMAL_FORMAT, which allow transforming results into a more friendly consumption format for query results. For more information see SQL documentation.

https://github.com/apache/druid/issues/10584
https://github.com/apache/druid/pull/10635

# Expression aggregator

Druid 0.22.0 adds a new 'native' JSON query expression aggregator function, that lets you use Druid native expressions to perform "fold" (alternatively known as "reduce") operations to accumulate some value on any number of input columns. This adds significant flexibility to what can be done in a Druid aggregator, similar in a lot of ways to what was possible with the Javascript aggregator, but in a much safer, sandboxed manner.

Expressions now being able to perform a "fold" on input columns also really rounds out the abilities of native expressions in addition to the previously possible "map" (expression virtual columns), "filter" (expression filters) and post-transform (expression post-aggregators) functions.

Since this uses expressions, performance is not yet optimal, and it is not directly documented yet, but it is the underlying technology behind the SQL ARRAY_AGG, STRING_AGG, and bitwise aggregator functions also added in this release.

https://github.com/apache/druid/pull/11104

# SQL query routing improvements

Druid 0.22 adds some new facilities to provide extension writers with enhanced control over how queries are routed between Druid Routers and Brokers. The first adds a new manual broker selection strategy to the Druid Router, which lets a query manually specify which Druid Brokers it should be sent to via the query context parameter brokerService, pointing at any broker pool defined in druid.router.tierToBrokerMap (this corresponds to the 'service name' of the broker set, druid.service).

The second new feature allows the Druid router to parse and examine SQL queries so that broker selection strategies can also function for SQL queries. This can be enabled by setting druid.router.sql.enable to true. This does not affect JDBC queries, which use a different mechanism to facilitate "sticky" connections to a single broker.

https://github.com/apache/druid/pull/11566
https://github.com/apache/druid/pull/11495

# Avatica protobuf JDBC Support

Druid now supports using Avatica Protobuf JDBC connections, such as for use with the Avatica Golang Driver, and has a separate endpoint from the JSON JDBC uri.

String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica-protobuf/;serialization=protobuf";

https://github.com/apache/druid/pull/10543

# Improved query error logging

Query exceptions have been changed from WARN level to ERROR level to include additional information in the logs to help troubleshoot query failures. Additionally, a new query context flag, enableQueryDebugging has been added that will include stack traces in these query error logs, to provide even more information without the need to enable logs at the DEBUG level.

https://github.com/apache/druid/pull/11519

# Streaming Ingestion

# Task autoscaling for Kafka and Kinesis streaming ingestion

Druid 0.22.0 now offers experimental support for dynamic Kafka and Kinesis task scaling. The included strategies are driven by periodic measurement of stream lag (which is based on message count for Kafka, and difference of age between the message iterator and the oldest message for Kinesis), and will adjust the number of tasks based on the amount of 'lag' and several configuration parameters. See Kafka and Kinesis documentation for complete information.

https://github.com/apache/druid/pull/10524
https://github.com/apache/druid/pull/10985

# Avro and Protobuf streaming InputFormat and Confluent Schema Registry Support

Druid streaming ingestion now has support for Avro and Protobuf in the updated InputFormat specification format, which replaces the deprecated firehose/parser specification used by legacy Druid streaming formats. Alongside this, comes support for obtaining schemas for these formats from Confluent Schema Registry. See data formats documentation for further information.

https://github.com/apache/druid/pull/11040
https://github.com/apache/druid/pull/11018
https://github.com/apache/druid/pull/10314
https://github.com/apache/druid/pull/10839

# Kafka ingestion support for specifying group.id

Druid Kafka streaming ingestion now optionally supports specifying group.id on the connections Druid tasks make to the Kafka brokers. This is useful for accessing clusters which require this be set as part of authorization, and can be specified in the consumerProperties section of the Kafka supervisor spec. See Kafka ingestion documentation for more details.

https://github.com/apache/druid/pull/11147

# Native Batch Ingestion

# Support for using deep storage for intermediary shuffle data

Druid native 'perfect rollup' 2-phase ingestion tasks now support using deep storage as a shuffle location, as an alternative to local disks on MiddleManagers or Indexers. To use this feature, set druid.processing.intermediaryData.storage.type to deepstore, which uses the configured deep storage type.

Note: with the "deepstore" type, data is stored in the shuffle-data directory under the configured deep storage path. Auto cleanup of this directory is not supported yet; you can set up cloud storage lifecycle rules to automatically clean up data at the shuffle-data prefix location.

https://github.com/apache/druid/pull/11507
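
A minimal sketch of the runtime property on the services that run ingestion tasks:

druid.processing.intermediaryData.storage.type=deepstore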

# Improved native batch ingestion task memory usage

Druid native batch ingestion has received a new configuration option, druid.indexer.task.batchProcessingMode, which introduces two new operating modes that should allow batch ingestion to operate with a smaller and more predictable heap memory footprint:

  • CLOSED_SEGMENTS_SINKS is the most aggressive mode and should have the smallest memory footprint. It works by eliminating in-memory tracking and mmap of intermediary segments produced during segment creation, but it isn't well tested at this point, so it is considered experimental.
  • CLOSED_SEGMENTS, the new default, eliminates mmap of intermediary segments but still tracks the entire set of segments in heap. It is relatively well tested at this point and considered stable.
  • OPEN_SEGMENTS uses the previous ingestion path, which is shared with streaming ingestion and performs an mmap on intermediary segments and builds a timeline so that these segments can be queried by realtime queries. This is not needed at all for batch, but OPEN_SEGMENTS mode can be selected if any problems occur with the two newer modes.

https://github.com/apache/druid/pull/11123
https://github.com/apache/druid/pull/11294
https://github.com/apache/druid/pull/11536

# Allow batch tasks to wait until segment handoff before completion

Druid native batch ingestion tasks can now optionally be configured not to terminate until the ingested segments are completely loaded by Historical servers. This can be useful when keeping an extra task slot occupied is a worthwhile trade-off for being able to use the task state as a measure of whether ingestion is complete and segments are available to query.

This can be enabled by adding awaitSegmentAvailabilityTimeoutMillis to the tuningConfig in the ingestion spec, which specifies the maximum amount of time that a task will wait for segments to be loaded before terminating. If not all segments become available by the time this timeout expires, the job will still succeed. However, in the ingestion report, segmentAvailabilityConfirmed will be false. This indicates that handoff was not successful and these newly indexed segments may not all be available for query. On the other hand, if all segments become available for query on the Historical services before the timeout expires, the value for that key in the report will be true.

This tuningConfig value is not supported for compaction tasks at this time. If you specify a value for awaitSegmentAvailabilityTimeoutMillis for a compaction task, the task fails with a message that the option is not supported.

https://github.com/apache/druid/pull/10676
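
A hedged sketch of the tuningConfig fragment, waiting up to ten minutes for handoff (other tuning fields omitted):

"tuningConfig": {
  "type": "index_parallel",
  "awaitSegmentAvailabilityTimeoutMillis": 600000
}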

# Data lifecycle management

# Support managing segment and query granularity for auto-compaction

Druid manual and automatic compaction can now be configured to change segment granularity, and manual compaction can also change query granularity. Additionally, compaction preserves segment granularity by default. This allows operators to more easily perform operations like changing older data to larger segment and query granularities in exchange for decreased data size. See the compaction docs for details.

https://github.com/apache/druid/pull/10843
https://github.com/apache/druid/pull/10856
https://github.com/apache/druid/pull/10900
https://github.com/apache/druid/pull/10912
https://github.com/apache/druid/pull/11009

# Allow compaction to temporarily skip locked intervals

Druid auto-compaction will now by default temporarily skip locked intervals instead of waiting for the lock to become free, which should improve the rate at which datasources can be compacted. This is controlled by druid.coordinator.compaction.skipLockedIntervals, and can be set to false if this behavior is not desired for some reason.

https://github.com/apache/druid/pull/11190

# Support for additional automatic metadata cleanup

You can configure automated cleanup to remove records from the metadata store after you delete some entities from Druid:

  • segments records
  • audit records
  • supervisor records
  • rule records
  • compaction configuration records
  • datasource records created by supervisors

This feature helps maintain performance when you have a high datasource churn rate, meaning you frequently create and delete many short-lived datasources or other related entities. You can limit the length of time to retain unused metadata records to prevent your metadata store from filling up. See automatic cleanup documentation for more information.

https://github.com/apache/druid/pull/11078
https://github.com/apache/druid/pull/11084
https://github.com/apache/druid/pull/11164
https://github.com/apache/druid/pull/11200
https://github.com/apache/druid/pull/11227
https://github.com/apache/druid/pull/11232
https://github.com/apache/druid/pull/11245

# Dropping data

A new setting, dropExisting, has been added to the ioConfig of Druid native batch ingestion tasks and compaction. If it is set to true (and appendToExisting is false), the ingestion task transactionally marks all existing segments in the interval as unused and replaces them with the new set of segments. This can be useful in compaction use cases where normal overshadowing does not completely replace a set of segments in an interval, such as when changing segment granularity to a smaller size and some of the smaller granularity buckets would have no data, leaving the original segments only partially overshadowed.

Note that this functionality is still experimental, and can result in temporary data unavailability for data within the compacted interval. Changing this config does not cause intervals to be compacted again.

Similarly, markAsUnused has been added as an option to the Druid kill task, which will mark any segments in the supplied interval as 'unused' prior to deleting all of the unused segments. This is useful for allowing the mark unused -> delete sequence to happen with a single API call for the caller, as well as allowing the unmark action to occur under a task interval lock.

https://github.com/apache/druid/pull/11070
https://github.com/apache/druid/pull/11025
https://github.com/apache/druid/pull/11501

# Coordinator

# Control over Coordinator segment load timeout behavior with Apache ZooKeeper-based segment management

A new Druid Coordinator dynamic configuration option, replicateAfterLoadTimeout, controls the behavior whenever a segment load action times out when using ZooKeeper-based segment management. When set to true, the Coordinator attempts to replicate the segment that failed to load to a different Historical server. This helps improve segment availability if there are a few slow Historical servers in the cluster. However, the slow Historical may still load the segment later, and the Coordinator may need to issue drop requests if the segment becomes over-replicated.

https://github.com/apache/druid/pull/10213

# Faster coordinator segment balancing

Another new Coordinator dynamic configuration option, useBatchedSegmentSampler, when set to true can potentially provide a large increase in the speed at which the Coordinator processes the segment balancing phase. This should be particularly notable at very large cluster sizes with many segments, but it is disabled by default to err on the side of caution.

https://github.com/apache/druid/pull/11257

# Improved loadstatus API to optionally compute under-replication based on cluster size

The Druid Coordinator load status API now supports a new optional URL query parameter, computeUsingClusterView. When specified, the Coordinator computes under-replication for segments based on the number of servers available within the cluster that the segment can be replicated to, instead of the replication count configured in the load rules. For example, if the load rules specify 2 replicas but there is only 1 server which can hold segments, this API does not report the segments as under-replicated because they are as replicated as is possible for the given cluster size.

https://github.com/apache/druid/pull/11056

# Optional limits on the number of non-primary replicants loaded per coordination cycle

A new Coordinator dynamic configuration, maxNonPrimaryReplicantsToLoad, with a default value of Integer.MAX_VALUE, lets operators define a hard upper limit on the number of non-primary replicants that will be loaded in a single Coordinator execution cycle. The default value mimics the behavior that exists today.

Example usage: if you set this configuration to 1000, the Coordinator loads a maximum of 1000 non-primary replicants in each run cycle. So if you ingest 2000 segments with a replication factor of 2, the Coordinator loads 2000 primary replicants and 1000 non-primary replicants on the first execution, and the remaining 1000 non-primary replicants on the next execution.

https://github.com/apache/druid/pull/11135

# Web Console

# General improvements

The Druid web console 'Services' tab now displays which Coordinator and Overlord servers are serving as the leader, shown in the 'Detail' column of the table. This should help operators more quickly determine which node is the leader and thus which likely has the interesting logs to examine.

The web-console now also supports using ASCII control characters, by entering them in the form of \uNNNN where NNNN is the unicode code point for the character.

https://github.com/apache/druid/pull/10951
https://github.com/apache/druid/pull/10795

# Query view

The query view of the web-console has received a number of 'quality of life' improvements in Druid 0.22.0. First, the query view now provides an indicator of how long a query took to execute:
image

Also, queries no longer auto-run when opening a fresh page, to prevent stale queries from being executed when opening a browser. In addition, the page resets to 0 if the query result changes, and the query limit automatically increases (re-running the query) when the last page is loaded.

Inline documentation now also includes Druid type information:
image
It also provides better suggestions whenever a query error occurs:
image

Finally, the web console query view now supports the hot-key combination Command + Enter on macOS and Ctrl + Enter on Windows and Linux.

https://github.com/apache/druid/pull/11158
https://github.com/apache/druid/pull/11128
https://github.com/apache/druid/pull/11203
https://github.com/apache/druid/pull/11365

# Data management

The web-console segments view timeline now has the ability to pick any time interval, instead of just the previous year!
image

The web-console segments view has also been improved to hopefully be more performant when interacting with the sys.segments table, including providing the ability to 'force' the web-console to only use the native JSON API methods to display segment information:
image

The lookup view has also been improved, so that now 'poll period' and 'summary' are available as columns in the list view:
image

We have also added validation for poll period to prevent user error, and improved error reporting:
image

https://github.com/apache/druid/pull/11359
https://github.com/apache/druid/pull/10909
https://github.com/apache/druid/pull/11620

# Metrics

# Prometheus metric emitter

A new "contrib" extension has been added, prometheus-emitter, which allows Druid metrics to be sent directly to a Prometheus server. See the extension documentation page for complete details: https://druid.apache.org/docs/0.22.0/development/extensions-contrib/prometheus.html

https://github.com/apache/druid/pull/10412
https://github.com/apache/druid/pull/11618

# ingest/notices/queueSize

ingest/notices/queueSize is a new metric added to provide monitoring for supervisor ingestion task control message processing queue sizes, to help in determining if a supervisor might be overloaded by a large volume of these notices. This metric is emitted by default for every running supervisor.

https://github.com/apache/druid/pull/11417

# query/segments/count

query/segments/count is a new metric that tracks the number of segments participating in a query. This metric is not enabled by default, so it must be enabled via a custom extension that overrides which QueryMetrics are emitted, similar to other query metrics that are not emitted by default. (We know this is definitely not friendly, and hope someday in the future to make this easier, sorry.)

https://github.com/apache/druid/pull/11394

# Cloud integrations

# AWS Web Identity / IRSA Support

Druid 0.22.0 adds AWS Web Identity Token Support, which allows for the use of IAM roles for service accounts on Kubernetes, if configured as the AWS credentials provider.

https://github.com/apache/druid/pull/10541

# S3 ingestion support for assuming a role

Druid native batch ingestion from S3 input sources can now use the AssumeRole capability in AWS for cross-account file access. This can be utilized by setting assumeRoleArn and assumeRoleExternalId on the S3 input source specification in a batch ingestion task. See AWS documentation and native batch documentation for more details.

https://github.com/apache/druid/pull/10995

# Google Cloud Storage support for URI lookups

Druid lookups now support loading via Google Cloud Storage, similar to the existing functionality available with S3. This requires that druid-google-extensions be loaded in addition to the lookup extensions, but beyond that it is as simple as using a Google Cloud Storage URI.

https://github.com/apache/druid/pull/11026

# Other changes

# Extracting Avro union fields by type

Avro ingestion using Druid batch or streaming ingestion now supports an alternative mechanism of extracting data for Avro Union types. This new option, extractUnionsByType only works when utilizing a flattenSpec to extract nested data from union types, and will cause the extracted data to be available with the type as part of the flatten path. For example, given a multi-typed union column someMultiMemberUnion, with this option enabled a long value would be extracted by $.someMultiMemberUnion.long instead of $.someMultiMemberUnion, and would only extract long values from the union. See Avro documentation for complete information.

https://github.com/apache/druid/pull/10505

# Support using MariaDb connector with MySQL extensions

The Druid MySQL extension now supports using the MariaDB connector library as an alternative to the MySQL connector. To do so, set druid.metadata.mysql.driver.driverClassName to org.mariadb.jdbc.Driver, as shown below. This includes full support for the JDBC URI parameter whitelists used by JDBC lookups and SQL-based ingestion.

https://github.com/apache/druid/pull/11402
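
A minimal sketch of the metadata store property for this setup (connect URI and credentials omitted):

druid.metadata.mysql.driver.driverClassName=org.mariadb.jdbc.Driver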

# Add Environment Variable DynamicConfigProvider

Druid now provides a DynamicConfigProvider implementation that is backed by environment variables. For example:

druid.some.config.dynamicConfigProvider={"type": "environment","variables":{"secret1": "SECRET1_VAR","secret2": "SECRET2_VAR"}}

See dynamic config provider documentation for further information.

https://github.com/apache/druid/pull/11377

# Add DynamicConfigProvider for Schema Registry

Ingestion formats which support Confluent Schema Registry now support supplying these parameters via a DynamicConfigProvider which is the newer alternative to PasswordProvider. This will allow ingestion tasks to use the config provider to supply this information instead of directly in the JSON specifications, allowing the potential for more secure manners of supplying credentials and other sensitive configuration information. See data format and dynamic config provider documentation for more details.

https://github.com/apache/druid/pull/11362

# Security fixes

# Control of allowed protocols for HTTP and HDFS input sources

Druid 0.22.0 adds new facilities to control the set of allowed protocols used by HTTP and HDFS input sources in batch ingestion. druid.ingestion.hdfs.allowedProtocols is configured by default to accept hdfs as the protocol, and druid.ingestion.http.allowedProtocols by default allows http and https. This might cause issues with existing deployments since it is more restrictive than the default behavior in older versions of Druid, but overall it gives operators more flexibility in securing these input sources.

https://github.com/apache/druid/pull/10830
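
A minimal sketch of the new properties with their default values in the common runtime properties:

druid.ingestion.http.allowedProtocols=["http", "https"]
druid.ingestion.hdfs.allowedProtocols=["hdfs"]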

# Fix expiration logic for LDAP internal credential cache

This version of Druid also fixes a flaw in druid-basic-security extension when using LDAP, where the credentials cache would not correctly expire, potentially holding expired credential information after it should have expired, until another trigger was hit or the service was restarted. Druid clusters using LDAP for authorization should update to 0.22.0 whenever possible to fix this issue.

https://github.com/apache/druid/pull/11395

# Performance improvements

# General performance

# JOIN query enhancements

# SQL

# Vectorized query engine

# Bug fixes

Druid 0.22.0 contains over 80 bug fixes, you can see the complete list here.

# Upgrading to 0.22.0

Consider the following changes and updates when upgrading from Druid 0.21.x to 0.22.0. If you're updating from an earlier version than 0.21.0, see the release notes of the relevant intermediate versions.

# Dropped support for Apache ZooKeeper 3.4

Following 0.21, which officially deprecated support for ZooKeeper 3.4 (end-of-life for a while now), support for ZooKeeper 3.4 is removed in 0.22.0. Be sure to upgrade your ZooKeeper cluster prior to upgrading your Druid cluster to 0.22.0.

https://github.com/apache/druid/issues/10780
https://github.com/apache/druid/pull/11073

# Native batch ingestion segment allocation fix

Druid 0.22.0 includes an important bug-fix in native batch indexing where transient failures of indexing sub-tasks can result in non-contiguous partitions in the result segments, which will never become queryable due to logic which checks for the 'complete' set. This issue has been resolved in the latest version of Druid, but required a change in the protocol which batch tasks use to allocate segments, and this change can cause issues during rolling downgrades if you decide to roll back from Druid 0.22.0 to an earlier version.

To avoid task failure during a rolling-downgrade, set

druid.indexer.task.default.context={ "useLineageBasedSegmentAllocation" : false }

in the Overlord runtime properties, and wait for all tasks that have useLineageBasedSegmentAllocation set to true to complete before initiating the downgrade. After these tasks have all completed, the downgrade shouldn't have any further issues, and the setting can be removed from the Overlord configuration (recommended, as you will want this setting enabled if you are running Druid 0.22.0 or newer).

https://github.com/apache/druid/pull/11189

# SQL timeseries no longer skip empty buckets with all granularity

Prior to Druid 0.22, a SQL GROUP BY query using a single universal grouping key (e.g., only aggregators), such as SELECT COUNT(*), SUM(x) FROM y WHERE z = 'someval', would produce an empty result set instead of the [0, null] that might be expected when the query matches no rows. This was because such a query would plan into a timeseries query with 'ALL' granularity and skipEmptyBuckets set to true in the query context. This latter option caused such a query to return no results, as there are no buckets with values to aggregate and so they are skipped, yielding an empty result set instead of a 'nil' result set. This behavior has been changed to align with other SQL implementations, but the previous behavior can be obtained by explicitly setting skipEmptyBuckets in the query context.

https://github.com/apache/druid/pull/11188

# Druid reingestion incompatible changes

Batch tasks using a 'Druid' input source to reingest segment data will no longer accept the 'dimensions' and 'metrics' sections of their task spec, and now will internally use a new columns filter to specify which columns from the original segment should be retained. Additionally, timestampSpec is no longer ignored, allowing the __time column to be modified or replaced with a different column. These changes additionally fix a bug where transformed columns would be ignored and unavailable on the new segments.

https://github.com/apache/druid/pull/10267

# Druid web-console no longer supports IE11 and other older browsers

Some things might still work, but it is no longer officially supported so that newer Javascript features can be used to develop the web-console.
https://github.com/apache/druid/pull/11357

# Changed default maximum segment loading queue size

The Druid Coordinator maxSegmentsInNodeLoadingQueue dynamic configuration has been changed from unlimited (0) to 100. This should make the Coordinator behave in a much more relaxed manner during periods of cluster volatility, such as a rolling upgrade, but it caps the total number of segments that will be loaded in any given Coordinator cycle to 100 per server, which can slow down the speed at which a completely stopped cluster is started and loaded from deep storage.

https://github.com/apache/druid/pull/11540

# Developer notices

# CacheKeyBuilder moved from druid-processing to druid-core

The CacheKeyBuilder class, which is annotated with @PublicAPI has been moved from druid-processing to druid-core so that expressions can extend the Cacheable interface to allow expressions to generate cache keys which depend on some external state, such as lookup version.

https://github.com/apache/druid/pull/11358

# Query engine now uses new QueryProcessingPool instead of ExecutorService directly

This impacts a handful of method signatures in the query processing engine, such as QueryRunnerFactory and QuerySegmentWalker to allow extensions to hook into various parts of the query processing pool and alternative processing pool scheduling strategies in the future.

https://github.com/apache/druid/pull/11382

# SegmentLoader is now extensible and customizable

This allows extensions to provide alternative segment loading implementations to customize how Druid segments are loaded from deep storage and made available to the query engine. This should be considered an unstable api, and is annotated as such in the code.

https://github.com/apache/druid/pull/11398

# Known issues

For a full list of open issues, please see https://github.com/apache/druid/labels/Bug.

# Credits

Thanks to everyone who contributed to this release!

@2bethere
@a2l007
@abhishekagarwal87
@AKarbas
@AlexanderSaydakov
@ArvinZheng
@asdf2014
@astrohsy
@bananaaggle
@benkrug
@bergmt2000
@camteasdale143
@capistrant
@Caroline1000
@chenyuzhi459
@clintropolis
@cryptoe
@DaegiKim
@dependabot[bot]
@dkoepke
@dongjoon-hyun
@egor-ryashin
@fhan688
@FrankChen021
@gianm
@harinirajendran
@himadrisingh
@himanshug
@hqx871
@imply-jbalik
@imply-jhan
@isandeep41
@jasonk000
@jbampton
@jerryleooo
@jgoz
@jihoonson
@jon-wei
@josephglanville
@jp707049
@junegunn
@kaijianding
@kazuhirokomoda
@kfaraz
@lkm
@loquisgon
@MakDon
@maytasm
@misqos
@mprashanthsagar
@mSitkovets
@paul-rogers
@petermarshallio
@pjain1
@rohangarg
@samarthjain
@shankeerthan-kasilingam
@spinatelli
@sthetland
@suneet-s
@techdocsmith
@Tiaaa
@tushar-1728
@viatcheslavmogilevsky
@viongpanzi
@vogievetsky
@vtlim
@wjhypo
@wx930910
@xvrl
@yuanlihan
@zachjsh
@zhangyue19921010

druid - druid-0.21.1

Published by clintropolis over 3 years ago

Apache Druid 0.21.1 is a bug fix release that fixes a few regressions in the 0.21 release. The first is an issue with the published Docker image, which causes containers to fail to start due to volume permission issues, described in #11166 and fixed in #11167. This release also fixes an issue caused by a bug in the upgraded Jetty version shipped in 0.21, described in #11206 and fixed in #11207. Finally, a web console regression related to field validation has been fixed in #11228.

# Bug fixes

https://github.com/apache/druid/pull/11167 fix docker volume permissions
https://github.com/apache/druid/pull/11207 Upgrade jetty version
https://github.com/apache/druid/pull/11228 Web console: Fix required field treatment
https://github.com/apache/druid/pull/11299 Fix permission problems in docker

# Credits

Thanks to everyone who contributed to this release!

@a2l007
@clintropolis
@FrankChen021
@maytasm
@vogievetsky

druid - druid-0.21.0

Published by jihoonson over 3 years ago

Apache Druid 0.21.0 contains around 120 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 36 contributors. Refer to the complete list of changes and everything tagged to the milestone for further details.

# New features

# Operation

# Service discovery and leader election based on Kubernetes

The new Kubernetes extension supports service discovery and leader election based on Kubernetes. This extension works in conjunction with the HTTP-based server view (druid.serverview.type=http) and task management (druid.indexer.runner.type=httpRemote) to allow you to run a Druid cluster with zero ZooKeeper dependencies. This extension is still experimental. See Kubernetes extension for more details.

https://github.com/apache/druid/pull/10544
https://github.com/apache/druid/pull/9507
https://github.com/apache/druid/pull/10537

# New dynamic coordinator configuration to limit the number of segments when finding a candidate segment for segment balancing

You can set the percentOfSegmentsToConsiderPerMove to limit the number of segments considered when picking a candidate segment to move. The candidates are searched up to maxSegmentsToMove * 2 times. This new configuration prevents Druid from iterating through all available segments to speed up the segment balancing process, especially if you have lots of available segments in your cluster. See Coordinator dynamic configuration for more details.

https://github.com/apache/druid/pull/10284

# status and selfDiscovered endpoints for Indexers

The Indexer now supports status and selfDiscovered endpoints. See Processor information APIs for details.

https://github.com/apache/druid/pull/10679

# Querying

# New grouping aggregator function

You can use the new grouping aggregator SQL function with GROUPING SETS or CUBE to indicate which grouping dimensions are included in the current grouping set. See Aggregation functions for more details.

https://github.com/apache/druid/pull/10518

# Improved missing argument handling in expressions and functions

Expression processing can now be vectorized when inputs are missing, for example when a column does not exist. When an argument is missing in an expression, Druid can now infer the proper result type based on the non-null arguments. For instance, for longColumn + nonExistentColumn, nonExistentColumn is treated as (long) 0 instead of (double) 0.0. Finally, in default null handling mode, math functions can produce proper output by treating missing arguments as zeros.

https://github.com/apache/druid/pull/10499

# Allow zero period for TIMESTAMPADD

The TIMESTAMPADD function now allows a zero period. This functionality is required by some BI tools such as Tableau.

https://github.com/apache/druid/pull/10550

# Ingestion

# Native parallel ingestion no longer requires explicit intervals

Parallel task no longer requires you to set explicit intervals in granularitySpec. If intervals are missing, the parallel task executes an extra step for input sampling which collects the intervals to index.

https://github.com/apache/druid/pull/10592
https://github.com/apache/druid/pull/10647

# Old Kafka version support

Druid now supports Apache Kafka versions older than 0.11. To read from an old version of Kafka, set isolation.level to read_uncommitted in consumerProperties, as shown below. Only 0.10.2.1 has been tested up until this release. See Kafka supervisor configurations for details.

https://github.com/apache/druid/pull/10551
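
A hedged sketch of the relevant consumerProperties fragment in the Kafka supervisor spec (the broker address is a placeholder):

"consumerProperties": {
  "bootstrap.servers": "kafka01.example.com:9092",
  "isolation.level": "read_uncommitted"
}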

# Multi-phase segment merge for native batch ingestion

A new tuningConfig, maxColumnsToMerge, controls how many segments can be merged at the same time in the task. This configuration can be useful to avoid high memory pressure during the merge. See tuningConfig for native batch ingestion for more details.

https://github.com/apache/druid/pull/10689

# Native re-ingestion is less memory intensive

Parallel tasks now sort segments by ID before assigning them to subtasks. This sorting minimizes the number of time chunks for each subtask to handle. As a result, each subtask is expected to use less memory, especially when a single Parallel task is issued to re-ingest segments covering a long time period.

https://github.com/apache/druid/pull/10646

# Web console

# Updated and improved web console styles

The new web console styles make better use of the Druid brand colors and standardize paddings and margins throughout. The icon and background colors are now derived from the Druid logo.

image

https://github.com/apache/druid/pull/10515

# Partitioning information is available in the web console

The web console now shows datasource partitioning information on the new Segment granularity and Partitioning columns.

Segment granularity column in the Datasources tab

image

Partitioning column in the Segments tab

image

https://github.com/apache/druid/pull/10533

# The column order in the Schema table matches the dimensionsSpec

The Schema table now reflects the dimension ordering in the dimensionsSpec.

image

https://github.com/apache/druid/pull/10588

# Metrics

# Coordinator duty runtime metrics

The Coordinator performs several 'duty' tasks, for example segment balancing and loading new segments. There are now two new metrics to help you analyze how fast the Coordinator executes these duties.

  • coordinator/time: the time for an individual duty to execute
  • coordinator/global/time: the time for the whole duties runnable to execute

https://github.com/apache/druid/pull/10603

# Query timeout metric

A new metric provides the number of timed-out queries. Previously, timed-out queries were treated as interrupted and included in query/interrupted/count (see Improved HTTP status codes for query errors for more details).

query/timeout/count: the number of timed out queries during the emission period

https://github.com/apache/druid/pull/10567

# Shuffle metrics for batch ingestion

Two new metrics provide shuffle statistics for MiddleManagers and Indexers. These metrics have the supervisorTaskId as their dimension.

  • ingest/shuffle/bytes: number of bytes shuffled per emission period
  • ingest/shuffle/requests: number of shuffle requests per emission period

To enable the shuffle metrics, add org.apache.druid.indexing.worker.shuffle.ShuffleMonitor to druid.monitoring.monitors. See Shuffle metrics for more details.
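
For example, a runtime.properties sketch (keep any monitors you already use in the list):

druid.monitoring.monitors=["org.apache.druid.indexing.worker.shuffle.ShuffleMonitor"]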

https://github.com/apache/druid/pull/10359

# New clock-drift safe metrics monitor scheduler

The default metrics monitor scheduler is implemented on top of ScheduledThreadPoolExecutor, which is prone to unbounded clock drift. A new monitor scheduler, ClockDriftSafeMonitorScheduler, overcomes this limitation. To use the new scheduler, set druid.monitoring.schedulerClassName to org.apache.druid.java.util.metrics.ClockDriftSafeMonitorScheduler in the runtime.properties file.
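
For example, in runtime.properties:

druid.monitoring.schedulerClassName=org.apache.druid.java.util.metrics.ClockDriftSafeMonitorScheduler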

https://github.com/apache/druid/pull/10448
https://github.com/apache/druid/pull/10732

# Others

# New extension for a password provider based on AWS RDS token

A new PasswordProvider type allows access to AWS RDS DB instances using temporary AWS tokens. This extension can be useful when an RDS is used as Druid's metadata store. See AWS RDS extension for more details.

https://github.com/apache/druid/pull/9518

# The sys.servers table shows leaders

A new long-typed column is_leader in the sys.servers table indicates whether or not the server is the leader.
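
For example, to list which servers currently hold leadership (a minimal sketch):

SELECT server, server_type, is_leader
FROM sys.servers
WHERE is_leader = 1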

https://github.com/apache/druid/pull/10680

# druid-influxdb-emitter extension supports the HTTPS protocol

See Influxdb emitter extension for new configurations.

https://github.com/apache/druid/pull/9938

# Docker

# Small docker image

The Docker image size has been reduced by half by eliminating unnecessary duplication.

https://github.com/apache/druid/pull/10506

# Development

# Extensible Kafka consumer properties via a new DynamicConfigProvider

A new class, DynamicConfigProvider, enables fetching consumer properties at runtime. For instance, you can use DynamicConfigProvider to fetch bootstrap.servers from a location such as a local environment variable if it is not static. Currently, only a map-based config provider is supported by default. See DynamicConfigProvider for how to implement a custom config provider.

https://github.com/apache/druid/pull/10309

# Bug fixes

Druid 0.21.0 contains 30 bug fixes; you can see the complete list here.

# Post-aggregator computation with subtotals

Before 0.21.0, queries failed with an error when you used post-aggregators with subtotals. This bug is now fixed, and you can use post-aggregators with subtotals.

https://github.com/apache/druid/pull/10653

# Indexers announce themselves as segment servers

In 0.19.0 and 0.20.0, Indexers could not process queries against streaming data because they did not announce themselves as segment servers. In 0.21.0, Indexers announce themselves properly.

https://github.com/apache/druid/pull/10631

# Validity check for segment files in historicals

Historicals now perform a validity check after they download segment files and automatically re-download them if the files are corrupted.

https://github.com/apache/druid/pull/10650

# StorageLocationSelectorStrategy injection failure is fixed

An injection failure that occurred while reading the StorageLocationSelectorStrategy configuration has been fixed.

https://github.com/apache/druid/pull/10363

# Upgrading to 0.21.0

Consider the following changes and updates when upgrading from Druid 0.20.0 to 0.21.0. If you're updating from an earlier version than 0.20.0, see the release notes of the relevant intermediate versions.

# Improved HTTP status codes for query errors

Before this release, Druid returned "internal error (500)" for most query errors. Now Druid returns different error codes based on their cause. The following table lists the errors whose status codes have changed:

| Exception | Description | Old code | New code |
| --- | --- | --- | --- |
| SqlParseException and ValidationException from Calcite | Query planning failed | 500 | 400 |
| QueryTimeoutException | Query execution didn't finish in timeout | 500 | 504 |
| ResourceLimitExceededException | Query asked more resources than configured threshold | 500 | 400 |
| InsufficientResourceException | Query failed to schedule because of lack of merge buffers available at the time when it was submitted | 500 | 429, merged to QueryCapacityExceededException |
| QueryUnsupportedException | Unsupported functionality | 400 | 501 |

There is also a new query metric for query timeout errors. See Query timeout metric for more details.

https://github.com/apache/druid/pull/10464
https://github.com/apache/druid/pull/10746

# Query interrupted metric

query/interrupted/count no longer counts the queries that timed out. These queries are counted by query/timeout/count.

# context dimension in query metrics

context is now a default dimension emitted for all query metrics. context is a JSON-formatted string containing the query context for the query that the emitted metric refers to. The addition of a dimension that was not previously emitted alters some metrics emitted by Druid. You should plan to handle this new context dimension in your metrics pipeline. Since the dimension is a JSON-formatted string, a common solution is to parse the dimension and either flatten it or extract the bits you want and discard the full JSON-formatted string blob.

https://github.com/apache/druid/pull/10578

# Deprecated support for Apache ZooKeeper 3.4

As ZooKeeper 3.4 has been end-of-life for a while, support for ZooKeeper 3.4 is deprecated in 0.21.0 and will be removed in the near future.

https://github.com/apache/druid/issues/10780

# Consistent serialization format and column naming convention for the sys.segments table

All columns in the sys.segments table are now serialized in the JSON format to make them consistent with other system tables. Column names now use the same "snake case" convention.
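
For example, a quick query against the renamed columns (a sketch; the column list is abbreviated):

SELECT segment_id, datasource, num_rows, is_published
FROM sys.segments
LIMIT 10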

https://github.com/apache/druid/pull/10481

# Known issues

# Known security vulnerability in the Thrift library

The Thrift extension can be useful for ingesting files in the Thrift format into Druid. However, there is a known security vulnerability in the version of the Thrift library that Druid uses. The vulnerability can be exploited by ingesting maliciously crafted Thrift files when you use Indexers. We recommend granting the DATASOURCE WRITE permission only to trusted users.

# Permission issues in running the docker-based Druid cluster

If you run the Druid Docker cluster for the first time on your machine, using the 0.21.0 image can create internal directories owned by the root account. As a result, Druid services can fail due to a lack of permissions. This issue is tracked in https://github.com/apache/druid/issues/11166.

If you are using Docker Compose, you can use the commands below to work around this issue. These commands first create the internal directories using an older image and then start services using the 0.21.0 image.

$ cd ${PREV_SRC_DIR}
$ docker-compose -f distribution/docker/docker-compose.yml create
$ cd ${0.21.0_SRC_DIR}
$ docker-compose -f distribution/docker/docker-compose.yml up

If you are not using docker compose, you can directly pass the volume parameter for /opt/druid/var when you start services using the 0.21.0 image. For example, you can run the command below to start the coordinator service.

$ docker run -v /path/to/host/dir:/opt/druid/var apache/druid:0.21.0 coordinator

For a full list of open issues, please see https://github.com/apache/druid/labels/Bug.

# Credits

Thanks to everyone who contributed to this release!

@a2l007
@abhishekagarwal87
@asdf2014
@AshishKapoor
@awelsh93
@ayushkul2910
@bananaaggle
@capistrant
@ccaominh
@clintropolis
@cloventt
@FrankChen021
@gianm
@harinirajendran
@himanshug
@jihoonson
@jon-wei
@kroeders
@liran-funaro
@martin-g
@maytasm
@mghosh4
@michaelschiff
@nishantmonu51
@pcarrier
@QingdongZeng3
@sthetland
@suneet-s
@tdt17
@techdocsmith
@valdemar-giosg
@viatcheslavmogilevsky
@viongpanzi
@vogievetsky
@xvrl
@zhangyue19921010

druid - druid-0.20.2

Published by jihoonson over 3 years ago

Apache Druid 0.20.2 introduces new configurations to address CVE-2021-26919: Authenticated users can execute arbitrary code from malicious MySQL database systems. Users are recommended to enable the new configurations below to mitigate vulnerable JDBC connection properties. These configurations are applied to all JDBC connections for ingestion and lookups, but not to the metadata store. See security configurations for more details; an example configuration sketch follows the list below.

  • druid.access.jdbc.enforceAllowedProperties: When true, Druid applies druid.access.jdbc.allowedProperties to JDBC connections starting with jdbc:postgresql: or jdbc:mysql:. When false, Druid allows any kind of JDBC connection without JDBC property validation. This config defaults to false so as not to break rolling upgrades. It is now deprecated and may be removed in a future release, in which case the allow list will always be enforced.
  • druid.access.jdbc.allowedProperties: Defines a list of allowed JDBC properties. Druid always enforces the list for all JDBC connections starting with jdbc:postgresql: or jdbc:mysql: if druid.access.jdbc.enforceAllowedProperties is set to true. This option is tested against MySQL connector 5.1.48 and PostgreSQL connector 42.2.14. Other connector versions might not work.
  • druid.access.jdbc.allowUnknownJdbcUrlFormat: When false, Druid only accepts JDBC connections starting with jdbc:postgresql: or jdbc:mysql:. When true, Druid allows JDBC connections to any kind of database, but only enforces druid.access.jdbc.allowedProperties for PostgreSQL and MySQL.
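
For example, a common.runtime.properties sketch that opts in to the validation (the allowed property list here is illustrative, not necessarily the shipped default):

druid.access.jdbc.enforceAllowedProperties=true
druid.access.jdbc.allowedProperties=["useSSL", "requireSSL", "ssl", "sslmode"]
druid.access.jdbc.allowUnknownJdbcUrlFormat=false
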
druid - druid-0.20.1

Published by jihoonson over 3 years ago

Apache Druid 0.20.1 is a bug fix release that addresses CVE-2021-25646: Authenticated users can override system configurations in their requests which allows them to execute arbitrary code.

# Known issues

# Incorrect Druid version in docker-compose.yml

The Druid version is specified as 0.20.0 in the docker-compose.yml file. We recommend updating the version to 0.20.1 before you run a Druid cluster using Docker Compose.

druid - druid-0.20.0

Published by jon-wei about 4 years ago

Apache Druid 0.20.0 contains around 160 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 36 contributors. Refer to the complete list of changes and everything tagged to the milestone for further details.

# New Features

# Ingestion

# Combining InputSource

A new combining InputSource has been added, allowing the user to combine multiple input sources during ingestion. Please see https://druid.apache.org/docs/0.20.0/ingestion/native-batch.html#combining-input-source for more details.

https://github.com/apache/druid/pull/10387

# Automatically determine numShards for parallel ingestion hash partitioning

When hash partitioning is used in parallel batch ingestion, it is no longer necessary to specify numShards in the partition spec. Druid can now automatically determine a number of shards by scanning the data in a new ingestion phase that determines the cardinalities of the partitioning key.

https://github.com/apache/druid/pull/10419

# Subtask file count limits for parallel batch ingestion

The size-based splitHintSpec now supports a new maxNumFiles parameter, which limits how many files can be assigned to individual subtasks in parallel batch ingestion.

The segment-based splitHintSpec used for reingesting data from existing Druid segments also has a new maxNumSegments parameter which functions similarly.

Please see https://druid.apache.org/docs/0.20.0/ingestion/native-batch.html#split-hint-spec for more details.

https://github.com/apache/druid/pull/10243

# Task slot usage metrics

New task slot usage metrics have been added. Please see the entries for the taskSlot metrics at https://druid.apache.org/docs/0.20.0/operations/metrics.html#indexing-service for more details.

https://github.com/apache/druid/pull/10379

# Compaction

# Support for all partitioning schemes for auto-compaction

A partitioning spec can now be defined for auto-compaction, allowing users to repartition their data at compaction time. Please see the documentation for the new partitionsSpec property in the compaction tuningConfig for more details:

https://druid.apache.org/docs/0.20.0/configuration/index.html#compaction-tuningconfig

https://github.com/apache/druid/pull/10307

# Auto-compaction status API

A new coordinator API which shows the status of auto-compaction for a datasource has been added. The new API shows whether auto-compaction is enabled for a datasource, and a summary of how far compaction has progressed.

The web console has also been updated to show this information:

https://user-images.githubusercontent.com/177816/94326243-9d07e780-ff57-11ea-9f80-256fa08580f0.png

Please see https://druid.apache.org/docs/latest/operations/api-reference.html#compaction-status for details on the new API, and https://druid.apache.org/docs/latest/operations/metrics.html#coordination for information on new related compaction metrics.

https://github.com/apache/druid/pull/10371
https://github.com/apache/druid/pull/10438

# Querying

# Query segment pruning with hash partitioning

Druid now supports query-time segment pruning (excluding certain segments as read candidates for a query) for hash partitioned segments. This optimization applies when all of the partitionDimensions specified in the hash partition spec during ingestion time are present in the filter set of a query, and the filters in the query filter on discrete values of the partitionDimensions (e.g., selector filters). Segment pruning with hash partitioning is not supported with non-discrete filters such as bound filters.

For existing users with existing segments, you will need to reingest those segments to take advantage of this new feature, as the segment pruning requires a partitionFunction to be stored together with the segments, which does not exist in segments created by older versions of Druid. It is not necessary to specify the partitionFunction explicitly, as the default is the same partition function that was used in prior versions of Druid.

Note that segments created with a default partitionDimensions value (partition by all dimensions + the time column) cannot be pruned in this manner; the segments need to be created with an explicit partitionDimensions.

https://github.com/apache/druid/pull/9810
https://github.com/apache/druid/pull/10288

# Vectorization

To enable vectorization features, please set the druid.query.default.context.vectorizeVirtualColumns property to true or set the vectorize property in the query context. Please see https://druid.apache.org/docs/0.20.0/querying/query-context.html#vectorization-parameters for more information.

# Vectorization support for expression virtual columns

Expression virtual columns now have vectorization support (depending on the expressions being used), which can result in a 3-5x performance improvement in some cases.

Please see https://druid.apache.org/docs/0.20.0/misc/math-expr.html#vectorization-support for details on the specific expressions that support vectorization.

https://github.com/apache/druid/pull/10388
https://github.com/apache/druid/pull/10401
https://github.com/apache/druid/pull/10432

# More vectorization support for aggregators

Vectorization support has been added for several aggregation types: numeric min/max aggregators, variance aggregators, ANY aggregators, and aggregators from the druid-histogram extension.

https://github.com/apache/druid/pull/10260 - numeric min/max
https://github.com/apache/druid/pull/10304 - histogram
https://github.com/apache/druid/pull/10338 - ANY
https://github.com/apache/druid/pull/10390 - variance

We've observed about a 1.3x to 1.8x performance improvement in some cases with vectorization enabled for the min, max, and ANY aggregators, and about 1.04x to 1.07x with the histogram aggregator.

# offset parameter for GroupBy and Scan queries

It is now possible to set an offset parameter for GroupBy and Scan queries, which tells Druid to skip a number of rows when returning results. Please see https://druid.apache.org/docs/0.20.0/querying/limitspec.html and https://druid.apache.org/docs/0.20.0/querying/scan-query.html for details.

https://github.com/apache/druid/pull/10235
https://github.com/apache/druid/pull/10233

# OFFSET clause for SQL queries

Druid SQL queries now support an OFFSET clause. Please see https://druid.apache.org/docs/0.20.0/querying/sql.html#offset for details.
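
For example, to return the second page of ten results (table name hypothetical):

SELECT channel, COUNT(*) AS cnt
FROM wikipedia
GROUP BY channel
ORDER BY cnt DESC
LIMIT 10
OFFSET 10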

https://github.com/apache/druid/pull/10279

# Substring search operators

Druid has added new substring search operators in its expression language and for SQL queries.

Please see documentation for CONTAINS_STRING and ICONTAINS_STRING string functions for Druid SQL (https://druid.apache.org/docs/0.20.0/querying/sql.html#string-functions) and documentation for contains_string and icontains_string for the Druid expression language (https://druid.apache.org/docs/0.20.0/misc/math-expr.html#string-functions).

We've observed about a 2.5x performance improvement in some cases by using these functions instead of STRPOS.
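
A minimal sketch (table and column names hypothetical); CONTAINS_STRING is case sensitive, while ICONTAINS_STRING is not:

SELECT COUNT(*) AS bot_edits
FROM wikipedia
WHERE CONTAINS_STRING(comment, 'Bot')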

https://github.com/apache/druid/pull/10350

# UNION ALL operator for SQL queries

Druid SQL queries now support the UNION ALL operator, which fuses the results of multiple queries together. Please see https://druid.apache.org/docs/0.20.0/querying/sql.html#union-all for details on what query shapes are supported by this operator.
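
For example, a top-level UNION ALL that concatenates the results of two queries (table names hypothetical):

SELECT channel, COUNT(*) AS cnt FROM wikipedia_2019 GROUP BY channel
UNION ALL
SELECT channel, COUNT(*) AS cnt FROM wikipedia_2020 GROUP BY channel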

https://github.com/apache/druid/pull/10324

# Cluster-wide default query context settings

It is now possible to set cluster-wide default query context properties by adding a configuration of the form druid.query.override.default.context.*, with * replaced by the property name.
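
For example, to set a cluster-wide default query timeout (a sketch; timeout is a standard query context parameter, in milliseconds):

druid.query.override.default.context.timeout=60000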

https://github.com/apache/druid/pull/10208

# Other features

# Improved retention rules UI

The retention rules UI in the web console has been improved. It now provides suggestions and basic validation in the period dropdown, shows the cluster default rules, and makes editing the default rules more accessible.

https://github.com/apache/druid/pull/10226

# Redis cache extension enhancements

The Redis cache extension now supports Redis Cluster, selecting which database is used, connecting to password-protected servers, and period-style configurations for the expiration and timeout properties.

https://github.com/apache/druid/pull/10240

# Disable sending server version in response headers

It is now possible to disable sending of server version information in Druid's response headers.

This is controlled by a new property druid.server.http.sendServerVersion, which defaults to true.
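
For example, to stop advertising the server version in response headers:

druid.server.http.sendServerVersion=false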

https://github.com/apache/druid/pull/9832

# Specify byte-based configuration properties with units

Druid now supports units for specifying byte-based configuration properties, e.g.:

druid.server.maxSize=300g

equivalent to

druid.server.maxSize=300000000000

Please see https://druid.apache.org/docs/0.20.0/configuration/human-readable-byte.html for more details.

https://github.com/apache/druid/pull/10203

# Bug fixes

# Fix query correctness issue when historical has no segment timeline

Druid 0.20.0 fixes a query correctness issue that occurred when a broker issued a query expecting a Historical to have certain segments for a datasource, but the Historical, when queried, did not actually have any segments for that datasource (e.g., they were all unloaded before the Historical processed the query). Prior to 0.20.0, the query would return successfully but without the results from the missing segments. In 0.20.0, queries will now fail in such situations.

https://github.com/apache/druid/pull/10199

# Fix issue preventing result-level cache from being populated

Druid 0.20.0 fixes an issue introduced in 0.19.0 (https://github.com/apache/druid/issues/10337) which can prevent query caches from being populated when result-level caching is enabled.

https://github.com/apache/druid/pull/10341

# Fix for variance aggregator ordering

The variance aggregator previously used an incorrect comparator that compared using an aggregator's internal count variable instead of the variance.

https://github.com/apache/druid/pull/10340

# Fix incorrect caching for groupBy queries with limit specs

Druid 0.20.0 fixes an issue with groupBy queries and caching, where the limitSpec of the query was not considered in the cache key, leading to potentially incorrect results when queries that are identical except for the limitSpec are issued.

https://github.com/apache/druid/pull/10093

# Fix for stringFirst and stringLast with rollup enabled

https://github.com/apache/druid/issues/7243 has been resolved: the stringFirst and stringLast aggregators no longer cause an exception when used during ingestion with rollup enabled.

https://github.com/apache/druid/pull/10332

# Upgrading to Druid 0.20.0

Please be aware of the following considerations when upgrading from 0.19.0 to 0.20.0. If you're updating from an earlier version than 0.19.0, please see the release notes of the relevant intermediate versions.

# Default maxSize

druid.server.maxSize will now default to the sum of maxSize values defined within the druid.segmentCache.locations. The user can still provide a custom value for druid.server.maxSize which will take precedence over the default value.

https://github.com/apache/druid/pull/10255

# Compaction and kill task ID changes

Compaction and kill tasks issued by the coordinator will now have their task IDs prefixed by coordinator-issued, while user-issued kill tasks will be prefixed by api-issued.

https://github.com/apache/druid/pull/10278

# New size limits for parallel ingestion split hint specs

The size-based and segment-based splitHintSpec for parallel batch ingestion now apply a default file/segment limit of 1000 per subtask, controlled by maxNumFiles and maxNumSegments, respectively.

https://github.com/apache/druid/pull/10243

# New PostAggregator and AggregatorFactory methods

Users who have developed an extension with custom PostAggregator or AggregatorFactory implementations will need to update their extensions, as these two interfaces have new methods defined in 0.20.0.

PostAggregator now has a new method:

  ValueType getType();

To support type information on PostAggregator, AggregatorFactory also has 2 new methods:

  public abstract ValueType getType();

  public abstract ValueType getFinalizedType();

Please see https://github.com/apache/druid/pull/9638 for more details on the interface changes.

# New Expr-related methods

Users who have developed an extension with custom Expr implementations will need to update their extensions, as Expr and related interfaces have changed in 0.20.0. Please see the PR below for details:

https://github.com/apache/druid/pull/10401

# More accurate query/cpu/time metric

In 0.20.0, the accuracy of the query/cpu/time metric has been improved. Previously, it did not account for certain portions of work during query processing, described in more detail in the following PR:

https://github.com/apache/druid/pull/10377

# New audit log service metric columns

If you are using audit logging, please be aware that new columns have been added to the audit log service metric (comment, remote_address, and created_date). An optional payload column has also been added, which can be enabled by setting druid.audit.manager.includePayloadAsDimensionInMetric to true.

https://github.com/apache/druid/pull/10373

# sqlQueryContext in request logs

If you are using query request logging, the request log events will now include the sqlQueryContext for SQL queries.

https://github.com/apache/druid/pull/10368

# Additional per-segment state in metadata store

Hash-partitioned segments created by Druid 0.20.0 will now have additional partitionFunction data in the metadata store.

Additionally, compaction tasks will now store additional per-segment information in the metadata store, used for tracking compaction history.

https://github.com/apache/druid/pull/10288
https://github.com/apache/druid/pull/10413

# Known issues

# druid.segmentCache.locationSelectorStrategy injection failure

Specifying a value for druid.segmentCache.locationSelectorStrategy prevents services from starting due to an injection error. Please see https://github.com/apache/druid/issues/10348 for more details.

# Resource leak in web console data sampler

When a timeout occurs while sampling data in the web console, internal resources created to read from the input source are not properly closed. Please see https://github.com/apache/druid/pull/10467 for more information.

# Credits

Thanks to everyone who contributed to this release!

@a2l007
@abhishekagarwal87
@abhishekrb19
@ArvinZheng
@belugabehr
@capistrant
@ccaominh
@clintropolis
@code-crusher
@dylwylie
@fermelone
@FrankChen021
@gianm
@himanshug
@jihoonson
@jon-wei
@josephglanville
@joykent99
@kroeders
@lightghli
@lkm
@mans2singh
@maytasm
@medb
@mghosh4
@nishantmonu51
@pan3793
@richardstartin
@sthetland
@suneet-s
@tarunparackal
@tdt17
@tourvi
@vogievetsky
@wjhypo
@xiangqiao123
@xvrl

druid - druid-0.19.0

Published by clintropolis about 4 years ago

Apache Druid 0.19.0 contains around 200 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 51 contributors. Refer to the complete list of changes and everything tagged to the milestone for further details.

# New Features

# GroupBy and Timeseries vectorized query engines enabled by default

Vectorized query engines for GroupBy and Timeseries queries were introduced in Druid 0.16 as an opt-in feature. Since then, we have extensively tested these engines and feel that the time has come for these improvements to find a wider audience. Note that not all of the query engine is vectorized at this time, but this change means that any query eligible to be vectorized will be. If you encounter any problems, you can still disable this feature by setting druid.query.vectorize to false.

https://github.com/apache/druid/pull/10065

# Druid native batch support for Apache Avro Object Container Files

New in Druid 0.19.0, native batch indexing now supports Apache Avro Object Container Format encoded files, allowing batch ingestion of Avro data without needing an external Hadoop cluster. Check out the docs for more details.

https://github.com/apache/druid/pull/9671

# Updated Druid native batch support for SQL databases

A 'SqlInputSource' has been added in Druid 0.19.0 to work with the new native batch ingestion specifications first introduced in Druid 0.17, deprecating the SqlFirehose. Like the 'SqlFirehose', it currently supports MySQL and PostgreSQL, using the drivers from those extensions. This is a relatively low-level ingestion task, and the operator must take care to manually ensure that the correct data is ingested, either by specially crafting queries so that no duplicate data is ingested for appends, or by ensuring that the entire set of data is queried to be replaced when overwriting. See the docs for more operational details.

https://github.com/apache/druid/pull/9449

# Apache Ranger based authorization

A new extension in Druid 0.19.0 adds an Authorizer which implements access control for Druid, backed by Apache Ranger. Please see [the extension documentation](https://druid.apache.org/docs/0.19.0/development/extensions-core/druid-ranger-security.html) and Authentication and Authorization for more information on the basic facilities this extension provides.

https://github.com/apache/druid/pull/9579

# Alibaba Object Storage Service support

A new 'contrib' extension has been added for Alibaba Cloud Object Storage Service (OSS) to provide both deep storage and usage as a batch ingestion input source. Since this is a 'contrib' extension, it will not be packaged by default in the binary distribution; please see community extensions for more details on how to use it in your cluster.

https://github.com/apache/druid/pull/9898

# Ingestion worker autoscaling for Google Compute Engine

Another 'contrib' extension, new in 0.19.0, adds ingestion worker autoscaling for Google Compute Engine, allowing a Druid Overlord to provision or terminate worker instances (MiddleManagers or Indexers) whenever there are pending tasks or idle workers. Unlike the Amazon Web Services ingestion autoscaling extension, which provisions and terminates instances directly without using an Auto Scaling Group, the GCE autoscaler uses Managed Instance Groups to more closely align with how operators are likely to provision their clusters in GCE. Like other 'contrib' extensions, it will not be packaged by default in the binary distribution; please see community extensions for more details on how to use it in your cluster.

https://github.com/apache/druid/pull/8987

# REGEXP_LIKE

A new REGEXP_LIKE function has been added to Druid SQL and native expressions, which behaves similarly to LIKE, except that it uses regular expressions for the pattern.
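
A minimal sketch (table and column names hypothetical):

SELECT COUNT(*) AS matching_rows
FROM wikipedia
WHERE REGEXP_LIKE(comment, '^Bot')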

https://github.com/apache/druid/pull/9893

# Web console lookup management improvements

The Druid 0.19 web console also includes some useful improvements to the lookup table management interface. Creating and editing lookups is now done with a form that accepts user input, rather than a raw text editor for entering the JSON spec.

Additionally, clicking the magnifying glass icon next to a lookup will now allow displaying the first 5000 values of that lookup.

https://github.com/apache/druid/pull/9549
https://github.com/apache/druid/pull/9587

# New Coordinator per datasource 'loadstatus' API

A new Coordinator API makes it easier to determine whether the latest published segments are available for querying. This is similar to the existing Coordinator 'loadstatus' API, but it is datasource specific, may specify an interval, and can optionally refresh the metadata store snapshot to get the latest up-to-date information. Note that operators should still exercise caution when using this API to query large numbers of segments, especially if forcing a metadata refresh, as it can potentially be a 'heavy' call on large clusters.

https://github.com/apache/druid/pull/9965

# Native batch append support for range and hash partitioning

Part bug fix, part new feature, Druid native batch (once again) supports appending new data to existing time chunks when those time chunks were partitioned with the 'hash' or 'range' partitioning algorithms. Note that currently the appended segments only support 'dynamic' partitioning, and that after rolling back to an older version, Druid will not recognize these appended segments. In order to roll back to a previous version, these appended segments should be compacted with the rest of the time chunk so that the time chunk has a homogeneous partitioning scheme.

https://github.com/apache/druid/pull/10033

# Bug fixes

Druid 0.19.0 contains 65 bug fixes; you can see the complete list here.

# Fix for batch ingested 'dynamic' partitioned segments not becoming queryable atomically

Druid 0.19.0 fixes an important query correctness issue, where 'dynamic' partitioned segments produced by a batch ingestion task were not tracking the overall number of partitions. This had the implication that when these segments came online, they did not do so as a complete set, but rather as individual segments, meaning that there would be periods of swapping where results could be queried from an incomplete partition set within a time chunk.

https://github.com/apache/druid/pull/10025

# Fix to allow 'hash' and 'range' partitioned segments with empty buckets to now be queryable

Prior to 0.19.0, Druid had a bug when using hash or range partitioning where, if data skew was such that any of the buckets were 'empty' after ingesting, the partitions would never be recognized as 'complete' and so would never become queryable. Druid 0.19.0 fixes this issue by adjusting the schema of the partitioning spec. These changes to the JSON format should be backwards compatible; however, rolling back to a previous version will again make these segments no longer queryable.

https://github.com/apache/druid/pull/10012

# Incorrect balancer behavior

A bug in Druid versions prior to 0.19.0 allowed for (incorrect) coordinator operation in the event druid.server.maxSize was not set. This bug would allow segments to load, and effectively randomly balance them in the cluster (regardless of what balancer strategy was actually configured) if all historicals did not have this value set. This bug has been fixed, but as a result druid.server.maxSize must be set to the sum of the segment cache location sizes for historicals, or else they will not load segments.

https://github.com/apache/druid/pull/10070

# Upgrading to Druid 0.19.0

Please be aware of the following issues when upgrading from 0.18.1 to 0.19.0. If you're updating from an earlier version than 0.18.1, please see the release notes of the relevant intermediate versions.

# 'druid.server.maxSize' must now be set for Historical servers

A Coordinator bug fix as a side-effect now requires druid.server.maxSize to be set for segments to be loaded. While this value should have been set correctly for previous versions, please be sure this value is configured correctly before upgrading your clusters or else segments will not be loaded.

https://github.com/apache/druid/pull/10070

# System tables 'sys.segments' column 'payload' has been removed and replaced with 'dimensions', 'metrics', and 'shardSpec'

The removal of the 'payload' column from the sys.segments table should make queries on this table much more efficient. The most useful fields from it, the list of 'dimensions', 'metrics', and the 'shardSpec', have been split out into their own columns and so are still available.

https://github.com/apache/druid/pull/9883

# Changed default number of segment loading threads

The druid.segmentCache.numLoadingThreads configuration has had the default value changed from 'number of cores' to 'number of cores' divided by 6. This should make historicals a bit more well behaved out of the box when loading a large number of segments, limiting the impact on query performance.

https://github.com/apache/druid/pull/9856

# Broadcast load rules no longer have 'colocated datasources'

A number of incomplete changes to facilitate more efficient join queries, based on the idea of utilizing broadcast load rules to propagate smaller datasources among the cluster so that join operations can be pushed down to individual segment processing, have been added to 0.19.0. While not a finished feature yet, as part of the changes to make this happen, 'broadcast' load rules no longer have the concept of 'colocated datasources', which would attempt to only broadcast segments to servers that had segments of the configured datasource. This didn't work so well in practice, as it was non-atomic, meaning that the broadcast segments would lag behind loads and drops of the colocated datasource, so we decided to remove it.

https://github.com/apache/druid/pull/9971

# Brokers and realtime tasks may now be configured to load segments from 'broadcast' datasources

As another effect of the aforementioned preliminary work to introduce efficient 'broadcast joins', Brokers and realtime indexing tasks will now load segments assigned by 'broadcast' rules if a segment cache is configured. Since the feature is not complete, there is little reason to do this in 0.19.0, and it will not happen unless explicitly configured.

https://github.com/apache/druid/pull/9971

# lpad and rpad function behavior change

The lpad and rpad functions have gone through a slight behavior change in Druid's default non-SQL-compatible mode, in order to make them behave consistently with PostgreSQL. In the new behavior, if the pad expression is an empty string, then the result is the (possibly trimmed) original characters, rather than the empty string being treated as a null and coercing the result to null.

https://github.com/apache/druid/pull/10006

# Extensions providing custom Druid expressions are now expected to implement equals and hashCode methods

A change to the Expr interface in Druid 0.19.0 requires that any extension which provides custom expressions via ExprMacroTable must also implement equals and hashCode methods to function correctly, especially with JOIN queries, which rely on filter and expression analysis for determining how to optimally process a query.

https://github.com/apache/druid/pull/9830

# Known Issues

For a full list of open issues, please see https://github.com/apache/druid/labels/Bug.

# Credits

Thanks to everyone who contributed to this release!

@2bethere
@a-chumagin
@a2l007
@abhishekrb19
@agricenko
@ahuret
@alex-plekhanov
@AlexanderSaydakov
@awelsh93
@bolkedebruin
@calvinhkf
@capistrant
@ccaominh
@chenyuzhi459
@clintropolis
@damnMeddlingKid
@danc
@dylwylie
@egor-ryashin
@FrankChen021
@frnidito
@Fullstop000
@gianm
@harshpreet93
@jihoonson
@jon-wei
@josephglanville
@kamaci
@kanibs
@leerho
@liujianhuanzz
@maytasm
@mcbrewster
@mghosh4
@morrifeldman
@pjain1
@samarthjain
@stefanbirkner
@sthetland
@suneet-s
@surekhasaharan
@tarpdalton
@viongpanzi
@vogievetsky
@willsalz
@wjhypo
@xhl0726
@xiangqiao123
@xvrl
@yuanlihan
@zachjsh

druid - druid-0.18.1

Published by jihoonson over 4 years ago

Apache Druid 0.18.1 is a bug fix release that fixes a streaming ingestion failure with Avro, an ingestion performance issue, an upgrade issue with HLLSketch, and more. The complete list of bug fixes can be found at https://github.com/apache/druid/pulls?q=is%3Apr+milestone%3A0.18.1+label%3ABug+is%3Aclosed.

# Bug fixes

# Known issues

# Incorrect result of nested groupBy query on Join of subqueries

A nested groupBy query can result in an incorrect result when it is on top of a Join of subqueries and the inner and the outer groupBys have different filters. See https://github.com/apache/druid/issues/9866 for more details.

# Credits

Thanks to everyone who contributed to this release!

@clintropolis
@gianm
@jihoonson
@maytasm
@suneet-s
@viongpanzi
@whutjs