Published by github-actions[bot] over 2 years ago
This is a patch release containing small features, bug fixes, and performance improvements.
- Added support for describe table as of <commit> and show columns from table as of <commit>.
- Added the GEOMETRY type to information_schema tables.
- Added support for FULLTEXT keys as well.
- VIEWS now show up in information_schema.columns, but our current implementation of ViewDefinition makes it difficult to match MySQL capabilities.
- Added support for the as of expression in show columns statements.
- Added the geometry type on the GMS side, and the geometry type for functions:
type TransformNodeFunc func(Node) (Node, bool, error)
type TransformExprFunc func(Expression) (Expression, bool, error)
type Transformer func(TransformContext) (sql.Node, bool, error)
TransformUp's implementation uses the modification information to avoid unnecessary copying:

BenchmarkTransformOld
BenchmarkTransformOld-12 396544 2782 ns/op 3000 B/op 51 allocs/op
BenchmarkTransformOldNoEdit
BenchmarkTransformOldNoEdit-12 407797 2731 ns/op 2936 B/op 50 allocs/op
BenchmarkTransformNew
BenchmarkTransformNew-12 4584258 254.1 ns/op 96 B/op 5 allocs/op
BenchmarkTransformNewNoEdit
BenchmarkTransformNewNoEdit-12 4782098 237.8 ns/op 96 B/op 5 allocs/op
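The allocation savings come from returning the original node untouched when nothing changed. A minimal sketch of that idea over a toy tree, assuming only the TransformNodeFunc shape shown above (the Node struct and transformUp here are illustrative, not GMS's actual types):

```go
package main

import "fmt"

// Node is a stand-in for a query plan node; GMS's real sql.Node is richer.
type Node struct {
	Name     string
	Children []*Node
}

// TransformNodeFunc mirrors the signature above: it returns the (possibly new)
// node, whether it modified anything, and an error.
type TransformNodeFunc func(*Node) (*Node, bool, error)

// transformUp applies f bottom-up. If neither f nor any child transform reports
// a modification, the original node is returned as-is, avoiding allocation.
func transformUp(n *Node, f TransformNodeFunc) (*Node, bool, error) {
	modified := false
	var newChildren []*Node
	for i, c := range n.Children {
		nc, m, err := transformUp(c, f)
		if err != nil {
			return nil, false, err
		}
		if m {
			if newChildren == nil {
				newChildren = make([]*Node, len(n.Children))
				copy(newChildren, n.Children)
			}
			newChildren[i] = nc
			modified = true
		}
	}
	node := n
	if modified {
		node = &Node{Name: n.Name, Children: newChildren}
	}
	node, m, err := f(node)
	return node, modified || m, err
}

func main() {
	tree := &Node{Name: "Project", Children: []*Node{{Name: "Table"}}}
	same, mod, _ := transformUp(tree, func(n *Node) (*Node, bool, error) {
		return n, false, nil // no-op transform
	})
	fmt.Println(same == tree, mod) // the untouched tree comes back as the same pointer
}
```

A no-op pass allocates nothing, which is consistent with the B/op and allocs/op drops in the benchmarks above.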
We use plan.InspectUp when possible, and then plan.TransformUp, plan.TransformUpCtx, or plan.TransformUpCtxSchema only when necessary.

- describe <table> as of <asof> and show columns from <table> as of <asof>
- RightIndexedJoin commutativity correctness

Published by github-actions[bot] over 2 years ago
This is a patch release, containing bug fixes and performance improvements.
- auto_increment columns
- ALTER TABLE statements that include multiple clauses
- show tables statement for information_schema now supported
- show create procedure now supported
- show function status now supported
- CREATE VIEW statements inside MySQL special comments
- dolt_replicate_heads and dolt_replicate_all_heads as both Global and Session variables
- TRIGGER parsing to support database-name-specific queries in the show tables statement
- The cardinality column of the information_schema.statistics table returned the wrong type before
- mysqlshim: https://github.com/dolthub/go-mysql-server/blob/main/enginetest/mysqlshim/database.go#L169
- applyJoinPlan should apply equality lookups; pushdown mistakenly marked IndexedJoinAccess with nil lookups as handled. Prevent IndexedJoinAccess with nil lookups from marking in applyJoinPlan, and let pushdown mark those as handled. This would improve perf for both.
- show create procedure / SHOW FUNCTION STATUS functionality: the SHOW FUNCTION STATUS result is queried from the INFORMATION_SCHEMA.ROUTINES table.
- SHOW CREATE PROCEDURE <procedure_name> parse logic for show create procedure.
- unknown push error when pushing a branch with a single commit to DoltHub.

Published by github-actions[bot] over 2 years ago
This is a patch release containing new features and bug fixes.
Features:
- dolt_diff(). Read more about it here.
- SHOW CREATE PROCEDURE is now supported.

Bug fixes:
- CHARACTER SET tokens in CREATE TABLE statements now parse correctly.
- Fixed WHERE clauses in certain indexed joins.
- information_schema tables now contain more accurate information.
- Identifiers such as 123a are now valid.
- TupleFactory optimizations were being skipped, and 70% of newBinaryNomsWriter instances were for tuples.
- BulkEditAccumulator lets the final Map sort skip work.

EphemeralPrinter
It's related to how uilive, the underlying library, clears terminal lines on Windows. uilive will only clear terminal lines successfully if it detects that the given io.Writer is a terminal. The package does this by checking if the io.Writer has an Fd function and using the returned file descriptor for its istty check. When given another io.Writer, uilive's terminal check fails since that writer does not declare Fd. The problem is that color.Output does use the initial os.Stdout, which is in fact the terminal... uilive flushes its output.

CREATE DB ... CHARACTER SET ...
statement.

Adds the dolt_diff system table function and changes the behavior of the existing dolt_commit_diff_$tablename and dolt_history_$tablename system tables to no longer disambiguate column names with tag suffixes. The dolt_diff system table function enables callers to use the exact to and from schemas in a two-way diff of table data. Using this sample data input, the following examples show what the dolt_diff table function returns in different cases.

> select * from dolt_diff("foo", @Commit1, @Commit9);
+------+------+-------+----------------------------------+-----------------------------------+--------+--------+---------+----------------------------------+-----------------------------------+-----------+
| to_c | to_b | to_pk | to_commit | to_commit_date | from_a | from_b | from_pk | from_commit | from_commit_date | diff_type |
+------+------+-------+----------------------------------+-----------------------------------+--------+--------+---------+----------------------------------+-----------------------------------+-----------+
| jkl | foo | 1 | dap9j5he1296trqe5dp4j2n9lut9tuaa | 2022-03-16 20:13:05.878 +0000 UTC | foo | bar | 1 | diq7v3vlqpjb21imb3bqv04spob7l0p2 | 2022-03-16 20:13:05.814 +0000 UTC | modified |
| NULL | baz | 2 | dap9j5he1296trqe5dp4j2n9lut9tuaa | 2022-03-16 20:13:05.878 +0000 UTC | baz | bash | 2 | diq7v3vlqpjb21imb3bqv04spob7l0p2 | 2022-03-16 20:13:05.814 +0000 UTC | modified |
| five | four | 3 | dap9j5he1296trqe5dp4j2n9lut9tuaa | 2022-03-16 20:13:05.878 +0000 UTC | NULL | NULL | NULL | diq7v3vlqpjb21imb3bqv04spob7l0p2 | 2022-03-16 20:13:05.814 +0000 UTC | added |
+------+------+-------+----------------------------------+-----------------------------------+--------+--------+---------+----------------------------------+-----------------------------------+-----------+
Diff across a column drop:
> select * from dolt_diff("foo", @Commit1, @Commit2);
+------+-------+----------------------------------+-----------------------------------+--------+--------+---------+----------------------------------+-----------------------------------+-----------+
| to_a | to_pk | to_commit | to_commit_date | from_a | from_b | from_pk | from_commit | from_commit_date | diff_type |
+------+-------+----------------------------------+-----------------------------------+--------+--------+---------+----------------------------------+-----------------------------------+-----------+
| foo | 1 | frfpd9k00s417gi8b88067l5oavkfpt5 | 2022-03-16 20:13:05.821 +0000 UTC | foo | bar | 1 | diq7v3vlqpjb21imb3bqv04spob7l0p2 | 2022-03-16 20:13:05.814 +0000 UTC | modified |
| baz | 2 | frfpd9k00s417gi8b88067l5oavkfpt5 | 2022-03-16 20:13:05.821 +0000 UTC | baz | bash | 2 | diq7v3vlqpjb21imb3bqv04spob7l0p2 | 2022-03-16 20:13:05.814 +0000 UTC | modified |
+------+-------+----------------------------------+-----------------------------------+--------+--------+---------+----------------------------------+-----------------------------------+-----------+
Diff across a column drop and rename:
> select * from dolt_diff("foo", @Commit1, @Commit3);
+------+-------+----------------------------------+----------------------------------+--------+--------+---------+----------------------------------+-----------------------------------+-----------+
| to_b | to_pk | to_commit | to_commit_date | from_a | from_b | from_pk | from_commit | from_commit_date | diff_type |
+------+-------+----------------------------------+----------------------------------+--------+--------+---------+----------------------------------+-----------------------------------+-----------+
| foo | 1 | tnipgs33r3kd0pt823u0uu5ek4n39dpf | 2022-03-16 20:13:05.83 +0000 UTC | foo | bar | 1 | diq7v3vlqpjb21imb3bqv04spob7l0p2 | 2022-03-16 20:13:05.814 +0000 UTC | modified |
| baz | 2 | tnipgs33r3kd0pt823u0uu5ek4n39dpf | 2022-03-16 20:13:05.83 +0000 UTC | baz | bash | 2 | diq7v3vlqpjb21imb3bqv04spob7l0p2 | 2022-03-16 20:13:05.814 +0000 UTC | modified |
+------+-------+----------------------------------+----------------------------------+--------+--------+---------+----------------------------------+-----------------------------------+-----------+
Depends on:
- applyJoinPlan should apply equality lookups; pushdown mistakenly marked IndexedJoinAccess with nil lookups as handled. Prevent IndexedJoinAccess with nil lookups from marking in applyJoinPlan, and let pushdown mark those as handled. This would improve perf for both.
- SHOW PROCEDURE STATUS
- SHOW INDEXES FROM otherdb.tab fails as the database is initialized incorrectly in parsing.
- The information_schema.statistics table.
- CHARACTER SET, COLLATE, or ENCRYPTION syntaxes when creating a database. The database is still created.
- DISABLE KEYS or ENABLE KEYS options for the alter table statement. Nothing changes for the table being altered.
- innodb_stats_auto_recalc
- Added ctx.Done() checks to a few key node iterators before operating on child rows (may have missed some, but tried to hit the table, index, and edit iterators).
- SHOW CREATE PROCEDURE <procedure_name> parse logic for show create procedure.
- DISABLE | ENABLE KEYS syntax for the ALTER TABLE statement.
- DISABLE KEYS and ENABLE KEYS options for the ALTER TABLE statement.
- CHARSET, COLLATE, and ENCRYPTION syntax in the CREATE DATABASE statement.
- DEFAULT value specified.
- use db/<hash> breaks when server replication is enabled.
- cannot create an index over spatial type columns.
- dolt log
- CHARACTER SET in a CREATE DATABASE statement results in a SQL parsing error.

Published by github-actions[bot] over 2 years ago
This is a patch release, containing bug fixes:
- Make the db_working, db_staged, and db_head system variables read-only; deprecate detached head mode.
- dolt pull
- Use pks to recreate the row.
- schema.SetPkOrdinals to override the preexisting ordering of pks, with an indexCollection.SetPks method.
- Handled err variables that were set but then unhandled, as well as an os.Chdir() that looked a little risky without one.

EphemeralPrinter is a tool that you can use to print temporary line(s) to the console. Every time Display is called, the previously written lines are cleared, and the new lines are flushed to the output.

myDatabase_head_ref
: essentially a branch change; changes the current head and also switches to the associated working set.

myDatabase_head: similar to the _head_ref variable, but more general in that any commit can be used as a new head. Setting this variable puts the session into detached-head mode. Writes to this session are not transactionally written to a working set. Instead, they are written as free-floating values (eligible to be GC'd) and commits made in detached-head mode are "dangling" commits: they are not associated with a branch (and also eligible to be GC'd). A typical pattern for using this mode is to set the database head, make some writes and commits, and land this work by writing into the dolt_branches table.

myDatabase_working: setting this variable only affects the working set of the session, essentially force-setting it to some other state.

myDatabase_head_ref: if the new value is a branch or working set ref, the session will switch working sets; other refs will error.

myDatabase_head, myDatabase_working: as with _head, if the new value can be resolved to a unique working set, the session will switch to it; otherwise it will error.

NOT NULL
constraints too. This fixes that.
- Added ctx.Done() checks to a few key node iterators before operating on child rows (may have missed some, but tried to hit the table, index, and edit iterators).

SELECT "A" OR "A";
+------------+
| "A" OR "A" |
+------------+
| 0 |
+------------+
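The 0 result above follows MySQL's coercion rules: a non-numeric string converts to 0, which is false, so OR yields 0. A rough sketch of that coercion (hypothetical helper names, not GMS's implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// numericPrefix extracts the leading numeric portion of a string, the way
// MySQL does when coercing a string in a numeric or boolean context.
func numericPrefix(s string) string {
	s = strings.TrimLeft(s, " ")
	end := 0
	for end < len(s) && (s[end] >= '0' && s[end] <= '9' || s[end] == '.' || (end == 0 && (s[end] == '-' || s[end] == '+'))) {
		end++
	}
	return s[:end]
}

// mysqlTruthy reports how a string evaluates in a boolean context:
// "A" has no numeric prefix, parses as 0, and is therefore false.
func mysqlTruthy(s string) bool {
	f, err := strconv.ParseFloat(numericPrefix(s), 64)
	if err != nil {
		return false
	}
	return f != 0
}

func main() {
	fmt.Println(mysqlTruthy("A") || mysqlTruthy("A")) // false, so the query returns 0
	fmt.Println(mysqlTruthy("1abc"))                  // true: numeric prefix "1"
}
```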
When the Tokenizer detects a special MySQL comment, it just creates a new Tokenizer and embeds it in the old one under the specialComment member variable. The Position member variable is read from the outermost Tokenizer, so this copies the Position of the inner Tokenizer to the outer one (with an offset to handle the leading /*![12345]). I think a better fix might be to just replace the old Tokenizer with the one we create for specialComment?

- table factor in the dolthub/vitess SQL grammar.
- event as a column alias.
- -a in the dolt tag command.

Published by github-actions[bot] over 2 years ago
- Fix newBSChunkSource.
- Fix the localbs scheme using the file scheme.

blobstore.GetBytes returns a byte buffer whose capacity is too large. That buffer is passed to parseTableIndex, which takes ownership of that buffer. parseTableIndex was changed to throw an error if the capacity or length of the given buffer is too large. This was done to prevent large memory usage by Dolt. Since the above buffer was too large, parseTableIndex would throw an error and the chunkSource would fail to initialize. Two other issues prevented this error from being caught:

- Errors returned by newBSChunkSource were being overridden by a defer. This muted any errors returned by newBSChunkSource, including the error that parseTableIndex was throwing.
- Tests of the localbs scheme were using the file scheme under the hood. This prevented the tests from catching this bug.

Handled err variables in performance/utils/sysbench_runner.

Update by go get-ing the dependency:
$ cd ./go
$ go get github.com/dolthub/<dependency>/go@<commit>
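The defer-masking issue mentioned above is a classic Go pitfall: a deferred function that unconditionally assigns to a named return value clobbers any earlier error. A minimal sketch (function names are hypothetical, not Dolt's code):

```go
package main

import (
	"errors"
	"fmt"
)

var errParse = errors.New("index buffer too large")

// newChunkSourceBroken mimics the bug: a defer assigns to the named return
// unconditionally, masking the parse error raised earlier in the function.
func newChunkSourceBroken() (err error) {
	defer func() {
		err = nil // e.g. a cleanup step that clobbers err
	}()
	return errParse
}

// newChunkSourceFixed only lets the deferred cleanup set err when no earlier
// error occurred.
func newChunkSourceFixed() (err error) {
	defer func() {
		if cleanupErr := error(nil); err == nil {
			err = cleanupErr
		}
	}()
	return errParse
}

func main() {
	fmt.Println(newChunkSourceBroken()) // <nil>: the real error was muted
	fmt.Println(newChunkSourceFixed())  // index buffer too large
}
```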
dolt table import -u
> create table a (x int, y int, primary key (y,x));
> insert into a values (0,1), (2,3);
Query OK, 1 row affected
Deletes were matching rows in the index and correctly tracking the affected row count, but generating intermediate schema-order key tuples incompatible with the table PK order. The incompatible keys failed to match the source keys in the roundtrip as the edits were applied to the table:
> delete from a where x = 0;
Query OK, 1 row affected
> delete from a where x = 0;
Query OK, 1 row affected
> select * from a;
+---+---+
| x | y |
+---+---+
| 0 | 1 |
| 2 | 3 |
+---+---+
i.e. we were matching (1,0)
and then trying to delete (0,1)
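The mismatch above comes from building key tuples in schema order (x, y) when the primary key order is (y, x). A toy sketch of mapping schema-order rows to PK-order keys (a hypothetical helper, not Dolt's code):

```go
package main

import "fmt"

// keyTuple builds a storage key from a schema-order row. pkOrdinals maps
// primary-key position -> schema column index; for `primary key (y, x)` over
// schema (x, y) that mapping is [1, 0].
func keyTuple(row []int, pkOrdinals []int) []int {
	key := make([]int, len(pkOrdinals))
	for i, ord := range pkOrdinals {
		key[i] = row[ord]
	}
	return key
}

func main() {
	row := []int{0, 1}                      // schema order: x=0, y=1
	fmt.Println(keyTuple(row, []int{1, 0})) // PK-order key that matches storage
	fmt.Println(row)                        // schema-order tuple the buggy path generated
}
```

Deleting with the schema-order tuple (0,1) fails to match the stored PK-order key (1,0), which is exactly the roundtrip failure described.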
.

- Refactored the dolt_schemas and dolt_procedures tables to remove direct types.Map dependencies.
- Removed types.Map dependencies.
- Changed DoltIndex to only push down filters for the current format, not for prolly indexes.
- NOT NULL constraint when dropping primary keys.
- Only append NOT NULL constraints to a single column, as there are checks right before anywhere we append a NOT NULL for whether there already is a NOT NULL constraint.

__DOLT_1__
format with flatbuffers messages representing top-of-DAG entities like StoreRoot, WorkingSet, Commit, Tag, RootValue, etc. This particular PR creates machinery to move StoreRoot, WorkingSet and Tag to simple flatbuffer message representations. This PR includes:

- HeadTag and HeadWorkingSet methods on the Dataset interface instead.
- A __DOLT_DEV__ format which uses these top-of-DAG flatbuffer messages.

Strategy has been renamed to HedgeStrategy. HedgeStrategys now control the nextTry durations directly, and an ExponentialHedgeStrategy has been added that can be composed with other strategies to get the previous behavior.

- information_schema.innodb_*
tables as empty tables.
- Removed the TestAddAndDropColumn test, which will be replaced with an appropriate bat test on Dolt.
- Fixed an ADD COLUMN sql statement causing every row to be modified in the table. This is not expected and a bug. We expect adding a column with no default to only modify the schema and not any Dolt row.
- The DROP COLUMN statement. In the "fast" drop scenario, it might be important to ensure values aren't retained across ADD COLUMN and DROP COLUMN statements with the same column name. This PR also adds tests to ensure this.
- Exchange(parallelism=3)\n └─ Project(6)\n └─ Table(dual)\n becomes Project(6)\n └─ Table(dual)\n
- The information_schema.routines table.
- sql.FilteredIndex to allow integrators to specify index filter pushdowns.
- FLUSH PRIVILEGES support.
- FLUSH PRIVILEGES syntax.
- dolt table import cannot handle table names with - in them.
- SELECT 1 as found breaks.
- -a in the dolt tag command.

Published by github-actions[bot] over 2 years ago
Fixes a download issue against remotesrv. The symptom is a "throughput below minimum allowable" error. The problem is that the dolt client can't handle large http chunks without failing to meet its minimum required download throughput.

remotesrv hosts a simple http file server that can be used to write new table files and read existing table files. When remotesrv receives a request to read a file, it will currently read all of the requested range into memory. It then writes to the http.ResponseWriter in a single Write call. As a result, when the Dolt client calls Read, it takes a long time to return. Due to the way the Dolt client measures Read throughput, if a single Read takes long enough, the minimum required throughput will not be met.
The fix uses io.Copy to read from the file and simultaneously write to the http.ResponseWriter. io.Copy produces small enough writes to the response writer. This reduces the number of bytes read during a singular Read call on the Dolt client and allows the throughput measurement to be accurate. This also simplifies remotesrv and uses an idiomatic Golang pattern.
For http.Response.Body, maximum Read sizes are determined by the Content-Length and whether chunked transfer encoding is being used. The exact semantics can be investigated by reading the sources here: each Read call returns in a small enough size.

--batch
option for batched INSERTS.
- The collation_character_set_applicability table in infoSchema.
- sql.FilteredIndex to allow integrators to specify index filter pushdowns.
- Added the processlist table to information_schema.
- SHOW PROCESSLIST query result.
- The information_schema.statistics table.
- FLUSH as a command and its options.
- Added PROCESSLIST to the non-reserved-keyword list.
- dolt dump and dolt diff --sql should feature a --batch flag.
- dolt table import -u <table> shows incorrect processing information for us-housing-prices.
- dolt login
Published by github-actions[bot] over 2 years ago
This is a patch release. It fixes a bug in displaying the primary keys of a table via SHOW CREATE TABLE.

- nodeBuilder
- Type.SQL()
- Fixed the BETWEEN function missing inclusive logic; BETWEEN() now does an inclusive check, including checks with NULL values.

Published by github-actions[bot] over 2 years ago
This is a patch release, containing minor features, bug fixes and performance improvements.
It adds the following features:
- The dolt_diff system table now indicates whether a table in a diff had data changes, schema changes, or both.
- dolt_diff_table_name system tables have simpler schemas in the presence of historical schema changes to the table.
- SHOW STATUS LIKE now parses.

It addresses the following bugs:
- dolt dump
- Return NULL whenever the value is nil and cannot be converted into sql types.
- Added OverwriteStoreManifest. OverwriteStoreManifest adds table files to the manifest file and persists it. It will not call Rebase, unlike UpdateManifest. Use caution with OverwriteStoreManifest, as old table files will not be garbage collected. Overwriting the manifest multiple times may allow the store to grow arbitrarily large in size.
- The Dolt_Blame system view, now that we have migrated Dolt_DIff_$tablename off of SuperSchema's tag-suffixed column names.
- Handled an err variable in libraries/utils/editor.

Updated the DOLT_DIFF system table to enable customers to determine if a change to a table was a schema change or a data change (or both).

> create table x (a int primary key, b int, c int);
> create table y (a int primary key, b int, c int);
> insert into x values (1, 2, 3), (2, 3, 4);
> select DOLT_COMMIT('-am', 'Creating tables x and y');
> select * from dolt_diff;
+----------------------------------+------------+-----------+-------------------------+----------------------------------+-------------------------+-------------+---------------+
| commit_hash | table_name | committer | email | date | message | data_change | schema_change |
+----------------------------------+------------+-----------+-------------------------+----------------------------------+-------------------------+-------------+---------------+
| 1blnkur3m1hla2a4got9t513982a4ad1 | x | jfulghum | [email protected] | 2022-02-21 14:39:37.01 -0800 PST | Creating tables x and y | true | true |
| 1blnkur3m1hla2a4got9t513982a4ad1 | y | jfulghum | [email protected] | 2022-02-21 14:39:37.01 -0800 PST | Creating tables x and y | false | true |
+----------------------------------+------------+-----------+-------------------------+----------------------------------+-------------------------+-------------+---------------+
Resolves: https://github.com/dolthub/dolt/issues/2834
- Downloads are now reported as Files Written. It will no longer report an Upload Rate for a download, as that is the speed at which the file is copied from a temp directory.
- Uploads now report the number of table files created as Files Created and the number of these table files that have been uploaded as Files Uploaded.
- Handled an err variable in the store/types package.
- When DOLT_DIFF_$TABLE system tables show schema history with column name conflicts, the column names are disambiguated by adding their unique tags as suffixes. This makes it difficult to work with these system tables. This change simplifies the output by restricting the output schema to be based on the current table schema and avoids any column name conflicts. (This applies to dolt_diff_$table, but does not change `dolt_commit_diff_$table yet.)
- Improved progress output for dolt clone, especially when you are cloning a db with a small number of large table files. dolt clone now lists which table files are being concurrently downloaded and shows the progress and download rate for each.

uilive
dep and github.com/fatih/color. In my testing there don't seem to be any issues:

- dolt log, as it panics with Ctrl+C on Windows with the more command.
- ROLLBACK; fails with no database selected.
- ROLLBACK when no database is selected.

$ dolt clone https://doltremoteapi.dolthub.com/post-no-preference/stocks
$ cd stocks
$ dolt sql
stocks> select date, act_symbol, avg(close) OVER (PARTITION BY act_symbol ORDER BY date ROWS BETWEEN 128 PRECEDING AND CURRENT ROW) AS ma200 FROM ohlcv WHERE act_symbol='AAPL' having date = '2022-02-11';
offset must be a non-negative integer; found: 128
After fix:
select date, act_symbol, avg(close) OVER (PARTITION BY act_symbol ORDER BY date ROWS BETWEEN 128 PRECEDING AND CURRENT ROW) AS ma200 FROM ohlcv WHERE act_symbol='AAPL' having date = '2022-02-11';
+-------------------------------+------------+--------------------+
| date | act_symbol | ma200 |
+-------------------------------+------------+--------------------+
| 2022-02-11 00:00:00 +0000 UTC | AAPL | 158.29837209302272 |
+-------------------------------+------------+--------------------+
I haven't been able to create a testing database with the same type parsing behavior yet. Something about the stocks database or that specific query is yielding a types.Value with value=128 and type=sql.Uint8.

- Fixed the BETWEEN function missing inclusive logic; BETWEEN() now does an inclusive check, including checks with NULL values.

Assertions: []enginetest.ScriptTestAssertion{
{
Query: "select * from dolt_diff_t;",
ExpectedWarning: 1105,
ExpectedWarningsCount: 4,
ExpectedWarningMessageSubstring: "unable to coerce value from field",
SkipResultsCheck: true,
},
},
Needed for: https://github.com/dolthub/dolt/pull/2832
- Changed ddlNode to CurDatabase, which is updated with every table being dropped.
- Changed the string type to the plan.UnresolvedTable type.
- Updated the resolve-table rule in the analyzer to support DropTable filtering out non-existent tables.
- Updated ddlNode and added tests for AddColumn, DropColumn, RenameColumn, DropColumn that check updates with different database tables than the current one.
- CreateIndex, AlterIndex
- rollback on an unselected database.
- information_schema.tables
- UPDATE in trigger statements don't work with multiple rows.
- DOLT_DIFF system table to indicate the type of change.
- --continue behaviour for dolt table import -u.
- dolt_diff_$tablename and dolt_commit_diff_$tablename
- --continue flag with dolt table import.
Published by github-actions[bot] over 2 years ago
This is a major feature release.
- dolt log now supports the --oneline and --decorate options.
- @@dolt_transaction_commit now works as a global as well as a session variable.
- Made dolt_transaction_commit a global variable.

onHeapTableIndex
before this PR:

onHeapTableIndex after this PR:

- parseTableIndex
- onHeapTableIndex retains a slice of just the index data (without the footer).
- mmapTableReader was broken. Since the new index takes ownership of the buffer, as soon as mmapTableReader unmapped the index file from memory, a segfault was thrown. mmapTableReader has instead been changed to copy index bytes into the heap.

val
- go/store/val/codec.go: tried to make it easier to read.
- time.Time, geometry: to share serialization logic between types and val.
- Moved go/store/val/ in to sqle/index.
- Added val.SlicedBuffer and used it to cache pointers in prolly.Node.
- tableIndex: I altered the tableIndex functions to return errors and fixed any code that needs to return errors.

git log features for dolt log:
- --oneline option
- --decorate option: short, long, auto, no
- libraries/doltcore/doltdb.

Published by github-actions[bot] over 2 years ago
This is a feature release.
A new system table, dolt_diff
, that contains which tables changed between two commits, is now supported.
A new dolt_blame
system view which contains who last edited each row of a table is now supported.
The default behavior for SQL COMMIT
statements in the presence of merge conflicts has changed. They are now allowed by default.
Named windows are now supported.
It also includes bug fixes:
- dolt login ignores interrupt signal.
- --set-upstream on push.
- dolt table import to correctly import auto increment data.
- DOLT_HISTORY_<TABLE> using a filter expression and the underlying table that doesn't have a primary key.

> SELECT * FROM DOLT_DIFF;
+----------------------------------+-----------+-------------------------+-----------------------------------+-------------------------+------------+
| commit_hash | committer | email | date | message | table_name |
+----------------------------------+-----------+-------------------------+-----------------------------------+-------------------------+------------+
| edr1ichcj8vpve9lofv31e7taiajv3uu | jfulghum | [email protected] | 2022-02-07 13:39:47.717 -0800 PST | Creating tables z | z |
| 91ff8so7alcnuiq0qa1qq9e7o58i7jdb | jfulghum | [email protected] | 2022-02-07 13:39:26.143 -0800 PST | Creating tables x and y | x |
| 91ff8so7alcnuiq0qa1qq9e7o58i7jdb | jfulghum | [email protected] | 2022-02-07 13:39:26.143 -0800 PST | Creating tables x and y | y |
+----------------------------------+-----------+-------------------------+-----------------------------------+-------------------------+------------+
An unscoped DOLT_DIFF felt more appropriate than an unscoped DOLT_HISTORY table since users will likely use DOLT_DIFF, then use a scoped DOLT_DIFF_<$TABLE> to look at changes in each table. DOLT_HISTORY_<$TABLE> also has semantics of showing the full history at each commit, where this table is intended to show which tables changed (not which tables existed at a commit).

- prolly
- PrivilegedDatabase and ...Provider constructs).
- Removed the driver implementation, as it was literally broken (errors in the file on main) and was obviously not relied on by anything (not even tests), so instead of fixing it for the changes I made, I just removed it altogether.
- mysql.user table. This will significantly help when it comes to privilege checking, especially in the context of determining active roles and such. To complement this, PrivilegeSet was almost entirely rewritten. The related files are data_editor_view.go and grant_table_shim.go.
- dolt login is unquittable.
- dolt diff and dolt_history should show the history of all the tables per commit.
- with recursive
Published by github-actions[bot] over 2 years ago
This is a feature release, adding initial support for spatial types. Read more here:
https://www.dolthub.com/blog/2022-02-09-spatial-types/
Also newly supported are recursive common table expressions (WITH RECURSIVE).
This release also addresses a number of bugs:
- JSON_CONTAINS
- NOT BETWEEN expressions.
- Handled err variables in libraries/doltcore/table/editor.
- Node with flatbuffers-based map Node.
- Map
- Extended prolly to include ranges defined on a prefix of key fields. This is necessary to support partial indexes and secondary indexes in Dolt. Various refactors were made to support this change:
- Changed prolly.Map to use a nodeCursor to track the end of the range.
- SHOW GRANTS was a shell that always returned the same result, and it has now been properly implemented (without USING since active roles aren't in yet). Also added SHOW PRIVILEGES.
- ST_LATITUDE and ST_LONGITUDE functions.
- ST_SWAPXY function.
- ST_DIMENSION function.

Published by github-actions[bot] over 2 years ago
This is a feature release that introduces support for the full range of window expressions in aggregate functions.
Other issues:
- Contexts (*sql.Context) also carry auth information for later privilege checking.

Published by github-actions[bot] over 2 years ago
This is a patch release. It addresses the following issues:
- libraries/doltcore/merge.

Published by github-actions[bot] over 2 years ago
This is a patch release. It addresses several bugs:
Additionally, --batch mode is now disabled by default for dolt sql
when piping SQL scripts to the dolt
command. Use --batch
to enable it when running SQL import scripts. Also new is the --file
argument to dolt sql
, useful for environments where it is difficult to redirect STDIN.
- Format_7_18 is still used in ld in some cases.
- dolt sql by default.
- Batch mode for dolt sql has issues in when it flushes to disk, which make it incompatible with some SQL scripts. It's now only enabled when asked for explicitly.
- The --file argument to dolt sql, useful for environments where it's hard to redirect STDIN.

Introduces durable.Index and attempts to refactor pkg sqle to use durable.Index in place of types.Map. This is a major refactor in the SQL engine with the aim of supporting both the existing storage format and the new-and-improved™ storage format. As a measure of progress, roughly 70% of the engine tests are passing against sqle while using the new format.

- val, in order to support the SQL types/encodings needed for sysbench. Some of these encodings are experimental and will change in the future.

qualifyColumns
rule was erroring, but without passthrough projections, binary expressions like Arithmetic(Sum(x.i), y.i) will fail at execution time without a full set of input dependencies.

select sum(x.i) + y.i from mytable as x, mytable as y where x.i = y.i GROUP BY x.i
We correctly identify that the GroupBy node has one primary aggregation, Sum, and project the Arithmetic separately.
GroupBy -> Sum(x.i) + y.i -> ... TableScan (x,y)
=>
Project (Arithmetic(sum(x.i) + (y.i)))-> GroupBy(Sum(x.i)) -> ...
The Project node fails downstream trying to look up the y.i dependency we discarded in the transform. This PR adds dependencies back to cover the new parent Project for GroupBy flattening.
GroupBy -> Sum(x.i) + y.i -> ... TableScan (x,y)
=>
Project (Arithmetic(sum(x.i) + (y.i)))-> GroupBy(Sum(x.i), y.i) -> ...
st_asgeojson
and st_geomfromgeojson
Published by github-actions[bot] over 2 years ago
This is a patch release. It has some small bug fixes and feature releases:
When GetTypeConverter was added, Parse was deprecated. This PR now fully removes it. In addition, a few bugs regarding type conversions were fixed, a new test enforcing the inclusion of all types in GetTypeConverter was added, and a few more comments were added to clarify the different string type implementations.

- dump_docs doesn't dump the docs for verify constraints, which means they're not visible in the online documentation. Hopefully this fixes that.
- Handled an err variable in the store/datas package.
- invalid type: INT when selecting from a view.
- JSON_ARRAY() function.

Published by github-actions[bot] over 2 years ago
This is a patch release. It adds better logging for sql-server.
prolly/Node.go. Serialization overhead for a leaf node with 200 key-value tuple pairs:

key:         (int32)
value:       (int32, int32, int32)
data         3200 = 800 + 2400
offsets       796 = sizeof(uint16) * (199 + 199)
metadata       11 = TupleFormat * 2, tree_count, tree_level
flatbuffers    65 = (1.6% overhead)
total size   4072
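The breakdown above can be sanity-checked: 200 four-byte keys plus 200 twelve-byte values, and two uint16 offset arrays of 199 entries each:

```go
package main

import "fmt"

func main() {
	const pairs = 200
	data := pairs*4 + pairs*12     // 800 + 2400 = 3200 bytes of tuples
	offsets := 2 * 2 * (pairs - 1) // sizeof(uint16) * (199 + 199) = 796
	metadata := 11                 // TupleFormat * 2, tree_count, tree_level
	flatbuffers := 65              // framing overhead (~1.6%)
	fmt.Println(data, offsets, data+offsets+metadata+flatbuffers)
}
```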
This overhead estimate is after "externalizing" the format for tuples, meaning that key/value tuples in Prolly tree nodes are stored in raw byte buffers and cannot be parsed/deserialized with Flatbuffers alone. Encoding each tuple in an individual Flatbuffers struct or table would be prohibitively expensive in both space and time.

Published by github-actions[bot] almost 3 years ago
Published by github-actions[bot] almost 3 years ago
store/type
and its subpackage edits
.select count(primary_key) from table
. We could simplify group by iter/its aggregation functions now that it's just a for loop off a child iter.

GroupBy window framing refactor: GroupBy refactor using framing (frame = partition). Aggregations will have two execution paths while I am swapping out the Window execution layer:

- For GroupBy nodes, agg functions will use window frame setup.
- For Window nodes, agg functions will continue to use AggregationBuffer.

The aggregation package is a bit of a mess right now. I will delete all of the old aggregation and window code during the window refactor.

windowBlockIterator, which should drop-in work for regular windows.

st_srid
function: https://dev.mysql.com/doc/refman/8.0/en/gis-general-property-functions.html#function_st-srid.st_asbinary
st_aswkb
st_pointfromwkb
st_linefromwkb
st_polyfromwkb
st_geomfromwkb
point_from_text
line_from_text
poly_from_text
geom_from_text
as_text
as_wkt
This adds enough for the CREATE USER statement to work, in that we have an in-memory representation of the grant tables, which may be accessed either directly (by the mysql database) or through the user statements (of which we only have CREATE USER for now).

Published by github-actions[bot] almost 3 years ago
- autocommit is always turned on by default for incoming connections, regardless of the setting in the config.yaml file or the flag --no-auto-commit being passed at the command line.
- NomsBinFormat for the new storage format.
- durable.Index interface for types.Map.

Introduces the durable.Index interface to abstract types.Map out of primary and secondary indexes. Currently, this interface has limited functionality, and in almost all cases will need to be unwrapped to be useful. This is the first step in a large refactor to decouple Index logic from a Noms-specific implementation.
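The wrap/unwrap pattern described can be illustrated with a toy sketch (all names here are illustrative; the real durable.Index wraps a types.Map, not a Go map):

```go
package main

import "fmt"

// Index is a toy stand-in for an interface like durable.Index: it hides the
// concrete storage representation behind a small method set.
type Index interface {
	Count() uint64
}

// nomsIndex is a hypothetical wrapper over the existing map-based storage.
type nomsIndex struct {
	rows map[string]int
}

func (n nomsIndex) Count() uint64 { return uint64(len(n.rows)) }

// unwrap recovers the concrete representation; until the interface grows more
// functionality, most call sites would need something like this.
func unwrap(i Index) map[string]int {
	return i.(nomsIndex).rows
}

func main() {
	var idx Index = nomsIndex{rows: map[string]int{"a": 1, "b": 2}}
	fmt.Println(idx.Count())      // 2
	fmt.Println(unwrap(idx)["a"]) // 1
}
```

Decoupling call sites from the concrete type first, then widening the interface, is what lets a second storage format slot in behind the same Index abstraction later.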