A Go daemon that syncs MongoDB to Elasticsearch in realtime. You know, for search.
MIT License
Published by rwynn almost 6 years ago
Fixes an issue when a relate config is used and a golang plugin implements Process.
Fixes an issue with workers where only one worker would be used for change documents.
Published by rwynn about 6 years ago
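For reference, the workers feature mentioned above spreads the load across several cooperating monstache processes. A minimal sketch of the config (the exact key name and the convention of starting each process with a -worker flag naming its entry are recalled from the monstache docs, so verify against the documentation):

```toml
# names for the cooperating monstache processes; each process is started
# with -worker <name> and handles the documents hashed to its name
workers = ["a", "b", "c"]
```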
Adds a relate config to declare dependencies between collections.
Published by rwynn about 6 years ago
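A sketch of what such a relate entry might look like. The key names below (with-namespace, src-field, match-field, keep-src) and the example namespaces are assumptions drawn from memory of the monstache docs, not verified against this release:

```toml
[[relate]]
# when a document in test.users changes...
namespace = "test.users"
# ...re-sync the related documents in test.comments
with-namespace = "test.comments"
# join test.users._id to test.comments.user-id (hypothetical fields)
src-field = "_id"
match-field = "user-id"
# also index the test.users document itself
keep-src = true
```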
[mongo-dial-settings]
timeout=15
read-timeout=0
write-timeout=0
[mongo-session-settings]
socket-timeout=0
sync-timeout=0
Published by rwynn about 6 years ago
[mongo-dial-settings]
timeout=10
read-timeout=600
write-timeout=30
[mongo-session-settings]
socket-timeout=600
sync-timeout=600
Published by rwynn about 6 years ago
This is a big release with support for change streams and aggregation pipelines!

monstache can now exit once direct reads have completed when -exit-after-direct-reads is enabled. Change streams are configured with the new change-stream-namespaces option.

Plugin functions Process and Pipeline have been added alongside the existing Map and Filter functions. The Process function allows one to code complex processing after an event. It has access to the MongoDB session, the Elasticsearch client, the Elasticsearch bulk processor, and information about the change that occurred (insert, update, delete). The Pipeline function allows one to assign MongoDB pipeline stages to both direct reads and change streams. Since the pipeline stages may differ between direct reads and change streams, the function is passed a boolean indicating the source of the data. For example, a $match clause on the change stream may need to reference the fullDocument field, since the root will be the change event. For direct reads the root will simply be the full document.

A new pipeline config section allows one to create aggregation pipelines in javascript for direct reads and change streams. This can be used instead of the Pipeline function in a golang plugin. The exported function in javascript takes a namespace and a boolean indicating whether or not the source was a change stream. The function should return an array of pipeline stages to apply.

A new pipe-allow-disk option, which when enabled allows large pipelines to use the disk to save intermediate results.

Scripts gain a new function named pipe. The pipe function is similar to the existing find function but takes an array of aggregation pipeline stages as the first argument.

Example configuration:

direct-read-namespaces = [test.test]
change-stream-namespaces = [test.test]
[[pipeline]]
script = """
module.exports = function(ns, changeStream) {
  if (changeStream) {
    return [
      { $match: {"fullDocument.foo": 1} }
    ];
  } else {
    return [
      { $match: {"foo": 1} }
    ];
  }
}
"""
[[script]]
namespace = "test.test"
script = """
module.exports = function(doc, ns) {
  doc.extra = pipe([
    { $match: {foo: 1} },
    { $limit: 1 },
    { $project: { _id: 0, foo: 1 } }
  ]);
  return doc;
}
"""
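In a golang plugin, the same source-dependent stage selection shown in the javascript above could be expressed as below. This is a self-contained sketch of the logic only; the function name and signature are illustrative assumptions, not the exact plugin API, which lives in the monstache plugin docs.

```go
package main

import "fmt"

// stage is one aggregation pipeline stage.
type stage = map[string]interface{}

// pipelineFor mirrors what a Pipeline plugin function would return:
// stages chosen by whether the source is a change stream. On a change
// stream the root is the change event, so matches go through fullDocument;
// on direct reads the root is the document itself.
func pipelineFor(ns string, changeStream bool) []stage {
	if changeStream {
		return []stage{{"$match": stage{"fullDocument.foo": 1}}}
	}
	return []stage{{"$match": stage{"foo": 1}}}
}

func main() {
	fmt.Println(pipelineFor("test.test", true))
	fmt.Println(pipelineFor("test.test", false))
}
```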
Published by rwynn about 6 years ago
Published by rwynn over 6 years ago
index-as-update: a boolean config option that allows merge instead of replace.
find and findOne functions are now available in scripts.