Chainlink: node of the decentralized oracle network, bridging on-chain and off-chain computation
Published by tyrion70 over 3 years ago
If a CLI command is issued after the session has expired and an API credentials file is found, auto-login should now work.
GasUpdater now works on RSK and xDai
Offchain reporting jobs that have had a latest round requested can now be deleted from the UI without error
Added the ETH_GAS_LIMIT_MULTIPLIER configuration option. The gas limit is multiplied by this value before transmission, so a value of 1.1 adds 10% to the on-chain gas limit when a transaction is submitted.
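As a sketch (the value is illustrative, not a recommendation), a 10% safety buffer could be configured like this:

```shell
# Hypothetical .env fragment: every estimated gas limit is multiplied by 1.1
# before transmission, e.g. an estimate of 500000 would be submitted as 550000.
ETH_GAS_LIMIT_MULTIPLIER=1.1
```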
Added the ETH_MIN_GAS_PRICE_WEI configuration option. This defaults to 1 Gwei on mainnet. Chainlink will never send a transaction at a price lower than this value.
Added chainlink node db migrate for running database migrations. It is recommended to use this and set MIGRATE_DATABASE=false if you want to run the migrations separately, outside of application startup.
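A possible deploy-time flow under these settings (a sketch, not a prescribed procedure):

```shell
# Hypothetical deploy step: apply migrations explicitly, then start the node
# with automatic migration disabled.
export MIGRATE_DATABASE=false
chainlink node db migrate   # run any pending database migrations
# ...then start the node as usual; with MIGRATE_DATABASE=false it will
# skip running migrations at startup.
```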
Chainlink now automatically cleans up old eth_txes to reduce database size. By default, any eth_txes older than a week are pruned on a regular basis. It is recommended to use the default value, but it can be overridden by setting the ETH_TX_REAPER_THRESHOLD env var, e.g. ETH_TX_REAPER_THRESHOLD=24h. The reaper can be disabled entirely by setting ETH_TX_REAPER_THRESHOLD=0. The reaper runs on startup and again every hour (the interval is configurable via ETH_TX_REAPER_INTERVAL).
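Putting the reaper settings together (durations are illustrative):

```shell
# Hypothetical .env fragment: keep only 24 hours of eth_txes and
# run the reaper every 2 hours instead of the hourly default.
ETH_TX_REAPER_THRESHOLD=24h
ETH_TX_REAPER_INTERVAL=2h
# Or disable reaping entirely:
# ETH_TX_REAPER_THRESHOLD=0
```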
Heads corresponding to new blocks are now delivered in a sampled way to improve node performance on fast chains. The sampling interval defaults to 1 second and can be changed by setting the ETH_HEAD_TRACKER_SAMPLING_INTERVAL env var, e.g. ETH_HEAD_TRACKER_SAMPLING_INTERVAL=5s.
Database backups: the default directory is now a 'backup' subdirectory of the Chainlink root dir, and can be changed to any chosen directory by setting a new configuration value: DATABASE_BACKUP_DIR
Published by tyrion70 over 3 years ago
Added MockOracle.sol for testing contracts.
New CLI command to convert v1 flux monitor jobs (JSON) to
v2 flux monitor jobs (TOML). Running it will archive the v1
job and create a new v2 job. Example:
// Get v1 job ID:
chainlink job_specs list
// Migrate it to v2:
chainlink jobs migrate fe279ed9c36f4eef9dc1bdb7bef21264
// To undo the migration:
1. Archive the v2 job in the UI
2. Unarchive the v1 job manually in the db:
update job_specs set deleted_at = null where id = 'fe279ed9-c36f-4eef-9dc1-bdb7bef21264'
Improved support for the Optimism chain. Added a new boolean OPTIMISM_GAS_FEES configuration variable, which makes a call to estimate gas before all transactions; suitable for use with Optimism's L2 chain. When this option is used, ETH_GAS_LIMIT_DEFAULT is ignored.
Chainlink now supports routing certain calls to the eth node over HTTP instead of websocket, when available. This has a number of advantages: HTTP is more robust and simpler than websockets, reducing complexity and allowing large queries without the risk of hitting websocket send limits. The HTTP URL should point to the same node as ETH_URL and can be specified with an env var like so: ETH_HTTP_URL=https://my.ethereumnode.example/endpoint.
Adding an HTTP endpoint is particularly recommended for BSC, which is hitting websocket limitations on certain queries due to its large block size.
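A minimal sketch of the paired settings (the hostname and paths are placeholders; both URLs must point at the same eth node):

```shell
# Hypothetical .env fragment: websocket for subscriptions, HTTP for large queries.
ETH_URL=wss://my.ethereumnode.example/ws
ETH_HTTP_URL=https://my.ethereumnode.example/endpoint
```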
Published by tyrion70 over 3 years ago
Added MockOracle.sol for testing contracts.
Added a new cron job type. Example job spec:
type = "cron"
schemaVersion = 1
schedule = "*/10 * * * *"
observationSource = """
ds [type=http method=GET url="http://example.com"];
ds_parse [type=jsonparse path="data"];
ds -> ds_parse;
"""
JOB_PIPELINE_REAPER_THRESHOLD has been reduced from 1 week to 1 day to save database space. This variable controls how long past job run history for OCR is kept. To keep the old behaviour, you can set JOB_PIPELINE_REAPER_THRESHOLD=168h.
JOB_PIPELINE_PARALLELISM is no longer supported.
TaskRuns are no longer stored in success cases. This reduces database load.
Added the GAS_UPDATER_BATCH_SIZE option to work around websocket: read limit exceeded issues on BSC.
Basic support for Optimism chain: node no longer gets stuck with 'nonce too low' error if connection is lost
Published by tyrion70 over 3 years ago
VRF jobs now support an optional coordinatorAddress field that, when present, tells the node to check the fulfillment status of any VRF request before attempting the fulfillment transaction. This assists in the effort to run multiple nodes with one VRF key.
Experimental: added DATABASE_BACKUP_MODE, DATABASE_BACKUP_FREQUENCY and DATABASE_BACKUP_URL configuration variables.
It is now possible to configure database backups to run on node start and, separately, at a given frequency. DATABASE_BACKUP_MODE enables the initial backup on node start (with one of the values: none, lite, full, where lite excludes potentially large tables related to job runs, among others). Additionally, if the DATABASE_BACKUP_FREQUENCY variable is set to a duration of at least '1m', it enables periodic backups. DATABASE_BACKUP_URL can optionally be set to point to e.g. a database replica, in order to avoid excessive load on the main one.
Example settings:
DATABASE_BACKUP_MODE="full" with DATABASE_BACKUP_FREQUENCY not set will run a full backup only at node start.
DATABASE_BACKUP_MODE="lite" with DATABASE_BACKUP_FREQUENCY="1h" will run a partial backup on node start and then a partial backup every hour.
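A combined sketch of the backup settings (all values, the replica URL, and the directory path are illustrative):

```shell
# Hypothetical .env fragment: partial backup at start and every 24 hours,
# read from a replica to spare the primary database.
DATABASE_BACKUP_MODE=lite
DATABASE_BACKUP_FREQUENCY=24h
DATABASE_BACKUP_URL=postgresql://user:pass@replica.example:5432/chainlink
DATABASE_BACKUP_DIR=/var/backups/chainlink
```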
Periodic resending can be controlled using the ETH_TX_RESEND_AFTER_THRESHOLD env var (default 30s). Unconfirmed transactions will be resent periodically at this interval. It is recommended to leave this at the default setting, but it can be set to any valid duration or to 0 to disable periodic resending.
Chainlink node now automatically sets the correct nonce on startup if you are restoring from a previous backup (manual setnextnonce is no longer necessary).
Flux monitor jobs should now work correctly with outlier-detection and market-closure external adapters.
Performance improvements to OCR job adds. Removed the pipeline_task_specs table and added a new column dot_id to the pipeline_task_runs table, which links a pipeline_task_run to a dotID in pipeline_spec.dot_dag_source.
Fixed bug where node will occasionally submit an invalid OCR transmission which reverts with "address not authorized to sign".
Fixed bug where a node would sometimes double-submit on runlog jobs, causing reverted transactions on-chain.
Published by tyrion70 over 3 years ago
Added the ADMIN_CREDENTIALS_FILE configuration variable. It defaults to $ROOT/apicredentials; when it is defined and the file exists, any CLI command that requires authentication will use it to automatically log in.
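A minimal sketch of creating such a file, assuming the two-line email/password layout (the credentials are placeholders):

```shell
# Hypothetical setup: $ROOT is the chainlink root dir; assumed layout is the
# API email on line one and the password on line two.
cat > "$ROOT/apicredentials" <<EOF
user@example.com
yourapipassword
EOF
chmod 600 "$ROOT/apicredentials"   # keep credentials readable only by the node user
```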
Added the ETH_MAX_UNCONFIRMED_TRANSACTIONS configuration variable. The Chainlink node now has a maximum number of unconfirmed transactions that may be in flight at any one time (per key). If this limit is reached, further attempts to send transactions will fail and the relevant job will be marked as failed. Jobs will continue to fail until at least one transaction is confirmed and the queue size is reduced. This is introduced as a sanity limit to prevent unbounded sending of transactions, e.g. in the case that the eth node is failing to broadcast to the network. The default is 500, which is considered high enough that it should never be reached under normal operation. The limit can be changed by setting the ETH_MAX_UNCONFIRMED_TRANSACTIONS environment variable.
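For example, a stricter cap could be configured like this (the value is illustrative, not a recommendation):

```shell
# Hypothetical .env fragment: fail jobs once 150 unconfirmed transactions
# per key are in flight, instead of the default 500.
ETH_MAX_UNCONFIRMED_TRANSACTIONS=150
```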
requestNewRound enables dedicated requesters to request a fresh report to
be sent to the contract right away regardless of heartbeat or deviation.
Added a new metric, head_tracker_eth_connection_errors: "The total number of eth node connection errors".
Gas bumping can now be disabled by setting ETH_GAS_BUMP_THRESHOLD=0
Support for Arbitrum
The node will now fatally error jobs if the total transaction cost exceeds the configured cap (default 1 ETH). It will also no longer continue to bump gas on transactions that start hitting this limit, and will instead continue to resubmit at the highest price that worked. Node operators should check their geth nodes and remove this cap if configured; you can do this by running your geth node with --rpc.gascap=0 --rpc.txfeecap=0 or setting these values in your config toml.
Make head backfill asynchronous. This should eliminate some harmless but
annoying errors related to backfilling heads, logged on startup and
occasionally during normal operation on fast chains like Kovan.
Improvements to the GasUpdater
Various efficiency and correctness improvements have been made to the
GasUpdater. It places less load on the ethereum node and now features re-org
detection.
Most notably, GasUpdater no longer takes a 24 block delay to "warm up" on
application start and instead loads all relevant block history immediately.
This means that the application gas price will always be updated correctly
after reboot before the first transaction is ever sent, eliminating the previous
scenario where the node could send underpriced or overpriced transactions for a
period after a reboot, until the gas updater caught up.
Increased the ORM_MAX_OPEN_CONNS default from 10 to 20 and the ORM_MAX_IDLE_CONNS default from 5 to 10. Each Chainlink node will now use a maximum of 23 database connections (up from the previous max of 13). Make sure your Postgres database is tuned accordingly, especially if you are running multiple Chainlink nodes on a single database. If you find yourself hitting connection limits, you can consider reducing ORM_MAX_OPEN_CONNS, but this may result in degraded performance.
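A quick back-of-the-envelope check for Postgres sizing, assuming three nodes share one database:

```shell
# Each node may use up to 23 DB connections, so with 3 nodes sharing one
# database, Postgres max_connections must comfortably exceed this total:
NODES=3
echo $((23 * NODES))   # prints 69
```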
JOB_PIPELINE_MAX_TASK_DURATION is no longer supported.
Published by tyrion70 over 3 years ago
Published by se3000 over 3 years ago
Published by tyrion70 almost 4 years ago
chainlink keys eth import
chainlink keys eth export
chainlink keys eth delete
Fixed "Job spawner ORM attempted to claim locally-claimed job" warnings.
Key endpoints are now located under /v2/keys/... and are standardized across key types. Set P2P_PEER_ID to indicate which key to use.
DATABASE_TIMEOUT is now set to 0 by default, so that nodes will wait forever for a lock. If you already have DATABASE_TIMEOUT=0 set explicitly in your env (most node operators), you don't need to do anything. If you didn't have it set and you want to keep the old default behaviour, where a node exits shortly if it can't get a lock, you can manually set DATABASE_TIMEOUT=500ms in your env.
Published by tyrion70 almost 4 years ago
Published by tyrion70 almost 4 years ago
MonitoringEndpoint
Added a /keys page.
Added the /runs tab back to the operator UI.
Removed the ACCOUNT_ADDRESS field from the /config page.
Renamed CLI commands:
jobs archive => job_specs archive
jobs create => job_specs create
jobs list => job_specs list
jobs show => job_specs show
jobs createocr => jobs create
jobs deletev2 => jobs delete
jobs run => jobs run
Published by tyrion70 almost 4 years ago
Numerous key-related UX improvements. Key management commands are now grouped under the chainlink keys subcommand:
chainlink createextrakey
=> chainlink keys eth create
chainlink admin info
=> chainlink keys eth list
chainlink node p2p [create|list|delete]
=> chainlink keys p2p [create|list|delete]
chainlink node ocr [create|list|delete]
=> chainlink keys ocr [create|list|delete]
chainlink node vrf [create|list|delete]
=> chainlink keys vrf [create|list|delete]
To permanently delete a key, pass the --hard flag to the delete command, e.g. chainlink keys p2p delete --hard 6. Confirmation prompts can be skipped with --yes or -y.
The --ocrpassword flag has been removed. OCR/P2P keys now share the same password as the ETH key (i.e., the password specified with the --password flag).
Added P2P_ANNOUNCE_IP and P2P_ANNOUNCE_PORT, which allow node operators to override locally detected values for the Chainlink node's externally reachable IP/port.
OCR_LISTEN_IP and OCR_LISTEN_PORT have been renamed to P2P_LISTEN_IP and P2P_LISTEN_PORT for consistency.
Published by tyrion70 almost 4 years ago
Published by tyrion70 almost 4 years ago
Added a new command, node hard-reset, which is used to remove all state for unstarted and pending job runs from the database.
Published by tyrion70 about 4 years ago
Chainlink now supports the ETH_SECONDARY_URL option (i.e. concurrent transaction submission to multiple different eth nodes). This also comes with some minor performance improvements in the tx manager and more correct handling of some extremely rare edge cases.
Published by tyrion70 about 4 years ago