Tyk Open Source API Gateway written in Go, supporting REST, GraphQL, TCP and gRPC protocols
OTHER License
Published by lonelycode almost 9 years ago
This is a drop-in replacement: you should be able to either just switch the binaries or update the package (make sure to back up your configurations!)
Setting analytics_config.enable_detailed_recording
to true adds two new fields to analytics data: rawRequest and rawResponse. These are in wire format and are NOT anonymised. This adds processing overhead per request, so it could degrade throughput.
Published by lonelycode almost 9 years ago
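For reference, the flag sits under the analytics_config section of tyk.conf, per the dotted name above; a minimal sketch:

```
"analytics_config": {
    "enable_detailed_recording": true
}
```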
In version 1.9 we have focused extensively on two things: Improved and expanded data and ease of deployment.
Tyk is already pretty easy to deploy, being a single binary that can be dropped into a system and run right there and then without any compilation, interpreters or dependencies.
We've been speaking to our clients' DevOps teams, and one thing they particularly enjoy seeing is a secure, effective and reliable pattern for deploying third-party applications to their systems.
The other thing we've heard a lot of feedback about is the host manager, and how having NginX as a dependency is limiting as it's "another moving part".
What have we done to address these things?
We got rid of the host manager. Tyk no longer needs the host manager in order to route domains to their underlying services or portals. As of v1.9 you can configure a domain for:
All from within the dashboard or the configuration files.
And if you are running the full Tyk stack on a single instance, then we've made it easy for users to use Tyk the same way we used to use NginX - by having the Tyk nodes proxy the domain for the portal to the relevant organisation portal pages just like any other API.
We've standardised our deployment packages. As of v1.9, Tyk ships as DEB and signed RPM packages, provided to end-users via our GPG-signed package repository. This means that you can use APT or YUM to install Tyk and Tyk Dashboard on your servers in a repeatable, industry-standard way.
We've also gone a step further and provided init scripts for Upstart, SysV and Systemd, which means starting and stopping Tyk is as simple as sudo service tyk-gateway start|stop|restart|status.
We think that these two changes make it much easier for you to install, set up, manage and deploy Tyk to any Linux distribution. We have signed repositories for Ubuntu LTS releases, Red Hat Enterprise Linux 6 and 7, and Debian Jessie.
We'll still provide tarballs for manually installing Tyk on our GitHub Repo page, but we encourage users to use our package repositories to install Tyk on supported systems.
This version of Tyk introduces a new feature: Uptime Awareness. With this feature, your Tyk nodes actively poll your endpoints with specific uptime tests. Over time, Tyk collects analytics on latency, errors and overall availability, providing a granular view in your dashboard to dig deeper into failures and issues.
We've made this feature as flexible as possible, enabling you to configure these tests dynamically using Service Discovery tools such as etcd or consul, while also making it possible to hook up "Host Up" and "Host Down" events to webhooks or custom javascript applications to interact with, and react to, any incidents in your infrastructure.
When enabled, Tyk can integrate this feature with its round-robin load balancing to remove unhealthy hosts from circulation until they come back online.
We've overhauled the dashboard UX, making it more robust and a little faster and easier to use. The biggest change is in how we render the graphs, which we hope you enjoy.
We've spent a lot of time fixing bugs, improving logger output and overall trying to make things more robust, performant and better.
As always, we're open to feedback on our Github repo, or in our Community forum.
Gateway Mongo Driver updated to be compatible with MongoDB v3.0
Fixed OAuth client listings with redis cluster
Some latency improvements
Key detection now checks a local in-memory cache before reaching out to Redis. Keys are cached for 10 seconds with a 5-second purge rate (so a maximum key existence of 15s). Policies will still take instant effect on keys
The key session cache is configurable: set local_session_cache.cached_session_timeout (default 10) and local_session_cache.cached_session_eviction (default 5) to the cache TTL and eviction scan times
The key session cache can be disabled via local_session_cache.disable_cached_session_state
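Put together, a tyk.conf fragment using these options (values shown are the defaults described above) might look like:

```
"local_session_cache": {
    "disable_cached_session_state": false,
    "cached_session_timeout": 10,
    "cached_session_eviction": 5
}
```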
Test update to reduce number of errors, cleaner output
Healthcheck data now stored in a sorted set, much cleaner and faster, now works with redis cluster!
Bug fixed: Empty or invalid listen path no longer crashes proxy
Bug fixed: Basic Auth (and OAuth BA) passwords are now hashed; this is backward compatible, and plaintext passwords will still work
OAuth access token expiry can now be set (in seconds) in the tyk.conf file using oauth_token_expire: 3600
Proxy now records accurate status codes for upstream requests for better error reporting
Added refresh token invalidation API: DELETE /tyk/oauth/refresh/{key}?api_id={api_id}
Global header injection now works; it can be enabled on a per-version basis by adding global_headers: {"header_name": "header value"} to the version object in the API Definition. Global injections also support key metadata variables.
Global header deletion now works: add "global_headers_remove": ["header_name", "header_name"] to your version object
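Combining both, a version object might look like this (a sketch; the surrounding version fields, header names and the uid metadata key are illustrative):

```
"Default": {
    "name": "Default",
    "global_headers": {
        "X-Static-Header": "foo",
        "X-Key-Meta": "$tyk_meta.uid"
    },
    "global_headers_remove": ["X-Legacy-Header"]
}
```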
Added a request size limiter. The request size limiter middleware requires Content-Length to be set, and checks first against the Content-Length value and then against the actual request size. To implement, add this to your version info:
"size_limits": [
{
"path": "widget/id",
"method": "PUT",
"size_limit": 25
}
]
Request size limits can also be enforced globally; these are checked first. To implement, add "global_size_limit": 30 to your version data.
Adding a key_expires_in: seconds property to a policy definition will cause any key that is created or added using this policy to have a finite lifetime; it will expire at now() + key_expires_in seconds. Handy for free trials.
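A sketch of a trial policy using this (the other fields follow the policy format shown later in these notes; all values are illustrative, 604800 being seven days):

```
"free-trial": {
    "rate": 100,
    "per": 60,
    "quota_max": 1000,
    "quota_renewal_rate": 3600,
    "key_expires_in": 604800
}
```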
Dependency update (logrus)
Added support for JSON Web Tokens (JWT); currently HMAC signing and RSA public/private key signing are supported. To enable JWT on an API, add "enable_jwt": true to your API Definition, then set your tokens up with these new fields when you create them:
"jwt_data": {
"secret": "Secret"
}
HMAC JWT secrets can be any string, but the secret is shared. RSA secrets must be a PEM-encoded PKCS1 or PKCS8 RSA private key; these can be generated on a Linux box using:
openssl genrsa -out key.rsa
openssl rsa -in key.rsa -pubout > key.rsa.pub
Tyk JWTs MUST use the "kid" header attribute, as this is the internal access token (created when you create a key) that is used to set the rate limits, policies and quotas for the user. The benefit here is that if RSA is used, then all that is stored in a Tyk installation that uses hashed keys is the hashed ID of the end user and their public key, so it is very secure.
Fixed OAuth Password flow bug where a user could generate more than one token for the same API
Added realtime uptime monitoring. Uptime monitoring means you can create a series of check requests for your upstream hosts (they do not need to be the same as the APIs being managed) and have the gateway poll them for uptime. If a host goes down (non-200 code or TCP error) then an event is fired (HostDown); when it goes back up again another event is fired (HostUp). This can be combined with the webhook feature for realtime alerts
Realtime monitoring also records statistics to the database so they can be analysed or graphed later
Real time monitoring can also be hooked into the load balancer to have the load balancer skip bad hosts for dynamic configuration
When hosts go up and down, sentinels are activated in Redis so all nodes in a Tyk cluster can benefit
Only one Tyk node will ever do the polling, they use a rudimentary capture-the-flag redis key to identify who is the uptime tester
Monitoring can also be disabled if you want a non-active node to manage uptime tests and analytics purging
The uptime test list can be refreshed live by hot-reloading Tyk
Active monitoring can be used together with Circuit breaker to have the circuit breaker manage failing methods, while the uptime test can take a whole host offline if it becomes unresponsive
To configure uptime tests, in your tyk.conf:
"uptime_tests": {
"disable": false, // disable uptime tests on the node completely
"config": {
"enable_uptime_analytics": true,
"failure_trigger_sample_size": 1,
"time_wait": 5,
"checker_pool_size": 50
}
}
Check lists usually sit with API configurations, so in your API Definition:
uptime_tests: {
check_list: [
{
"url": "http://google.com:3000/"
},
{
"url": "http://posttestserver.com/post.php?dir=tyk-checker-target-test&beep=boop",
"method": "POST",
"headers": {
"this": "that",
"more": "beans"
},
"body": "VEhJUyBJUyBBIEJPRFkgT0JKRUNUIFRFWFQNCg0KTW9yZSBzdHVmZiBoZXJl"
}
]
},
The body is base64 encoded in the second example; the first example will perform a simple GET. NOTE: the simplified form will not enforce a timeout, while the more verbose form will fail with a 500ms timeout.
Uptime tests can be configured from a service (e.g. etcd or consul), simply set this up in the API Definition (this is etcd):
"uptime_tests": {
"check_list": [],
"config": {
"recheck_wait": 12,
"service_discovery": {
"use_discovery_service": true,
"query_endpoint": "http://127.0.0.1:4001/v2/keys/uptimeTest",
"data_path": "node.value"
}
}
},
Uptime tests by service discovery will load initially from the endpoint, it will not re-poll the service until it detects an error, at which point it will schedule a reload of the endpoint data. If used in conjunction with upstream target service discovery it enables dynamic reconfiguring (and monitoring) of services.
The document that Tyk requires is a JSON string encoded version of the check_list parameter of the uptime_tests field. For etcd:
curl -L http://127.0.0.1:4001/v2/keys/uptimeTest -XPUT -d value='[{"url": "http://domain.com:3000/"}]'
Fixed a bug where incorrect version data would be recorded in analytics for APIs that use the first URL parameter as the version (domain.com/v1/blah)
Added domain name support (removes the requirement for the host manager). The main Tyk instance can have a hostname (e.g. apis.domain.com), and API Definitions can support their own domains (e.g. mycoolservice.com). Multiple API definitions can have the same domain name so long as their listen_paths do not clash, so you can serve API 1 on mycoolservice.com/api1 and API 2 on mycoolservice.com/api2 by setting the listen_path for API 1 and API 2 respectively.
Domains are loaded dynamically and strictly matched, so calls to a listen path or API ID on the main Tyk hostname will not work for APIs that have custom domain names set; this means services can be cleanly segregated.
If the hostname is blank, then the router is open and anything will be matched (if you are using host manager, this is the option you want as it leaves domain routing up to NginX downstream)
Set up the main Tyk instance hostname by adding "hostname": "domain.com" to the config
Enable custom API-specific domains by setting enable_custom_domains in the tyk.conf to true
Make an API use a custom domain by adding a domain element to the root object
Custom domains will work with your SSL certs
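Pulling the domain options together, a sketch (the domain names are illustrative):

```
// tyk.conf
"hostname": "apis.domain.com",
"enable_custom_domains": true

// API Definition root object
"domain": "mycoolservice.com"
```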
Refactored the API loader so that it uses pointers all the way down; this lessens the amount of data that needs copying in RAM (will only really affect systems running 500+ APIs)
JSVM is now disabled by default; if you are not using JS middleware, you can reduce Tyk's footprint significantly by not enabling it. To re-enable, set "enable_jsvm": true in tyk.conf
Fixed CORS so that if OPTIONS passthrough is enabled an upstream server can handle all pre-flight requests without any Tyk middleware intervening
Dashboard config requires a home_dir field in order to work outside of its home dir
Added option to segregate the control API from the front-end: set enable_api_segregation to true and then add the hostname to control_api_hostname
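In tyk.conf this would look something like (the hostname is illustrative):

```
"enable_api_segregation": true,
"control_api_hostname": "control.domain.com"
```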
Published by lonelycode almost 9 years ago
Fixes a bug in the JSVM with concurrently running code
Published by lonelycode about 9 years ago
"storage": {
"type": "redis",
"enable_cluster": true,
"hosts" : {
"server1": "6379",
"server2": "6379",
"server3": "6379"
},
"username": "",
"password": "",
"database": 0,
"optimisation_max_idle": 100
},
A Note on redis cluster support
Redis cluster does not support multi-key operations or scans across key ranges, so the following operations could behave inconsistently: the health-check API, OAuth client listing, and key listing in unhashed setups.
"http_server_options": {
"flush_interval": 1
}
Enabled password grant type in OAuth: update the allowed_access_types array to include password, then request a token from the /oauth/token/ endpoint on your OAuth-enabled API. A successful response looks like:
{"access_token":"4i0VmSYMQ2iN7ivX0LaYBw","expires_in":3600,"refresh_token":"B_99PjEmQquufNWs8QYbow","token_type":"bearer"}
Published by lonelycode about 9 years ago
Dashboard:
To enable SSL, edit tyk.conf to include your certificates:
"http_server_options": {
"use_ssl": true,
"certificates": [
{
"domain_name": "banana.com",
"cert_file": "new.cert.cert",
"key_file": "new.cert.key"
}
]
},
Published by lonelycode about 9 years ago
/listen_path/**VERSION**/resource/id)
Notes:
Set the endpoint_returns_list value to true, or change it in the dashboard; this will treat the requested object as a list.
Published by lonelycode about 9 years ago
This update is fully backwards compatible, however:
This update migrates API Catalogue (portal) entries into a new format. Portal entries used to need to be linked to an API; as of this version, this is no longer the case. Instead, API entries are linked to a policy ID. In turn, key requests are also linked to policies instead of to APIs.
Reasoning: policies give access to multiple APIs, allowing users to package APIs into bundles and tiers. In the previous portal incarnation this was fine so long as you only ever gave access to a single API per portal entry. Realistically this does not make sense.
Old API Catalogue entries and key requests will still work as normal, including links to keys and key data in the developer profile section.
However, all new catalogue entries created in the dashboard will use this new version of key requests.
This update is fully backwards compatible, so existing policies/entries and key requests will work just as before. Only new catalogue entries will be affected.
Published by lonelycode about 9 years ago
Changelog
Published by lonelycode about 9 years ago
Large update to Tyk, major improvements and new features.
New: Dashboard also has FreeBSD versions
Security option added for shared nodes: set disable_virtual_path_blobs=true to stop virtual paths from loading blob fields
Added session meta data variables to transform middleware:
You can reference session metadata attached to a key in the header injector using:
$tyk_meta.KEY_NAME
And in the body transform template through:
._tyk_meta.KEYNAME
You must enable session parsing in the TemplateData of the body transform entry by adding:
"enable_session": true
to the path entry
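For example, assuming a key with a metadata field uid (an illustrative name, not a built-in), a header injection entry and a body transform template could reference it like this (a sketch):

```
// header injection (API definition)
"add_headers": {"x-user-id": "$tyk_meta.uid"}

// body transform template (with "enable_session": true in TemplateData)
{"user_id": "{{._tyk_meta.uid}}"}
```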
Added CORS support; add a CORS section to your API definition:
CORS: {
enable: false,
allowed_origins: [
"http://foo.com"
]
},
Full CORS Options are:
CORS struct {
Enable bool `bson:"enable" json:"enable"`
AllowedOrigins []string `bson:"allowed_origins" json:"allowed_origins"`
AllowedMethods []string `bson:"allowed_methods" json:"allowed_methods"`
AllowedHeaders []string `bson:"allowed_headers" json:"allowed_headers"`
ExposedHeaders []string `bson:"exposed_headers" json:"exposed_headers"`
AllowCredentials bool `bson:"allow_credentials" json:"allow_credentials"`
MaxAge int `bson:"max_age" json:"max_age"`
OptionsPassthrough bool `bson:"options_pasthrough" json:"options_pasthrough"`
Debug bool `bson:"debug" json:"debug"`
} `bson:"CORS" json:"CORS"`
Fixed cache bug
When using node segments, tags will be transferred into analytics data as well as any token-level tags. So, for example, you could tag each node independently and then view the traffic that went through those nodes by ID, or group them in aggregate.
You can now segment gateways that use DB-backed configurations, for example if you have APIs in different regions, or only wish to serve a segment of your APIs (e.g. "Health APIs", "Finance APIs"). So you can have a centralised API registry using the dashboard, tag APIs according to their segment(s), then configure your Tyk nodes to only load those API endpoints: node 1 may only serve health APIs, while node 2 might serve a mixture and node 3 will serve only finance APIs. To enable, simply configure your node and add to tyk.conf and host_manager.conf (if using):
"db_app_conf_options": {
"node_is_segmented": false,
"tags": ["test2"]
}
You will need to add a tags: [] section to your API definition in the DB to enable this feature, or set it in the dashboard.
Dynamic endpoints support response middleware
Dynamic endpoints support caching
Dynamic endpoints also count towards analytics
JSVM now has access to a TykBatchRequest function to make batch requests in virtual paths. Use case: create a virtual endpoint that interacts with multiple upstream APIs, gathers the data, processes the aggregates somehow and returns them as a single body. This can then be cached to save on load.
Added virtual path support: you can now have a JS function respond to a request, which makes mocking much more flexible. TODO: expose batch methods to JSVM. To activate, add to extended paths:
virtual: [
{
response_function_name: "thisTest",
function_source_type: "file",
function_source_uri: "middleware/testVirtual.js",
path: "virtualtest",
method: "GET",
use_session: true
}
]
Virtual endpoint functions are pretty clean:
function thisTest(request, session, config) {
log("Virtual Test running")
log("Request Body: ")
log(request.Body)
log("Session: ")
log(session)
log("Config:")
log(config)
log("param-1:")
log(request.Params["param1"])
var responseObject = {
Body: "THIS IS A VIRTUAL RESPONSE",
Headers: {
"test": "virtual",
"test-2": "virtual"
},
Code: 200
}
return TykJsResponse(responseObject, session.meta_data)
}
log("Virtual Test initialised")
Added refresh tests for OAuth
URL rewrite is in place; you can specify URLs to rewrite in the extended_paths section of the API Definition like so:
"url_rewrites": [
{
"path": "virtual/{wildcard1}/{wildcard2}",
"method": "GET",
"match_pattern": "virtual/(.*)/(\\d+)",
"rewrite_to": "new-path/id/$2/something/$1"
}
]
You can now add a "tags": ["tag1", "tag2", "tag3"] field to token and policy definitions; these tags are transferred through to the analytics record when recorded. They will also be available to dynamic middleware. This means there is more flexibility with key ownership and reporting by segment.
Cleaned up server output; use --debug to see more detailed debug data. Keeps log size down.
TCP Errors now actually raise an error
Added circuit breaker as a path-based option. To enable, add a new section to your version's extended_paths list:
circuit_breakers: [
{
path: "get",
method: "GET",
threshold_percent: 0.5,
samples: 5,
return_to_service_after: 60
}
]
Circuit breakers are individual to a single host; they do not centralise or pool back-end data. This is for speed: in a load-balanced environment where multiple Tyk nodes are used, some traffic can spill through as other nodes reach the sampling rate limit. Adding a Redis counter layer or data store on every request to a service would just add latency.
Circuit breakers use a threshold-breaker pattern: out of a sample size of x requests, if y% fail, the breaker trips.
The circuit breaker works across hosts (i.e. if you have multiple targets for an API, the sample is across all upstream requests)
When a circuit breaker trips, it will fire an event, BreakerTriggered, which you can define actions for in the event_handlers section:
```
event_handlers: {
events: {
BreakerTriggered: [
{
handler_name: "eh_log_handler",
handler_meta: {
prefix: "LOG-HANDLER-PREFIX"
}
},
{
handler_name: "eh_web_hook_handler",
handler_meta: {
method: "POST",
target_path: "http://posttestserver.com/post.php?dir=tyk-event-test",
template_path: "templates/breaker_webhook.json",
header_map: {
"X-Tyk-Test-Header": "Tyk v1.BANANA"
},
event_timeout: 10
}
}
]
}
},
```
Status codes are:
```
// BreakerTripped is sent when a breaker trips
BreakerTripped = 0
// BreakerReset is sent when a breaker resets
BreakerReset = 1
```
Added round-robin load balancing support. To enable, set it up in the API Definition under the proxy section:
...
"enable_load_balancing": true,
"target_list": [
"http://server1",
"http://server2",
"http://server3"
],
...
Added REST-based service discovery for both single and load-balanced entries (tested with etcd, but anything that returns JSON should work). To enable, add a service discovery section to your proxy section:
// Solo
service_discovery : {
use_discovery_service: true,
query_endpoint: "http://127.0.0.1:4001/v2/keys/services/single",
use_nested_query: true,
parent_data_path: "node.value",
data_path: "hostname",
port_data_path: "port",
use_target_list: false,
cache_timeout: 10
},
// With LB
"enable_load_balancing": true,
service_discovery: {
use_discovery_service: true,
query_endpoint: "http://127.0.0.1:4001/v2/keys/services/multiobj",
use_nested_query: true,
parent_data_path: "node.value",
data_path: "array.hostname",
port_data_path: "array.port",
use_target_list: true,
cache_timeout: 10
},
For service discovery, multiple assumptions are made about the data returned. For example, with etcd:
$ curl -L http://127.0.0.1:4001/v2/keys/services/solo
{
"action": "get",
"node": {
"key": "/services/single",
"value": "{\"hostname\": \"http://httpbin.org\", \"port\": \"80\"}",
"modifiedIndex": 6,
"createdIndex": 6
}
}
$ curl -L http://127.0.0.1:4001/v2/keys/services/multiobj
{
"action": "get",
"node": {
"key": "/services/multiobj",
"value": "{\"array\":[{\"hostname\": \"http://httpbin.org\", \"port\": \"80\"},{\"hostname\": \"http://httpbin.org\", \"port\": \"80\"}]}",
"modifiedIndex": 9,
"createdIndex": 9
}
}
Here the key value is actually an encoded JSON string, which needs to be decoded separately to get to the data.
If port_data_path is set, the host and port values will be zipped together and concatenated into a valid proxy string.
Fixed a bug where the version parameter on POST requests would empty the request body; streamlined request copies in general.
It is now possible to use JSVM middleware on Open (Keyless) APIs
It is now possible to configure the timeout parameters around the http server in the tyk.conf file:
"http_server_options": {
"override_defaults": true,
"read_timeout": 10,
"write_timeout": 10
}
It is now possible to set hard timeouts on a path-by-path basis. E.g. if you have a long-running microservice but do not want to hold up a dependent client should a query take too long, you can enforce a timeout for that path so the requesting client is not held up forever (or can manage its own timeout). To do so, add this to the extended_paths section of your API definition:
...
extended_paths: {
...
transform_response_headers: [],
hard_timeouts: [
{
path: "delay/5",
method: "GET",
timeout: 3
}
]
}
...
Published by lonelycode about 9 years ago
This release fixes a bug with OAuth refresh tokens not being valid after consecutive use
Published by lonelycode over 9 years ago
In certain configurations, Tyk can run out of open file descriptors because the Go HTTP server respects the keep-alive header; this can cause problems with high-traffic APIs.
With this hotfix you can add "close_connections": true to your tyk.conf file and Tyk will not keep TCP connections open.
Published by lonelycode over 9 years ago
UPDATE: Dashboard version 0.9.4.5 Hotfix: Large data sets now supported in analytics. Fixes bug where analytics do not show up.
Major release - now with a portal :-)
Added LDAP StorageHandler, enables basic key lookups from an LDAP service
Added Policies feature: you can now define key policies for the keys you generate:
Create a policies/policies.json file
Set the appropriate arguments in tyk.conf file:
"policies": {
"policy_source": "file",
"policy_record_name": "./policies/policies.json"
}
Create a policy, they look like this:
{
"default": {
"rate": 1000,
"per": 1,
"quota_max": 100,
"quota_renewal_rate": 60,
"access_rights": {
"41433797848f41a558c1573d3e55a410": {
"api_name": "My API",
"api_id": "41433797848f41a558c1573d3e55a410",
"versions": [
"Default"
]
}
},
"org_id": "54de205930c55e15bd000001",
"hmac_enabled": false
}
}
Add an apply_policy_id field to your Session object when you create a key, with your policy ID (in this case the ID is default)
Reload Tyk
Policies will be applied to keys when they are loaded from Redis, then updated in Redis so they can be queried if necessary
Policies can invalidate whole keysets by copying over the InActive field; set this to true in a policy and all keys that have the policy set will be refused access.
Added granular path white-list: it is now possible to define at the key level what access permissions a key has. This is a white-list of regex paths that applies to a whole API definition. Granular permissions are applied after the version-based (global) ones in the API definition. These granular permissions take the form of a new allowed_urls field inside the access rights section of either a policy definition or a session object:
{
"default": {
"rate": 1000,
"per": 1,
"quota_max": 100,
"quota_renewal_rate": 60,
"access_rights": {
"41433797848f41a558c1573d3e55a410": {
"api_name": "My API",
"api_id": "41433797848f41a558c1573d3e55a410",
"versions": [
"Default"
],
"allowed_urls": [
{
"url": "/resource/(.*)",
"methods": ["GET", "POST"]
}
]
}
},
"org_id": "54de205930c55e15bd000001",
"hmac_enabled": false
}
}
Added hash_keys config option. Setting this to true will cause Tyk to store all keys in Redis in a hashed representation. This will also obfuscate keys in analytics data, using the hashed representation instead. Webhooks will continue to make the full API key available. This change is not backwards compatible if enabled on an existing installation.
Added cache_options.enable_upstream_cache_control flag to API definitions. Paths must still be listed in the cache section of extended_paths, otherwise the middleware will not activate for the path. Upstream cache control uses two response headers: x-tyk-cache-action-set and x-tyk-cache-action-set-ttl. If the upstream returns x-tyk-cache-action-set set to 1 (or anything non-empty) and upstream control is enabled, Tyk will cache the response. If the upstream also sets x-tyk-cache-action-set-ttl to a numeric value and upstream control is enabled, the cached object will be created for whatever number of seconds this value is set to.
Added auth.use_param option to API Definitions: set to true if you want Tyk to check for the API token in the request parameters instead of the header. It will look for the value set in auth.auth_header_name and is case sensitive.
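An auth section using parameter-based tokens might look like this (a sketch):

```
"auth": {
    "use_param": true,
    "auth_header_name": "authorization"
}
```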
Host manager now supports portal NginX template management; it will generate portal configuration files for NginX on load for each organisation in the DB
Host manager will now gracefully attempt reconnect if Redis goes down
Tyk will now reload on notifications from Redis (dashboard signal) for cluster reloads (see below); a new config option, SuppressRedisSignalReload, will suppress this behaviour (for example, if you are still using the old host manager)
Added new group reload endpoint (for management via LB), sending a GET to /tyk/reload/group will now send a pub/sub notification via Redis which will cause all listening nodes to reload gracefully.
Host manager can now be set to manage Tyk or not, this means host manager can be deployed alongside NGinX without managing Tyk, and Tyk nodes reloading on their own using redis pub/sub
Rate limiter now uses a rolling window; this makes gaming the limiter by straddling the TTL harder
Published by lonelycode over 9 years ago
Major speed and reliability optimisations.
Published by lonelycode over 9 years ago
Update: Dashboard 0.9.5.3
Update: Dashboard 0.9.5.2
UPDATE: Dashboard 0.9.5.1
rpc_storage_handler.go file (see the dispatcher).
New oauth_refresh_token_expire setting in configuration allows for customisation of refresh token expiry in OAuth flows.
Use --import-swagger=petstore.json to import a swagger definition; this will create a whitelisted API.
Update tyk.conf to include the global check rate and target data:
"monitor": {
"enable_trigger_monitors": false,
"configuration": {
"method": "POST",
"target_path": "http://posttestserver.com/post.php?dir=tyk-monitor-test",
"template_path": "templates/monitor_template.json",
"header_map": {"x-tyk-monitor-secret": "12345"},
"event_timeout": 10
},
"global_trigger_limit": 80.0,
"monitor_user_keys": false,
"monitor_org_keys": true
}
SessionObject has been updated to include a "monitor" section which lets you define custom limits to trigger a quota event. Add this to your key objects:
"monitor": {
"trigger_limits": [80.0, 60.0, 50.0]
}
{
"event": "TriggerExceeded",
"message": "Quota trigger reached",
"org": "53ac07777cbb8c2d53000002",
"key": "53ac07777cbb8c2d53000002c74f43ddd714489c73ea5c3fc83a6b1e",
"trigger_limit": "80"
}
Response transforms must be added to the transform_response list, and the transformer must be registered under the new response_transforms list, otherwise it will not be activated:
{
name: "response_body_transform",
options: {}
}
Response processors must be registered under response_processors, otherwise they are not loaded; specifying options under the extended paths section alone is not enough to enable them:
{
name: "header_injector",
options: {
"add_headers": {"name": "value"},
"remove_headers": ["name"]
}
}
Header transforms are set in the extended_paths.transform_response_headers field.
SupressDefaultOrgStore: uses a default Redis connection to handle unfound org lookups; this is merely patching a potential hole.
Sentry support is configured in tyk.conf:
...
"use_sentry": true,
"sentry_code": "https://your-dsn-string",
...
Added an enforce_org_data_age config parameter that allows for setting the expireAt in seconds for analytics data at an organisation level. (Requires the addition of a data_expires field in the Session object that is larger than 0.)
Event notification targets (webhook and email) can now be set per event type:
api_event: {
webhook: "http://posttestserver.com/post.php?dir=tyk-events",
email: "[email protected]"
},
key_event: {
webhook: "http://posttestserver.com/post.php?dir=tyk-key-events",
email: "[email protected]"
},
key_request_event: {
webhook: "http://posttestserver.com/post.php?dir=tyk-key-events",
email: "[email protected]"
}
Published by lonelycode over 9 years ago
OAuth server was having problems with client creation and extraction during authentication flow, this hotfix addresses this issue.
This release only has the binaries for the main supported Linux architectures. They can be applied as a drop-in replacement for the binary on your system.
Published by lonelycode over 9 years ago
Added caching middleware
Added optimisation settings for out-of-thread session updates and redis idle pool connections
Added cache option to cache safe requests, means individual paths need not be defined, but all GET, OPTIONS and HEAD requests will be cached
Added request transformation middleware; thus far only tested with JSON input. Add a Go template to the extended path config like so:
"transform": [
{
"path": "/",
"template_data": {
"template_mode": "file",
"template_source": "./templates/transform_test.tmpl"
}
}
]
Added header transformation middleware. Simple implementation, but it will delete and add headers before the request goes outbound:
"transform_headers": [
{
"delete_headers": ["Content-Type", "authorization"],
"add_headers": {"x-tyk-test-inject": "new-value"},
"path": "/post"
}
]
Clock skew for HMAC requests is now configurable
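As a sketch, the skew is set per key; assuming the session field is hmac_allowed_clock_skew (treat the field name and units as an assumption, not a confirmed API):

```
// key/session object (field name assumed)
{
    "hmac_enabled": true,
    "hmac_allowed_clock_skew": 1000
}
```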
Event handlers now also receive an encoded version of the inbound request as a base64-encoded string.
License requirements removed
Published by lonelycode over 9 years ago
tykcommon: data expiry headers will be added to all analytics records. Set expire_analytics_after to 0 to have data live indefinitely (currently 100 years), or to anything above zero for data in MongoDB to be removed after that many seconds. Requirement: you must create an expiry TTL index on the tyk_analytics collection manually (http://docs.mongodb.org/manual/tutorial/expire-data/). If you do not wish Mongo to manage data expiry at all, simply do not create the index.
Added an eh_dynamic_handler event handler type that runs JS event handlers.
Key quotas are reset on update unless a ?suppress_reset=1 parameter accompanies the REST request. This way a key can be updated and have the quota in Redis reset to max, OR it can be edited without affecting the quota.
Added a ?reset_quota=1 parameter check to the /tyk/orgs/key endpoint so that quotas can be reset for organisation-wide locks.
Published by lonelycode almost 10 years ago
Tyk Dashboard 0.9.1 - Minor update, now supporting monthly licenses.
Published by lonelycode almost 10 years ago
Key features in this version are API mocking support, Blueprint importing and several health-check and end-user quota updates. Error and debug output has also been cleaned up for clutter free logging.
Added an ignored_ips flag in the config file (e.g. for health checks).
GET /tyk/health with an api_id param and the X-Tyk-Authorization header will return upstream latency average, requests per second, throttles per second, quota violations per second and key failure events per second. Can be easily extended to add more data.
/{api-id}/tyk/rate-limits with an authorised header will return the rate limit for the current user without affecting them. Fixes issue #27.
With a mocked API, GET /widget/1234 will work and POST /widget/1234 will not.
./tyk --import-blueprint=blueprint.json --create-api --org-id=<id> --upstream-target="http://widgets.com/api/"
./tyk --import-blueprint=blueprint.json --for-api=<api_id> --as-version="2.0"
Mocks are enabled with the --as-mock parameter.
Published by lonelycode almost 10 years ago
It is recommended to test the new version of the Dashboard against your existing database installation to ensure that there are no schema conflicts. There shouldn't be any; Tyk Dashboard v0.8 supports the full API Definition schema of Tyk v1.0+. It is recommended to back up your database before updating.