- `tyk.conf`: `secret`, `node_secret`
- `tyk_analytics.conf`: `admin_secret`, `shared_node_secret`, `tyk_api_config.secret`

The Gateway (GW) `secret` and the Dashboard (DB) `tyk_api_config.secret` must match, and the GW `node_secret` and the DB `shared_node_secret` must match.
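A sketch of how the secrets in the two files relate (the angle-bracket values are placeholders for your own strong, random secrets; the exact JSON nesting follows your existing config files):

```
# tyk.conf (Gateway)
"secret":      "<api-secret>"
"node_secret": "<node-secret>"

# tyk_analytics.conf (Dashboard)
"admin_secret":          "<admin-secret>"
"shared_node_secret":    "<node-secret>"   # must match the Gateway node_secret
"tyk_api_config.secret": "<api-secret>"    # must match the Gateway secret
```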
- `control_api_hostname` - Set the hostname to which you want to bind the REST API.
- `control_api_port` - This allows you to run the Gateway Control API on a separate port, and protect it behind a firewall if needed.

If you change these values, you need to update the equivalent fields, `tyk_api_config.Host` and `tyk_api_config.Port`, in the Dashboard configuration file `tyk_analytics.conf`.
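For example, a sketch of how the paired settings line up (the hostname and port are illustrative; the exact JSON nesting follows your existing config files):

```
# tyk.conf (Gateway)
"control_api_hostname": "tyk-control.internal"
"control_api_port":     8081

# tyk_analytics.conf (Dashboard)
"tyk_api_config.Host": "http://tyk-control.internal"
"tyk_api_config.Port": "8081"
```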
For example, Tyk could unintentionally match the `/health` endpoint when a request is made to `/customer/{customer_id}/account/health` …
Unless you want to make use of Tyk's flexible listen path and endpoint path matching modes, and understand the need to configure patterns carefully, you should enable `TYK_GW_HTTPSERVEROPTIONS_ENABLESTRICTROUTES`, `TYK_GW_HTTPSERVEROPTIONS_ENABLEPATHPREFIXMATCHING` and `TYK_GW_HTTPSERVEROPTIONS_ENABLEPATHSUFFIXMATCHING`.
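For example, if you configure the Gateway through environment variables, enabling all three looks like this (these are boolean options, so setting each to `true` enables it):

```
TYK_GW_HTTPSERVEROPTIONS_ENABLESTRICTROUTES=true
TYK_GW_HTTPSERVEROPTIONS_ENABLEPATHPREFIXMATCHING=true
TYK_GW_HTTPSERVEROPTIONS_ENABLEPATHSUFFIXMATCHING=true
```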
Connection pooling is configured in `tyk.conf` via the `max_idle_connections_per_host` option. In versions of Tyk before v2.7 this option was capped at 100; from v2.7 you can set it to any value.
`max_idle_connections_per_host` limits the number of keep-alive connections that Tyk holds open to each upstream host. If you set this value too low, Tyk will not re-use connections and will have to open a lot of new connections to your upstream. If you set it too high, you may encounter issues when slow clients occupy connections for a long time, and you may reach OS limits.
You can calculate the right value using a straightforward formula:
If the latency between Tyk and your upstream is around 50ms, then a single connection can handle 1s / 50ms = 20 requests. So if you plan to handle 2000 requests per second using Tyk, the size of your connection pool should be at least 2000 / 20 = 100. For example, in low-latency environments (around 5ms), a connection pool of 100 connections will be enough for roughly 20,000 RPS.
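The same calculation, written as a general rule:

```
requests per connection ≈ 1 s / upstream latency
pool size               ≥ target RPS / requests per connection
                        = target RPS × upstream latency (in seconds)

e.g. 2000 RPS × 0.05 s  = 100 connections
```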
Increase the maximum number of file handles available by adding `fs.file-max=160000` to `/etc/sysctl.conf`. A limit of 160,000 file handles will consume a maximum of 160MB of RAM. The change will apply after a system reboot, but if you do not wish to reboot quite yet, you can apply it for the current session using `echo 160000 > /proc/sys/fs/file-max`.
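Putting those steps together as shell commands (a sketch; run as root, and note that the final `cat` is just a verification step added here):

```
# persist the new limit across reboots
echo "fs.file-max=160000" >> /etc/sysctl.conf

# apply it immediately for the current session
echo 160000 > /proc/sys/fs/file-max

# confirm the limit now in effect
cat /proc/sys/fs/file-max
```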
To raise the per-process limit for the Tyk services, you can edit the `systemd` unit files for each of them using `systemctl edit {service_name}`:

```
systemctl edit tyk-gateway.service
systemctl edit tyk-dashboard.service
systemctl edit tyk-pump.service
systemctl edit tyk-sink.service
```
Add `LimitNOFILE=80000` to the `[Service]` directive as follows:
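```
[Service]
LimitNOFILE=80000
```

Repeat this for each of the Tyk services listed above.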
If you are running the Tyk components in Docker, you can set the limit with the `--ulimit` option. See the Docker documentation for details on setting ulimits.
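For example (a sketch; the image name and limit value are illustrative, so adjust them to your deployment):

```
docker run --ulimit nofile=80000:80000 tykio/tyk-gateway:latest
```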