Unverified Commit c7a3aa7e authored by Ben Firshman

Add new Sphinx documentation

Initial work thanks to @aanand.
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
parent 1984f687
@@ -10,7 +10,7 @@ dist
html/*
# Compiled Documentation
site/
_build/
README.rst
env/
......
FROM python:2.7
FROM python:3.5
RUN mkdir /home/docker-py
WORKDIR /home/docker-py
COPY requirements.txt /home/docker-py/requirements.txt
RUN pip install -r requirements.txt
COPY docs-requirements.txt /home/docker-py/docs-requirements.txt
RUN pip install -r docs-requirements.txt
......
@@ -74,7 +74,7 @@ flake8: build
.PHONY: docs
docs: build-docs
docker run -v `pwd`/docs:/home/docker-py/docs/ -p 8000:8000 docker-py-docs mkdocs serve -a 0.0.0.0:8000
docker run --rm -it -v `pwd`:/home/docker-py docker-py-docs sphinx-build docs ./_build
.PHONY: shell
shell: build
......
dl.hide-signature > dt {
display: none;
}
{% extends "!page.html" %}
{% set css_files = css_files + ["_static/custom.css"] %}
Low-level API
=============
The main object-oriented API is built on top of :py:class:`APIClient`. Each method on :py:class:`APIClient` maps one-to-one with a REST API endpoint and returns the response from the API.
It's possible to use :py:class:`APIClient` directly. Some basic operations (e.g. running a container) consist of several API calls and are complex to perform with the low-level API, but it's useful if you need extra flexibility and power.
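For example, a minimal sketch of running a container through the low-level client (assuming a local daemon on the default socket; at this level a single "run" takes several calls):

.. code-block:: python

   import docker

   # Low-level client; each method maps to one Remote API endpoint
   api = docker.APIClient(base_url='unix://var/run/docker.sock')

   # Create, start and read logs in separate calls
   container = api.create_container('alpine', 'echo hello world')
   api.start(container['Id'])
   print(api.logs(container['Id']))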
.. py:module:: docker.api
.. autoclass:: docker.api.client.APIClient
Containers
----------
.. py:module:: docker.api.container
.. rst-class:: hide-signature
.. autoclass:: ContainerApiMixin
:members:
:undoc-members:
Images
------
.. py:module:: docker.api.image
.. rst-class:: hide-signature
.. autoclass:: ImageApiMixin
:members:
:undoc-members:
Building images
---------------
.. py:module:: docker.api.build
.. rst-class:: hide-signature
.. autoclass:: BuildApiMixin
:members:
:undoc-members:
Networks
--------
.. rst-class:: hide-signature
.. autoclass:: docker.api.network.NetworkApiMixin
:members:
:undoc-members:
Utilities
~~~~~~~~~
These functions are available under ``docker.utils`` to create arguments
for :py:meth:`create_network`:
.. autofunction:: docker.utils.create_ipam_config
.. autofunction:: docker.utils.create_ipam_pool
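For example, a sketch of building an IPAM configuration for a user-defined network (assuming ``client`` is an :py:class:`APIClient` instance; the subnet and gateway values are illustrative):

.. code-block:: python

   ipam_pool = docker.utils.create_ipam_pool(
       subnet='192.168.52.0/24',
       gateway='192.168.52.254'
   )
   ipam_config = docker.utils.create_ipam_config(pool_configs=[ipam_pool])
   client.create_network('network1', driver='bridge', ipam=ipam_config)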
Volumes
-------
.. py:module:: docker.api.volume
.. rst-class:: hide-signature
.. autoclass:: VolumeApiMixin
:members:
:undoc-members:
Executing commands in containers
--------------------------------
.. py:module:: docker.api.exec_api
.. rst-class:: hide-signature
.. autoclass:: ExecApiMixin
:members:
:undoc-members:
Swarms
------
.. py:module:: docker.api.swarm
.. rst-class:: hide-signature
.. autoclass:: SwarmApiMixin
:members:
:undoc-members:
Services
--------
.. py:module:: docker.api.service
.. rst-class:: hide-signature
.. autoclass:: ServiceApiMixin
:members:
:undoc-members:
The Docker daemon
-----------------
.. py:module:: docker.api.daemon
.. rst-class:: hide-signature
.. autoclass:: DaemonApiMixin
:members:
:undoc-members:
Client
======
.. py:module:: docker.client
Creating a client
-----------------
To communicate with the Docker daemon, you first need to instantiate a client. The easiest way to do that is by calling the function :py:func:`~docker.client.from_env`. It can also be configured manually by instantiating a :py:class:`~docker.client.Client` class.
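For example, to connect using the default socket or your environment's configuration:

.. code-block:: python

   import docker
   client = docker.from_env()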
.. autofunction:: from_env()
Client reference
----------------
.. autoclass:: Client()
.. autoattribute:: containers
.. autoattribute:: images
.. autoattribute:: networks
.. autoattribute:: nodes
.. autoattribute:: services
.. autoattribute:: swarm
.. autoattribute:: volumes
.. automethod:: events()
.. automethod:: info()
.. automethod:: login()
.. automethod:: ping()
.. automethod:: version()
Containers
==========
.. py:module:: docker.models.containers
Run and manage containers on the server.
Methods available on ``client.containers``:
.. rst-class:: hide-signature
.. autoclass:: ContainerCollection
.. automethod:: run(image, command=None, **kwargs)
.. automethod:: create(image, command=None, **kwargs)
.. automethod:: get(id_or_name)
.. automethod:: list(**kwargs)
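A short sketch of the typical workflow (the image name is illustrative):

.. code-block:: python

   # Run a container and capture its output
   output = client.containers.run('alpine', 'echo hello world')

   # Or run it in the background and manage it afterwards
   container = client.containers.run('alpine', 'sleep 300', detach=True)
   container.stop()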
Container objects
-----------------
.. autoclass:: Container()
.. autoattribute:: id
.. autoattribute:: short_id
.. autoattribute:: name
.. autoattribute:: status
.. py:attribute:: attrs
The raw representation of this object from the server.
.. automethod:: attach
.. automethod:: attach_socket
.. automethod:: commit
.. automethod:: diff
.. automethod:: exec_run
.. automethod:: export
.. automethod:: get_archive
.. automethod:: kill
.. automethod:: logs
.. automethod:: pause
.. automethod:: put_archive
.. automethod:: remove
.. automethod:: rename
.. automethod:: resize
.. automethod:: restart
.. automethod:: start
.. automethod:: stats
.. automethod:: stop
.. automethod:: top
.. automethod:: unpause
.. automethod:: update
# Access to devices on the host
If you need to directly expose some host devices to a container, you can use
the devices parameter in the `host_config` param in `Client.create_container`
as shown below:
```python
cli.create_container(
'busybox', 'true', host_config=cli.create_host_config(devices=[
'/dev/sda:/dev/xvda:rwm'
])
)
```
Each string is a single mapping using the following format:
`<path_on_host>:<path_in_container>:<cgroup_permissions>`
The above example allows the container to have read-write access to
the host's `/dev/sda` via a node named `/dev/xvda` inside the container.
As a more verbose alternative, each host device definition can be specified as
a dictionary with the following keys:
```python
{
'PathOnHost': '/dev/sda1',
'PathInContainer': '/dev/xvda',
'CgroupPermissions': 'rwm'
}
```
# HostConfig object
The Docker Remote API introduced [support for HostConfig in version 1.15](http://docs.docker.com/reference/api/docker_remote_api_v1.15/#create-a-container).
This object contains all the parameters you could previously pass to `Client.start`.
*It is highly recommended that users pass the HostConfig in the `host_config` param of `Client.create_container` instead of `Client.start`.*
## HostConfig helper
### Client.create_host_config
Creates a HostConfig dictionary to be used with `Client.create_container`.
`binds` allows you to bind a directory on the host to the container. See [Using
volumes](volumes.md) for more information.
`port_bindings` exposes container ports to the host.
See [Port bindings](port-bindings.md) for more information.
`lxc_conf` allows you to pass LXC configuration options using a dictionary.
`privileged` starts the container in privileged mode.
[Links](http://docs.docker.io/en/latest/use/working_with_links_names/) can be
specified with the `links` argument. They can either be specified as a
dictionary mapping name to alias or as a list of `(name, alias)` tuples.
`dns` and `volumes_from` are only available if used with version v1.10 of the
Docker Remote API. Otherwise they are ignored.
`network_mode` is available since v1.11 and sets the network mode for the
container. Possible values are:

* `'bridge'`: creates a new network stack for the container on the Docker bridge
* `'none'`: no networking for this container
* `'container:[name|id]'`: reuses another container's network stack
* `'host'`: uses the host network stack inside the container
* any name that identifies an existing Docker network
`restart_policy` is available since v1.2.0 and sets the container's *RestartPolicy*
which defines the conditions under which a container should be restarted upon exit.
If no *RestartPolicy* is defined, the container will not be restarted when it exits.
The *RestartPolicy* is specified as a dict. For example, if the container
should always be restarted:
```python
{
"MaximumRetryCount": 0,
"Name": "always"
}
```
It is possible to restart the container only on failure as well as limit the number
of restarts. For example:
```python
{
"MaximumRetryCount": 5,
"Name": "on-failure"
}
```
`cap_add` and `cap_drop` are available since v1.2.0 and can be used to add or
drop certain capabilities. The user may specify the capabilities as an array,
for example:
```python
[
"SYS_ADMIN",
"MKNOD"
]
```
**Params**
* binds: Volumes to bind. See [Using volumes](volumes.md) for more information.
* port_bindings (dict): Port bindings. See [Port bindings](port-bindings.md)
for more information.
* lxc_conf (dict): LXC config
* oom_kill_disable (bool): Whether to disable OOM killer
* oom_score_adj (int): An integer value containing the score given to the
container in order to tune OOM killer preferences
* publish_all_ports (bool): Whether to publish all ports to the host
* links (dict or list of tuples): either as a dictionary mapping name to alias
or as a list of `(name, alias)` tuples
* privileged (bool): Give extended privileges to this container
* dns (list): Set custom DNS servers
* dns_search (list): DNS search domains
* volumes_from (str or list): List of container names or IDs to get volumes
from. Optionally a single string joining container IDs with commas
* network_mode (str): One of `['bridge', 'none', 'container:<name|id>', 'host']`
* restart_policy (dict): "Name" param must be one of
`['on-failure', 'always']`
* cap_add (list of str): Add kernel capabilities
* cap_drop (list of str): Drop kernel capabilities
* extra_hosts (dict): custom host-to-IP mappings (host:ip)
* read_only (bool): mount the container's root filesystem as read only
* pid_mode (str): if set to "host", use the host PID namespace inside the
container
* ipc_mode (str): Set the IPC mode for the container
* security_opt (list): A list of string values to customize labels for MLS
systems, such as SELinux.
* ulimits (list): A list of dicts or `docker.utils.Ulimit` objects, specifying
ulimits to be set in the container.
* log_config (`docker.utils.LogConfig` or dict): Logging configuration for the
container
* mem_limit (str or int): Maximum amount of memory the container is allowed to
consume (e.g. `'1G'`)
* memswap_limit (str or int): Maximum amount of memory + swap the container is
allowed to consume.
* mem_swappiness (int): Tune a container's memory swappiness behavior.
Accepts a number between 0 and 100.
* shm_size (str or int): Size of /dev/shm. (e.g. `'1G'`)
* cpu_period (int): The length of a CPU period in microseconds.
* cpu_quota (int): Microseconds of CPU time that the container can get in a
CPU period.
* cpu_shares (int): CPU shares (relative weight)
* cpuset_cpus (str): CPUs in which to allow execution (0-3, 0,1)
* blkio_weight: Block IO weight (relative weight), accepts a weight value
between 10 and 1000.
* blkio_weight_device: Block IO weight (relative device weight) in the form of:
`[{"Path": "device_path", "Weight": weight}]`
* device_read_bps: Limit read rate (bytes per second) from a device in the
form of: `[{"Path": "device_path", "Rate": rate}]`
* device_write_bps: Limit write rate (bytes per second) from a device.
* device_read_iops: Limit read rate (IO per second) from a device.
* device_write_iops: Limit write rate (IO per second) from a device.
* group_add (list): List of additional group names and/or IDs that the
container process will run as.
* devices (list): Host device bindings. See [host devices](host-devices.md)
for more information.
* tmpfs: Temporary filesystems to mount. See [Using tmpfs](tmpfs.md) for more
information.
* sysctls (dict): Kernel parameters to set in the container.
* userns_mode (str): Sets the user namespace mode for the container when user
namespace remapping option is enabled. Supported values are: `host`
* pids_limit (int): Tune a container’s pids limit. Set -1 for unlimited.
* isolation (str): Isolation technology to use. Default: `None`.
**Returns** (dict) HostConfig dictionary
```python
>>> from docker import Client
>>> cli = Client()
>>> cli.create_host_config(privileged=True, cap_drop=['MKNOD'], volumes_from=['nostalgic_newton'])
{'CapDrop': ['MKNOD'], 'LxcConf': None, 'Privileged': True, 'VolumesFrom': ['nostalgic_newton'], 'PublishAllPorts': False}
```
Images
======
.. py:module:: docker.models.images
Manage images on the server.
Methods available on ``client.images``:
.. rst-class:: hide-signature
.. py:class:: ImageCollection
.. automethod:: build
.. automethod:: get
.. automethod:: list(**kwargs)
.. automethod:: load
.. automethod:: pull
.. automethod:: push
.. automethod:: remove
.. automethod:: search
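For example, a minimal sketch of pulling an image and listing local images (the image name is illustrative):

.. code-block:: python

   image = client.images.pull('alpine')
   print(image.id)
   for img in client.images.list():
       print(img.tags)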
Image objects
-------------
.. autoclass:: Image()
.. autoattribute:: id
.. autoattribute:: short_id
.. autoattribute:: tags
.. py:attribute:: attrs
The raw representation of this object from the server.
.. automethod:: history
.. automethod:: reload
.. automethod:: save
.. automethod:: tag
# docker-py documentation
An API client for Docker, written in Python
## Installation
Our latest stable version is always available on PyPI.
pip install docker-py
## Documentation
Full documentation is available in the `/docs/` directory.
## License
docker-py is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.
Docker SDK for Python
=====================
A Python library for the Docker Remote API. It lets you do anything the ``docker`` command does, but from within Python apps – run containers, manage them, manage Swarms, etc.
For more information about the Remote API, `see its documentation <https://docs.docker.com/engine/reference/api/docker_remote_api/>`_.
Installation
------------
The latest stable version `is available on PyPI <https://pypi.python.org/pypi/docker/>`_. Either add ``docker`` to your ``requirements.txt`` file or install with pip::
pip install docker
Getting started
---------------
To talk to a Docker daemon, you first need to instantiate a client. You can use :py:func:`~docker.client.from_env` to connect using the default socket or the configuration in your environment:
.. code-block:: python
import docker
client = docker.from_env()
You can now run containers:
.. code-block:: python
>>> client.containers.run("ubuntu", "echo hello world")
'hello world\n'
You can run containers in the background:
.. code-block:: python
>>> client.containers.run("bfirsh/reticulate-splines", detach=True)
<Container '45e6d2de7c54'>
You can manage containers:
.. code-block:: python
>>> client.containers.list()
[<Container '45e6d2de7c54'>, <Container 'db18e4f20eaa'>, ...]
>>> container = client.containers.get('45e6d2de7c54')
>>> container.attrs['Config']['Image']
"bfirsh/reticulate-splines"
>>> container.logs()
"Reticulating spline 1...\n"
>>> container.stop()
You can stream logs:
.. code-block:: python
>>> for line in container.logs(stream=True):
...     print(line.strip())
Reticulating spline 2...
Reticulating spline 3...
...
You can manage images:
.. code-block:: python
>>> client.images.pull('nginx')
<Image 'nginx'>
>>> client.images.list()
[<Image 'ubuntu'>, <Image 'nginx'>, ...]
That's just a taster of what you can do with the Docker SDK for Python. For more, :doc:`take a look at the reference <client>`.
.. toctree::
:hidden:
:maxdepth: 2
Home <index>
client
containers
images
networks
nodes
services
swarm
volumes
api
tls
change-log
# Using with Docker Toolbox and Machine
In development, Docker recommends using
[Docker Toolbox](https://www.docker.com/products/docker-toolbox) to set up
Docker. It includes a tool called Machine which will create a VM running
Docker Engine and point your shell at it using environment variables.
To configure docker-py with these environment variables, first use Machine to set them up:
```bash
$ eval "$(docker-machine env)"
```
You can then use docker-py like this:
```python
import docker
client = docker.from_env(assert_hostname=False)
print(client.version())
```
**Note:** This snippet disables TLS hostname checking with
`assert_hostname=False`. Machine provides us with the exact certificate
the server is using, so this is safe. If you are not using Machine and are
verifying the host against a certificate authority, you'll want to enable
hostname verification.
# Using Networks
## Network creation
With the release of Docker 1.9 you can now manage custom networks.
Here is how to create a network named `network1` using
the `bridge` driver:
```python
docker_client.create_network("network1", driver="bridge")
```
You can also create more advanced networks with custom IPAM configurations.
For example, setting the subnet to `192.168.52.0/24` and the gateway address
to `192.168.52.254`:
```python
ipam_pool = docker.utils.create_ipam_pool(
subnet='192.168.52.0/24',
gateway='192.168.52.254'
)
ipam_config = docker.utils.create_ipam_config(
pool_configs=[ipam_pool]
)
docker_client.create_network("network1", driver="bridge", ipam=ipam_config)
```
By default, when you connect a container to an overlay network, Docker also
connects a bridge network to it to provide external connectivity. If you want
to create an externally isolated overlay network, you can create an internal
network (available since Docker 1.10).
```python
docker_client.create_network("network1", driver="bridge", internal=True)
```
## Container network configuration
In order to specify which network a container will be connected to, and
additional configuration, use the `networking_config` parameter in
`Client.create_container`. Note that at the time of creation, you can
only connect a container to a single network. Later on, you may create more
connections using `Client.connect_container_to_network`.
```python
networking_config = docker_client.create_networking_config({
'network1': docker_client.create_endpoint_config(
ipv4_address='172.28.0.124',
aliases=['foo', 'bar'],
links=['container2']
)
})
ctnr = docker_client.create_container(
img, command, networking_config=networking_config
)
```
## Network API documentation
### Client.create_networking_config
Create a networking config dictionary to be used as the `networking_config`
parameter in `Client.create_container`.
**Params**:
* endpoints_config (dict): A dictionary of `network_name -> endpoint_config`
relationships. Values should be endpoint config dictionaries created by
`Client.create_endpoint_config`. Defaults to `None` (default config).
**Returns** A networking config dictionary.
```python
docker_client.create_network('network1')
networking_config = docker_client.create_networking_config({
'network1': docker_client.create_endpoint_config()
})
container = docker_client.create_container(
img, command, networking_config=networking_config
)
```
### Client.create_endpoint_config
Create an endpoint config dictionary to be used with
`Client.create_networking_config`.
**Params**:
* aliases (list): A list of aliases for this endpoint. Names in that list can
be used within the network to reach the container. Defaults to `None`.
* links (list): A list of links for this endpoint. Containers declared in this
list will be [linked](https://docs.docker.com/engine/userguide/networking/work-with-networks/#linking-containers-in-user-defined-networks)
to this container. Defaults to `None`.
* ipv4_address (str): The IP address of this container on the network,
using the IPv4 protocol. Defaults to `None`.
* ipv6_address (str): The IP address of this container on the network,
using the IPv6 protocol. Defaults to `None`.
* link_local_ips (list): A list of link-local (IPv4/IPv6) addresses.
**Returns** An endpoint config dictionary.
```python
endpoint_config = docker_client.create_endpoint_config(
aliases=['web', 'app'],
links=['app_db'],
ipv4_address='132.65.0.123'
)
docker_client.create_network('network1')
networking_config = docker_client.create_networking_config({
'network1': endpoint_config
})
container = docker_client.create_container(
img, command, networking_config=networking_config
)
```
### docker.utils.create_ipam_config
Create an IPAM (IP Address Management) config dictionary to be used with
`Client.create_network`.
**Params**:
* driver (str): The IPAM driver to use. Defaults to `'default'`.
* pool_configs (list): A list of pool configuration dictionaries as created
by `docker.utils.create_ipam_pool`. Defaults to empty list.
**Returns** An IPAM config dictionary
```python
ipam_config = docker.utils.create_ipam_config(driver='default')
network = docker_client.create_network('network1', ipam=ipam_config)
```
### docker.utils.create_ipam_pool
Create an IPAM pool config dictionary to be added to the `pool_configs` param
in `docker.utils.create_ipam_config`.
**Params**:
* subnet (str): Custom subnet for this IPAM pool using the CIDR notation.
Defaults to `None`.
* iprange (str): Custom IP range for endpoints in this IPAM pool using the
CIDR notation. Defaults to `None`.
* gateway (str): Custom IP address for the pool's gateway.
* aux_addresses (dict): A dictionary of `key -> ip_address` relationships
specifying auxiliary addresses that need to be allocated by the
IPAM driver.
**Returns** An IPAM pool config dictionary
```python
ipam_pool = docker.utils.create_ipam_pool(
subnet='124.42.0.0/16',
iprange='124.42.0.0/24',
gateway='124.42.0.254',
aux_addresses={
'reserved1': '124.42.1.1'
}
)
ipam_config = docker.utils.create_ipam_config(pool_configs=[ipam_pool])
network = docker_client.create_network('network1', ipam=ipam_config)
```
Networks
========
.. py:module:: docker.models.networks
Create and manage networks on the server. For more information about networks, `see the Engine documentation <https://docs.docker.com/engine/userguide/networking/>`_.
Methods available on ``client.networks``:
.. rst-class:: hide-signature
.. py:class:: NetworkCollection
.. automethod:: create
.. automethod:: get
.. automethod:: list
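For example, a minimal sketch of creating a user-defined network and connecting a container to it (names are illustrative):

.. code-block:: python

   network = client.networks.create('network1', driver='bridge')
   container = client.containers.run('alpine', 'sleep 300', detach=True)
   network.connect(container)
   network.disconnect(container)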
Network objects
-----------------
.. autoclass:: Network()
.. autoattribute:: id
.. autoattribute:: short_id
.. autoattribute:: name
.. autoattribute:: containers
.. py:attribute:: attrs
The raw representation of this object from the server.
.. automethod:: connect
.. automethod:: disconnect
.. automethod:: reload
.. automethod:: remove
Nodes
=====
.. py:module:: docker.models.nodes
Get and list nodes in a swarm. Before you can use these methods, you first need to :doc:`join or initialize a swarm <swarm>`.
Methods available on ``client.nodes``:
.. rst-class:: hide-signature
.. py:class:: NodeCollection
.. automethod:: get(id_or_name)
.. automethod:: list(**kwargs)
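For example, a sketch of listing the nodes and their roles (this assumes the Engine is already part of a swarm):

.. code-block:: python

   for node in client.nodes.list():
       print(node.id, node.attrs['Spec']['Role'])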
Node objects
------------
.. autoclass:: Node()
.. autoattribute:: id
.. autoattribute:: short_id
.. py:attribute:: attrs
The raw representation of this object from the server.
.. autoattribute:: version
.. automethod:: reload
.. automethod:: update
# Port bindings
Port binding is done in two parts. First, provide a list of ports to
open inside the container to the `Client().create_container()` method.
Then, declare the bindings in the `host_config` parameter.
```python
container_id = cli.create_container(
'busybox', 'ls', ports=[1111, 2222],
host_config=cli.create_host_config(port_bindings={
1111: 4567,
2222: None
})
)
```
You can limit the host address on which the port will be exposed, like this:
```python
cli.create_host_config(port_bindings={1111: ('127.0.0.1', 4567)})
```
Or without host port assignment:
```python
cli.create_host_config(port_bindings={1111: ('127.0.0.1',)})
```
If you wish to use UDP instead of TCP (default), you need to declare ports
as such in both the config and host config:
```python
container_id = cli.create_container(
'busybox', 'ls', ports=[(1111, 'udp'), 2222],
host_config=cli.create_host_config(port_bindings={
'1111/udp': 4567, 2222: None
})
)
```
To bind multiple host ports to a single container port, use the following syntax:
```python
cli.create_host_config(port_bindings={
1111: [1234, 4567]
})
```
You can also bind multiple IPs to a single container port:
```python
cli.create_host_config(port_bindings={
1111: [
('192.168.0.100', 1234),
('192.168.0.101', 1234)
]
})
```
Services
========
.. py:module:: docker.models.services
Manage services on a swarm. For more information about services, `see the Engine documentation <https://docs.docker.com/engine/swarm/services/>`_.
Before you can use any of these methods, you first need to :doc:`join or initialize a swarm <swarm>`.
Methods available on ``client.services``:
.. rst-class:: hide-signature
.. py:class:: ServiceCollection
.. automethod:: create
.. automethod:: get
.. automethod:: list
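For example, a minimal sketch of creating a service and listing existing services (the image and name are illustrative):

.. code-block:: python

   client.services.create('nginx', name='web')
   for service in client.services.list():
       print(service.name)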
Service objects
---------------
.. autoclass:: Service()
.. autoattribute:: id
.. autoattribute:: short_id
.. autoattribute:: name
.. autoattribute:: version
.. py:attribute:: attrs
The raw representation of this object from the server.
.. automethod:: reload
.. automethod:: remove
.. automethod:: tasks
.. automethod:: update
# Swarm management
Starting with Engine version 1.12 (API 1.24), it is possible to manage the
engine's associated Swarm cluster using the API.
## Initializing a new Swarm
You can initialize a new Swarm by calling `Client.init_swarm`. An advertising
address needs to be provided, usually simply by indicating which network
interface needs to be used. Advanced options are provided using the
`swarm_spec` parameter, which can easily be created using
`Client.create_swarm_spec`.
```python
spec = client.create_swarm_spec(
snapshot_interval=5000, log_entries_for_slow_followers=1200
)
client.init_swarm(
advertise_addr='eth0', listen_addr='0.0.0.0:5000', force_new_cluster=False,
swarm_spec=spec
)
```
## Joining an existing Swarm
If you're looking to have the engine your client is connected to join an
existing Swarm, this can be accomplished by using the `Client.join_swarm`
method. You will need to provide a list of at least one remote address
corresponding to other machines already part of the swarm as well as the
`join_token`. In most cases, a `listen_addr` and `advertise_addr` for your
node are also required.
```python
client.join_swarm(
remote_addrs=['192.168.14.221:2377'], join_token='SWMTKN-1-redacted',
listen_addr='0.0.0.0:5000', advertise_addr='eth0:5000'
)
```
## Leaving the Swarm
To leave the swarm you are currently a member of, simply use
`Client.leave_swarm`. Note that if your engine is the Swarm's manager,
you will need to specify `force=True` to be able to leave.
```python
client.leave_swarm(force=False)
```
## Retrieving Swarm status
You can retrieve information about your current Swarm status by calling
`Client.inspect_swarm`. This method takes no arguments.
```python
client.inspect_swarm()
```
## Listing Swarm nodes
List all nodes that are part of the current Swarm using `Client.nodes`.
The `filters` argument allows you to filter the results.
```python
client.nodes(filters={'role': 'manager'})
```
## Swarm API documentation
### Client.init_swarm
Initialize a new Swarm using the current connected engine as the first node.
**Params:**
* advertise_addr (string): Externally reachable address advertised to other
nodes. This can either be an address/port combination in the form
`192.168.1.1:4567`, or an interface followed by a port number, like
`eth0:4567`. If the port number is omitted, the port number from the listen
address is used. If `advertise_addr` is not specified, it will be
automatically detected when possible. Default: None
* listen_addr (string): Listen address used for inter-manager communication,
as well as determining the networking interface used for the VXLAN Tunnel
Endpoint (VTEP). This can either be an address/port combination in the form
`192.168.1.1:4567`, or an interface followed by a port number, like
`eth0:4567`. If the port number is omitted, the default swarm listening port
is used. Default: '0.0.0.0:2377'
* force_new_cluster (bool): Force creating a new Swarm, even if already part of
one. Default: False
* swarm_spec (dict): Configuration settings of the new Swarm. Use
`Client.create_swarm_spec` to generate a valid configuration. Default: None
**Returns:** `True` if the request went through. Raises an `APIError` if it
fails.
#### Client.create_swarm_spec
Create a `docker.types.SwarmSpec` instance that can be used as the `swarm_spec`
argument in `Client.init_swarm`.
**Params:**
* task_history_retention_limit (int): Maximum number of task history entries stored.
* snapshot_interval (int): Number of log entries between snapshots.
* keep_old_snapshots (int): Number of snapshots to keep beyond the current
snapshot.
* log_entries_for_slow_followers (int): Number of log entries to keep around
to sync up slow followers after a snapshot is created.
* heartbeat_tick (int): Number of ticks (in seconds) between each heartbeat.
* election_tick (int): Number of ticks (in seconds) needed without a leader to
trigger a new election.
* dispatcher_heartbeat_period (int): The delay for an agent to send a
heartbeat to the dispatcher.
* node_cert_expiry (int): Automatic expiry for node certificates.
* external_ca (dict): Configuration for forwarding signing requests to an
external certificate authority. Use `docker.types.SwarmExternalCA`.
* name (string): Swarm's name
**Returns:** `docker.types.SwarmSpec` instance.
#### docker.types.SwarmExternalCA
Create a configuration dictionary for the `external_ca` argument in a
`SwarmSpec`.
**Params:**
* protocol (string): Protocol for communication with the external CA (currently
only “cfssl” is supported).
* url (string): URL where certificate signing requests should be sent.
* options (dict): An object with key/value pairs that are interpreted as
protocol-specific options for the external CA driver.
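For example, a hedged sketch of wiring an external CA into a swarm spec (the URL is illustrative, and this assumes `docker.types.SwarmExternalCA` accepts the keyword arguments listed above):

```python
# Assumes a low-level `client`; the URL below is a placeholder
external_ca = docker.types.SwarmExternalCA(
    url='https://ca.example.com/api/v1/cfssl/sign',
    protocol='cfssl'
)
spec = client.create_swarm_spec(external_ca=external_ca)
client.init_swarm(advertise_addr='eth0', swarm_spec=spec)
```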
### Client.inspect_node
Retrieve low-level information about a Swarm node.
**Params:**
* node_id (string): ID of the node to be inspected.
**Returns:** A dictionary containing data about this node. See sample below.
```python
{u'CreatedAt': u'2016-08-11T23:28:39.695834296Z',
u'Description': {u'Engine': {u'EngineVersion': u'1.12.0',
u'Plugins': [{u'Name': u'bridge', u'Type': u'Network'},
{u'Name': u'host', u'Type': u'Network'},
{u'Name': u'null', u'Type': u'Network'},
{u'Name': u'overlay', u'Type': u'Network'},
{u'Name': u'local', u'Type': u'Volume'}]},
u'Hostname': u'dockerserv-1.local.net',
u'Platform': {u'Architecture': u'x86_64', u'OS': u'linux'},
u'Resources': {u'MemoryBytes': 8052109312, u'NanoCPUs': 4000000000}},
u'ID': u'1kqami616p23dz4hd7km35w63',
u'ManagerStatus': {u'Addr': u'10.0.131.127:2377',
u'Leader': True,
u'Reachability': u'reachable'},
u'Spec': {u'Availability': u'active', u'Role': u'manager'},
u'Status': {u'State': u'ready'},
u'UpdatedAt': u'2016-08-11T23:28:39.979829529Z',
u'Version': {u'Index': 9}}
```
### Client.inspect_swarm
Retrieve information about the current Swarm.
**Returns:** A dictionary containing information about the Swarm. See sample
below.
```python
{u'CreatedAt': u'2016-08-04T21:26:18.779800579Z',
u'ID': u'8hk6e9wh4iq214qtbgvbp84a9',
u'JoinTokens': {u'Manager': u'SWMTKN-1-redacted-1',
u'Worker': u'SWMTKN-1-redacted-2'},
u'Spec': {u'CAConfig': {u'NodeCertExpiry': 7776000000000000},
u'Dispatcher': {u'HeartbeatPeriod': 5000000000},
u'Name': u'default',
u'Orchestration': {u'TaskHistoryRetentionLimit': 10},
u'Raft': {u'ElectionTick': 3,
u'HeartbeatTick': 1,
u'LogEntriesForSlowFollowers': 500,
u'SnapshotInterval': 10000},
u'TaskDefaults': {}},
u'UpdatedAt': u'2016-08-04T21:26:19.391623265Z',
u'Version': {u'Index': 11}}
```
### Client.join_swarm
Join an existing Swarm.
**Params:**
* remote_addrs (list): Addresses of one or more manager nodes already
participating in the Swarm to join.
* join_token (string): Secret token for joining this Swarm.
* listen_addr (string): Listen address used for inter-manager communication
if the node gets promoted to manager, as well as determining the networking
interface used for the VXLAN Tunnel Endpoint (VTEP). Default: `None`
* advertise_addr (string): Externally reachable address advertised to other
nodes. This can either be an address/port combination in the form
`192.168.1.1:4567`, or an interface followed by a port number, like
`eth0:4567`. If the port number is omitted, the port number from the listen
address is used. If AdvertiseAddr is not specified, it will be automatically
detected when possible. Default: `None`
**Returns:** `True` if the request went through. Raises an `APIError` if it
fails.
### Client.leave_swarm
Leave a Swarm.
**Params:**
* force (bool): Leave the Swarm even if this node is a manager.
Default: `False`
**Returns:** `True` if the request went through. Raises an `APIError` if it
fails.
### Client.nodes
List Swarm nodes.
**Params:**
* filters (dict): Filters to process on the nodes list. Valid filters:
`id`, `name`, `membership` and `role`. Default: `None`
**Returns:** A list of dictionaries containing data about each swarm node.
### Client.update_node
Update a node's configuration.
**Params:**
* version (int): The version number of the node object being updated. This
is required to avoid conflicting writes.
* node_spec (dict): Configuration settings to update. Any values not provided
will be removed. See the official [Docker API documentation](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/update-a-node) for more details.
Default: `None`.
**Returns:** `True` if the request went through. Raises an `APIError` if it
fails.
```python
node_spec = {'Availability': 'active',
'Name': 'node-name',
'Role': 'manager',
'Labels': {'foo': 'bar'}
}
client.update_node(node_id='24ifsmvkjbyhk', version=8, node_spec=node_spec)
```
### Client.update_swarm
Update the Swarm's configuration.
**Params:**
* version (int): The version number of the swarm object being updated. This
is required to avoid conflicting writes.
* swarm_spec (dict): Configuration settings to update. Use
`Client.create_swarm_spec` to generate a valid configuration.
Default: `None`.
* rotate_worker_token (bool): Rotate the worker join token. Default: `False`.
* rotate_manager_token (bool): Rotate the manager join token. Default: `False`.
**Returns:** `True` if the request went through. Raises an `APIError` if it
fails.
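For example, a sketch that rotates the worker join token while keeping the current spec, reusing the version index returned by `Client.inspect_swarm`:

```python
# Fetch the current swarm state so we can pass a matching version and spec
swarm_info = client.inspect_swarm()
client.update_swarm(
    version=swarm_info['Version']['Index'],
    swarm_spec=swarm_info['Spec'],
    rotate_worker_token=True
)
```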
Swarm
=====
.. py:module:: docker.models.swarm
Manage `Docker Engine's swarm mode <https://docs.docker.com/engine/swarm/>`_.
To use any swarm methods, you first need to make the Engine part of a swarm. This can be done by either initializing a new swarm with :py:meth:`~Swarm.init`, or joining an existing swarm with :py:meth:`~Swarm.join`.
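For example, to turn the Engine your client is connected to into a single-node swarm (the interface name is illustrative):

.. code-block:: python

   client.swarm.init(advertise_addr='eth0')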
These methods are available on ``client.swarm``:
.. rst-class:: hide-signature
.. py:class:: Swarm
.. automethod:: init()
.. automethod:: join()
.. automethod:: leave()
.. automethod:: update()
.. automethod:: reload()
.. autoattribute:: version
.. py:attribute:: attrs
The raw representation of this object from the server.
## Connection to daemon using HTTPS
**Note:** *These instructions are docker-py specific. Please refer to
[http://docs.docker.com/articles/https/](http://docs.docker.com/articles/https/)
first.*
## TLSConfig
**Params**:
* client_cert (tuple of str): Path to client cert, path to client key
* ca_cert (str): Path to CA cert file
* verify (bool or str): This can be `False` or a path to a CA Cert file
* ssl_version (int): A valid [SSL version](
https://docs.python.org/3.5/library/ssl.html#ssl.PROTOCOL_TLSv1)
* assert_hostname (bool): Verify hostname of docker daemon
### configure_client
**Params**:
* client ([Client](api.md#client-api)): A client to apply this config to
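A minimal sketch (paths are illustrative) of applying a config to an already constructed client with `configure_client`:

```python
tls_config = docker.tls.TLSConfig(ca_cert='/path/to/ca.pem')
client = docker.Client(base_url='<https_url>')
tls_config.configure_client(client)
```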
## Authenticate server based on public/default CA pool
```python
client = docker.Client(base_url='<https_url>', tls=True)
```
Equivalent CLI options:
```bash
docker --tls ...
```
If you want to use TLS but don't want to verify the server certificate
(for example when testing with a self-signed certificate):
```python
tls_config = docker.tls.TLSConfig(verify=False)
client = docker.Client(base_url='<https_url>', tls=tls_config)
```
## Authenticate server based on given CA
```python
tls_config = docker.tls.TLSConfig(ca_cert='/path/to/ca.pem')
client = docker.Client(base_url='<https_url>', tls=tls_config)
```
Equivalent CLI options:
```bash
docker --tlsverify --tlscacert /path/to/ca.pem ...
```
## Authenticate with client certificate, do not authenticate server based on given CA
```python
tls_config = docker.tls.TLSConfig(
client_cert=('/path/to/client-cert.pem', '/path/to/client-key.pem')
)
client = docker.Client(base_url='<https_url>', tls=tls_config)
```
Equivalent CLI options:
```bash
docker --tls --tlscert /path/to/client-cert.pem --tlskey /path/to/client-key.pem ...
```
## Authenticate with client certificate, authenticate server based on given CA
```python
tls_config = docker.tls.TLSConfig(
client_cert=('/path/to/client-cert.pem', '/path/to/client-key.pem'),
verify='/path/to/ca.pem'
)
client = docker.Client(base_url='<https_url>', tls=tls_config)
```
Equivalent CLI options:
```bash
docker --tlsverify \
--tlscert /path/to/client-cert.pem \
--tlskey /path/to/client-key.pem \
--tlscacert /path/to/ca.pem ...
```
Using TLS
=========
.. py:module:: docker.tls
Both the main :py:class:`~docker.client.Client` and low-level
:py:class:`~docker.api.client.APIClient` can connect to the Docker daemon with TLS.
This is all configured automatically for you if you're using :py:func:`~docker.client.from_env`, but if you need some extra control it is possible to configure it manually by using a :py:class:`TLSConfig` object.
Examples
--------
For example, to check the server against a specific CA certificate:
.. code-block:: python
tls_config = docker.tls.TLSConfig(ca_cert='/path/to/ca.pem')
client = docker.Client(base_url='<https_url>', tls=tls_config)
This is the equivalent of ``docker --tlsverify --tlscacert /path/to/ca.pem ...``.
To authenticate with client certs:
.. code-block:: python
tls_config = docker.tls.TLSConfig(
client_cert=('/path/to/client-cert.pem', '/path/to/client-key.pem')
)
client = docker.Client(base_url='<https_url>', tls=tls_config)
This is the equivalent of ``docker --tls --tlscert /path/to/client-cert.pem --tlskey /path/to/client-key.pem ...``.
Reference
---------
.. autoclass:: TLSConfig()
# Using tmpfs
When creating a container, you can specify paths to be mounted with tmpfs using
the `tmpfs` argument to `create_host_config`, similarly to the `--tmpfs`
argument to `docker run`.
This capability is supported in Docker Engine 1.10 and up.
`tmpfs` can be either a list or a dictionary. If it's a list, each item is a
string specifying the path and (optionally) any configuration for the mount:
```python
client.create_container(
'busybox', 'ls',
host_config=client.create_host_config(tmpfs=[
'/mnt/vol2',
'/mnt/vol1:size=3G,uid=1000'
])
)
```
Alternatively, if it's a dictionary, each key is a path and each value contains
the mount options:
```python
client.create_container(
'busybox', 'ls',
host_config=client.create_host_config(tmpfs={
'/mnt/vol2': '',
'/mnt/vol1': 'size=3G,uid=1000'
})
)
```
# Using volumes
Volume declaration is done in two parts. Provide a list of mountpoints to
the `Client().create_container()` method, and declare mappings in the
`host_config` section.
```python
container_id = cli.create_container(
'busybox', 'ls', volumes=['/mnt/vol1', '/mnt/vol2'],
host_config=cli.create_host_config(binds={
'/home/user1/': {
'bind': '/mnt/vol2',
'mode': 'rw',
},
'/var/www': {
'bind': '/mnt/vol1',
'mode': 'ro',
}
})
)
```
You can alternatively specify binds as a list. This code is equivalent to the
example above:
```python
container_id = cli.create_container(
'busybox', 'ls', volumes=['/mnt/vol1', '/mnt/vol2'],
host_config=cli.create_host_config(binds=[
'/home/user1/:/mnt/vol2',
'/var/www:/mnt/vol1:ro',
])
)
```
Volumes
=======
.. py:module:: docker.models.volumes
Manage volumes on the server.
Methods available on ``client.volumes``:
.. rst-class:: hide-signature
.. py:class:: VolumeCollection
.. automethod:: create
.. automethod:: get
.. automethod:: list
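For example, a minimal sketch of creating a named volume and looking it up again (the name is illustrative):

.. code-block:: python

   volume = client.volumes.create(name='myvol')
   volume = client.volumes.get('myvol')
   print(volume.name)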
Volume objects
--------------
.. autoclass:: Volume()
.. autoattribute:: id
.. autoattribute:: short_id
.. autoattribute:: name
.. py:attribute:: attrs
The raw representation of this object from the server.
.. automethod:: reload
.. automethod:: remove
site_name: docker-py Documentation
site_description: An API client for Docker written in Python
site_favicon: favicon_whale.png
site_url: https://docker-py.readthedocs.io
repo_url: https://github.com/docker/docker-py/
theme: readthedocs
pages:
- Home: index.md
- Client API: api.md
- Port Bindings: port-bindings.md
- Using Volumes: volumes.md
- Using TLS: tls.md
- Host devices: host-devices.md
- Host configuration: hostconfig.md
- Network configuration: networks.md
- Swarm management: swarm.md
- Swarm services: services.md
- Using tmpfs: tmpfs.md
- Using with Docker Machine: machine.md
- Change Log: change_log.md
- Contributing: contributing.md