Commit c7a3aa7e authored by Ben Firshman

Add new Sphinx documentation

Initial work thanks to @aanand.
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
Parent 1984f687
@@ -10,7 +10,7 @@ dist
html/*
# Compiled Documentation
site/
_build/
README.rst
env/
......
FROM python:2.7
FROM python:3.5
RUN mkdir /home/docker-py
WORKDIR /home/docker-py
COPY requirements.txt /home/docker-py/requirements.txt
RUN pip install -r requirements.txt
COPY docs-requirements.txt /home/docker-py/docs-requirements.txt
RUN pip install -r docs-requirements.txt
......
@@ -74,7 +74,7 @@ flake8: build
.PHONY: docs
docs: build-docs
docker run -v `pwd`/docs:/home/docker-py/docs/ -p 8000:8000 docker-py-docs mkdocs serve -a 0.0.0.0:8000
docker run --rm -it -v `pwd`:/home/docker-py docker-py-docs sphinx-build docs ./_build
.PHONY: shell
shell: build
......
dl.hide-signature > dt {
display: none;
}
{% extends "!page.html" %}
{% set css_files = css_files + ["_static/custom.css"] %}
# Client API
To instantiate a `Client` class that will allow you to communicate with a
Docker daemon, simply do:
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
```
**Params**:
* base_url (str): Refers to the protocol+hostname+port where the Docker server
is hosted.
* version (str): The version of the API the client will use. Specify `'auto'`
to use the API version provided by the server.
* timeout (int): The HTTP request timeout, in seconds.
* tls (bool or [TLSConfig](tls.md#TLSConfig)): Equivalent CLI options: `docker --tls ...`
* user_agent (str): Set a custom user agent for requests to the server.
****
## attach
The `.logs()` function is a wrapper around this method, which you can use
instead if you want to fetch/stream container output without first retrieving
the entire backlog.
**Params**:
* container (str): The container to attach to
* stdout (bool): Get STDOUT
* stderr (bool): Get STDERR
* stream (bool): Return an iterator
* logs (bool): Get all previous output
**Returns** (generator or str): The logs or output for the container
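For example, a minimal sketch (assuming a local daemon on the default socket; `'my_container'` stands in for an existing, running container):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> # stream output as it is produced, including the existing backlog
>>> for chunk in cli.attach('my_container', stdout=True, stderr=True,
...                         stream=True, logs=True):
...     print(chunk)
```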
## build
Similar to the `docker build` command. Either `path` or `fileobj` needs to be
set. `path` can be a local path (to a directory containing a Dockerfile) or a
remote URL. `fileobj` must be a readable file-like object to a Dockerfile.
If you have a tar file for the Docker build context (including a Dockerfile)
already, pass a readable file-like object to `fileobj` and also pass
`custom_context=True`. If the stream is also compressed, set `encoding` to the
correct value (e.g. `gzip`).
**Params**:
* path (str): Path to the directory containing the Dockerfile
* tag (str): A tag to add to the final image
* quiet (bool): Whether to return the status
* fileobj: A file object (or file-like object) to use as the Dockerfile
* nocache (bool): Don't use the cache when set to `True`
* rm (bool): Remove intermediate containers. The `docker build` command now
defaults to ``--rm=true``, but we have kept the old default of `False`
to preserve backward compatibility
* stream (bool): *Deprecated for API version > 1.8 (always True)*.
Return a blocking generator you can iterate over to retrieve build output as
it happens
* timeout (int): HTTP timeout
* custom_context (bool): Optional if using `fileobj`
* encoding (str): The encoding for a stream. Set to `gzip` for compressing
* pull (bool): Downloads any updates to the FROM image in Dockerfiles
* forcerm (bool): Always remove intermediate containers, even after unsuccessful builds
* dockerfile (str): path within the build context to the Dockerfile
* buildargs (dict): A dictionary of build arguments
* container_limits (dict): A dictionary of limits applied to each container
created by the build process. Valid keys:
- memory (int): set memory limit for build
- memswap (int): Total memory (memory + swap), -1 to disable swap
- cpushares (int): CPU shares (relative weight)
- cpusetcpus (str): CPUs in which to allow execution, e.g., `"0-3"`, `"0,1"`
* decode (bool): If set to `True`, the returned stream will be decoded into
dicts on the fly. Default `False`.
* shmsize (int): Size of /dev/shm in bytes. The size must be greater
than 0. If omitted the system uses 64MB.
* labels (dict): A dictionary of labels to set on the image
**Returns** (generator): A generator for the build output
```python
>>> from io import BytesIO
>>> from docker import Client
>>> dockerfile = '''
... # Shared Volume
... FROM busybox:buildroot-2014.02
... MAINTAINER first last, first.last@yourdomain.com
... VOLUME /data
... CMD ["/bin/sh"]
... '''
>>> f = BytesIO(dockerfile.encode('utf-8'))
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> response = [line for line in cli.build(
... fileobj=f, rm=True, tag='yourname/volume'
... )]
>>> response
['{"stream":" ---\\u003e a9eb17255234\\n"}',
'{"stream":"Step 1 : MAINTAINER first last, first.last@yourdomain.com\\n"}',
'{"stream":" ---\\u003e Running in 08787d0ee8b1\\n"}',
'{"stream":" ---\\u003e 23e5e66a4494\\n"}',
'{"stream":"Removing intermediate container 08787d0ee8b1\\n"}',
'{"stream":"Step 2 : VOLUME /data\\n"}',
'{"stream":" ---\\u003e Running in abdc1e6896c6\\n"}',
'{"stream":" ---\\u003e 713bca62012e\\n"}',
'{"stream":"Removing intermediate container abdc1e6896c6\\n"}',
'{"stream":"Step 3 : CMD [\\"/bin/sh\\"]\\n"}',
'{"stream":" ---\\u003e Running in dba30f2a1a7e\\n"}',
'{"stream":" ---\\u003e 032b8b2855fc\\n"}',
'{"stream":"Removing intermediate container dba30f2a1a7e\\n"}',
'{"stream":"Successfully built 032b8b2855fc\\n"}']
```
**Raises:** [TypeError](
https://docs.python.org/3.5/library/exceptions.html#TypeError) if neither `path`
nor `fileobj` is specified
## commit
Identical to the `docker commit` command.
**Params**:
* container (str): The container to commit (id or name)
* repository (str): The repository for the committed image
* tag (str): The tag to apply to the committed image
* message (str): A commit message
* author (str): The name of the author
* changes (str): Dockerfile instructions to apply while committing
* conf (dict): The configuration for the container. See the [Docker remote api](
https://docs.docker.com/reference/api/docker_remote_api/) for full details.
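For example, a minimal sketch (the container name, repository, and author are placeholders):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> # 'my_container' is an existing container to snapshot
>>> image = cli.commit('my_container', repository='yourname/snapshot', tag='v1',
...                    message='snapshot after setup', author='Jane Doe')
```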
## containers
List containers. Identical to the `docker ps` command.
**Params**:
* quiet (bool): Only display numeric Ids
* all (bool): Show all containers. Only running containers are shown by default
* trunc (bool): Truncate output
* latest (bool): Show only the latest created container, include non-running
ones.
* since (str): Show only containers created since Id or Name, include
non-running ones
* before (str): Show only container created before Id or Name, include
non-running ones
* limit (int): Show `limit` last created containers, include non-running ones
* size (bool): Display sizes
* filters (dict): Filters to be processed on the container list. Available filters:
- `exited` (int): Only containers with specified exit code
- `status` (str): One of `restarting`, `running`, `paused`, `exited`
- `label` (str): format either `"key"` or `"key=value"`
- `id` (str): The id of the container.
- `name` (str): The name of the container.
- `ancestor` (str): Filter by container ancestor. Format of `<image-name>[:tag]`, `<image-id>`, or `<image@digest>`.
- `before` (str): Only containers created before a particular container. Give the container name or id.
- `since` (str): Only containers created after a particular container. Give container name or id.
A comprehensive list can be found [here](https://docs.docker.com/engine/reference/commandline/ps/)
**Returns** (list of dicts): The system's containers
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> cli.containers()
[{'Command': '/bin/sleep 30',
'Created': 1412574844,
'Id': '6e276c9e6e5759e12a6a9214efec6439f80b4f37618e1a6547f28a3da34db07a',
'Image': 'busybox:buildroot-2014.02',
'Names': ['/grave_mayer'],
'Ports': [],
'Status': 'Up 1 seconds'}]
```
## connect_container_to_network
Connect a container to a network.
**Params**:
* container (str): container-id/name to be connected to the network
* net_id (str): network id
* aliases (list): A list of aliases for this endpoint. Names in that list can
be used within the network to reach the container. Defaults to `None`.
* links (list): A list of links for this endpoint. Containers declared in this
list will be [linked](https://docs.docker.com/engine/userguide/networking/work-with-networks/#linking-containers-in-user-defined-networks)
to this container. Defaults to `None`.
* ipv4_address (str): The IP address of this container on the network,
using the IPv4 protocol. Defaults to `None`.
* ipv6_address (str): The IP address of this container on the network,
using the IPv6 protocol. Defaults to `None`.
* link_local_ips (list): A list of link-local (IPv4/IPv6) addresses.
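For example, a minimal sketch (assuming a local daemon on the default socket; `'my_container'` and `'my_net'` are placeholders for an existing container and network):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> # make the container reachable inside the network under the alias 'webapp'
>>> cli.connect_container_to_network('my_container', 'my_net', aliases=['webapp'])
```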
## copy
Identical to the `docker cp` command. Get files/folders from the container.
**Deprecated for API version >= 1.20** &ndash; Consider using
[`get_archive`](#get_archive) **instead.**
**Params**:
* container (str): The container to copy from
* resource (str): The path within the container
**Returns** (str): The contents of the file as a string
## create_container
Creates a container that can then be started with `.start()`. Parameters are
similar to those for the `docker run` command except it doesn't support the
attach options (`-a`).
See [Port bindings](port-bindings.md) and [Using volumes](volumes.md) for more
information on how to create port bindings and volume mappings.
The `mem_limit` variable accepts float values (which represent the memory limit
of the created container in bytes) or a string with a units identification char
('100000b', '1000k', '128m', '1g'). If a string is specified without a units
character, bytes are assumed as an intended unit.
`volumes_from` and `dns` arguments raise [TypeError](
https://docs.python.org/3.5/library/exceptions.html#TypeError) exception if
they are used against v1.10 and above of the Docker remote API. Those
arguments should be passed as part of the `host_config` dictionary.
**Params**:
* image (str): The image to run
* command (str or list): The command to be run in the container
* hostname (str): Optional hostname for the container
* user (str or int): Username or UID
* detach (bool): Detached mode: run container in the background and print new
container Id
* stdin_open (bool): Keep STDIN open even if not attached
* tty (bool): Allocate a pseudo-TTY
* mem_limit (float or str): Memory limit (format: [number][optional unit],
where unit = b, k, m, or g)
* ports (list of ints): A list of port numbers
* environment (dict or list): A dictionary or a list of strings in the
following format `["PASSWORD=xxx"]` or `{"PASSWORD": "xxx"}`.
* dns (list): DNS name servers
* dns_opt (list): Additional options to be added to the container's `resolv.conf` file
* volumes (str or list): Paths inside the container to declare as volume mount points
* volumes_from (str or list): List of container names or Ids to get volumes
from. Optionally a single string joining container id's with commas
* network_disabled (bool): Disable networking
* name (str): A name for the container
* entrypoint (str or list): An entrypoint
* working_dir (str): Path to the working directory
* domainname (str or list): Set custom DNS search domains
* memswap_limit (int): Total memory (memory + swap), -1 to disable swap
* host_config (dict): A [HostConfig](hostconfig.md) dictionary
* mac_address (str): The Mac Address to assign the container
* labels (dict or list): A dictionary of name-value labels (e.g. `{"label1": "value1", "label2": "value2"}`) or a list of names of labels to set with empty values (e.g. `["label1", "label2"]`)
* volume_driver (str): The name of a volume driver/plugin.
* stop_signal (str): The stop signal to use to stop the container (e.g. `SIGINT`).
* networking_config (dict): A [NetworkingConfig](networks.md) dictionary
**Returns** (dict): A dictionary with a container 'Id' key and a 'Warnings' key.
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> container = cli.create_container(image='busybox:latest', command='/bin/sleep 30')
>>> print(container)
{'Id': '8a61192da2b3bb2d922875585e29b74ec0dc4e0117fcbf84c962204e97564cd7',
'Warnings': None}
```
### docker.utils.parse_env_file
A utility for parsing an environment file.
The expected format of the file is as follows:
```
USERNAME=jdoe
PASSWORD=secret
```
The utility can be used as follows:
```python
>>> import docker.utils
>>> my_envs = docker.utils.parse_env_file('/path/to/file')
>>> client.create_container('myimage', 'command', environment=my_envs)
```
## create_network
Create a network, similar to the `docker network create` command. See the
[networks documentation](networks.md) for details.
**Params**:
* name (str): Name of the network
* driver (str): Name of the driver used to create the network
* options (dict): Driver options as a key-value dictionary
* ipam (dict): Optional custom IP scheme for the network
* check_duplicate (bool): Request daemon to check for networks with same name.
Default: `True`.
* internal (bool): Restrict external access to the network. Default `False`.
* labels (dict): Map of labels to set on the network. Default `None`.
* enable_ipv6 (bool): Enable IPv6 on the network. Default `False`.
**Returns** (dict): The created network reference object
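For example, a minimal sketch (assuming a local daemon on the default socket; the network name and label are placeholders):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> net = cli.create_network('my_net', driver='bridge',
...                          labels={'environment': 'test'})
```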
## create_service
Create a service, similar to the `docker service create` command. See the
[services documentation](services.md#Clientcreate_service) for details.
## create_volume
Create and register a named volume
**Params**:
* name (str): Name of the volume
* driver (str): Name of the driver used to create the volume
* driver_opts (dict): Driver options as a key-value dictionary
* labels (dict): Labels to set on the volume
**Returns** (dict): The created volume reference object
```python
>>> from docker import Client
>>> cli = Client()
>>> volume = cli.create_volume(
...     name='foobar', driver='local', driver_opts={'foo': 'bar', 'baz': 'false'},
...     labels={"key": "value"}
... )
>>> print(volume)
{
u'Mountpoint': u'/var/lib/docker/volumes/foobar/_data',
u'Driver': u'local',
u'Name': u'foobar',
u'Labels': {u'key': u'value'}
}
```
## diff
Inspect changes on a container's filesystem.
**Params**:
* container (str): The container to diff
**Returns** (list): A list of dicts describing the changed paths (`Path` and `Kind` keys)
## disconnect_container_from_network
**Params**:
* container (str): container-id/name to be disconnected from a network
* net_id (str): network id
* force (bool): Force the container to disconnect from a network.
Default: `False`
## events
Identical to the `docker events` command: get real time events from the server. The `events`
function returns a blocking generator you can iterate over to retrieve events as they happen.
**Params**:
* since (UTC datetime or int): get events from this point
* until (UTC datetime or int): get events until this point
* filters (dict): filter the events by event time, container or image
* decode (bool): If set to true, stream will be decoded into dicts on the
fly. False by default.
**Returns** (generator):
```python
{u'status': u'start',
u'from': u'image/with:tag',
u'id': u'container-id',
u'time': 1423339459}
```
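A minimal way to consume the generator (assuming a local daemon on the default socket; `'my_container'` is a placeholder for a container you want to watch):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> # blocks and prints each decoded event dict as it arrives
>>> for event in cli.events(filters={'container': 'my_container'}, decode=True):
...     print(event)
```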
## execute
This command is deprecated for docker-py >= 1.2.0 ; use `exec_create` and
`exec_start` instead.
## exec_create
Sets up an exec instance in a running container.
**Params**:
* container (str): Target container where exec instance will be created
* cmd (str or list): Command to be executed
* stdout (bool): Attach to stdout of the exec command if true. Default: True
* stderr (bool): Attach to stderr of the exec command if true. Default: True
* since (UTC datetime or int): Output logs from this timestamp. Default: `None` (all logs are given)
* tty (bool): Allocate a pseudo-TTY. Default: False
* user (str): User to execute command as. Default: root
**Returns** (dict): A dictionary with an exec 'Id' key.
## exec_inspect
Return low-level information about an exec command.
**Params**:
* exec_id (str): ID of the exec instance
**Returns** (dict): Dictionary of values returned by the endpoint.
## exec_resize
Resize the tty session used by the specified exec command.
**Params**:
* exec_id (str): ID of the exec instance
* height (int): Height of tty session
* width (int): Width of tty session
## exec_start
Start a previously set up exec instance.
**Params**:
* exec_id (str): ID of the exec instance
* detach (bool): If true, detach from the exec command. Default: False
* tty (bool): Allocate a pseudo-TTY. Default: False
* stream (bool): Stream response data. Default: False
**Returns** (generator or str): If `stream=True`, a generator yielding response
chunks. A string containing response data otherwise.
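For example, pairing `exec_create` with `exec_start` (a sketch; `'my_container'` is a placeholder for a running container):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> exec_instance = cli.exec_create('my_container', cmd='uname -a')
>>> output = cli.exec_start(exec_instance['Id'])
>>> print(output)
```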
## export
Export the contents of a filesystem as a tar archive to STDOUT.
**Params**:
* container (str): The container to export
**Returns** (str): The filesystem tar archive as a str
## get_archive
Retrieve a file or folder from a container in the form of a tar archive.
**Params**:
* container (str): The container where the file is located
* path (str): Path to the file or folder to retrieve
**Returns** (tuple): First element is a raw tar data stream. Second element is
a dict containing `stat` information on the specified `path`.
```python
>>> import docker
>>> cli = docker.Client()
>>> ctnr = cli.create_container('busybox', 'true')
>>> strm, stat = cli.get_archive(ctnr, '/bin/sh')
>>> print(stat)
{u'linkTarget': u'', u'mode': 493, u'mtime': u'2015-09-16T12:34:23-07:00', u'name': u'sh', u'size': 962860}
```
## get_image
Get an image from the docker daemon. Similar to the `docker save` command.
**Params**:
* image (str): Image name to get
**Returns** (urllib3.response.HTTPResponse object): The response from the docker daemon
An example of how to get (save) an image to a file.
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> image = cli.get_image('fedora:latest')
>>> image_tar = open('/tmp/fedora-latest.tar', 'wb')
>>> image_tar.write(image.data)
>>> image_tar.close()
```
## history
Show the history of an image.
**Params**:
* image (str): The image to show history for
**Returns** (list of dicts): The history of the image
## images
List images. Identical to the `docker images` command.
**Params**:
* name (str): Only show images belonging to the repository `name`
* quiet (bool): Only show numeric Ids. Returns a list
* all (bool): Show all images (by default filter out the intermediate image
layers)
* filters (dict): Filters to be processed on the image list. Available filters:
- `dangling` (bool)
- `label` (str): format either `"key"` or `"key=value"`
**Returns** (list): A list of image ids if `quiet=True`, otherwise a list of image dicts, as shown below.
```python
[{'Created': 1401926735,
'Id': 'a9eb172552348a9a49180694790b33a1097f546456d041b6e82e4d7716ddb721',
'ParentId': '120e218dd395ec314e7b6249f39d2853911b3d6def6ea164ae05722649f34b16',
'RepoTags': ['busybox:buildroot-2014.02', 'busybox:latest'],
'Size': 0,
'VirtualSize': 2433303},
...
```
## import_image
Similar to the `docker import` command.
If `src` is a string or unicode string, it will first be treated as a path to
a tarball on the local system. If there is an error reading from that file,
src will be treated as a URL instead to fetch the image from. You can also pass
an open file handle as 'src', in which case the data will be read from that
file.
If `src` is unset but `image` is set, the `image` parameter will be taken as
the name of an existing image to import from.
**Params**:
* src (str or file): Path to tarfile, URL, or file-like object
* repository (str): The repository to create
* tag (str): The tag to apply
* image (str): Use another image like the `FROM` Dockerfile parameter
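For example, a minimal sketch (the tarball path and repository name are placeholders):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> cli.import_image(src='/tmp/rootfs.tar', repository='yourname/imported', tag='latest')
```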
## import_image_from_data
Like `.import_image()`, but allows importing in-memory bytes data.
**Params**:
* data (bytes collection): Bytes collection containing valid tar data
* repository (str): The repository to create
* tag (str): The tag to apply
## import_image_from_file
Like `.import_image()`, but only supports importing from a tar file on
disk. If the file doesn't exist it will raise `IOError`.
**Params**:
* filename (str): Full path to a tar file.
* repository (str): The repository to create
* tag (str): The tag to apply
## import_image_from_url
Like `.import_image()`, but only supports importing from a URL.
**Params**:
* url (str): A URL pointing to a tar file.
* repository (str): The repository to create
* tag (str): The tag to apply
## import_image_from_image
Like `.import_image()`, but only supports importing from another image,
like the `FROM` Dockerfile parameter.
**Params**:
* image (str): Image name to import from
* repository (str): The repository to create
* tag (str): The tag to apply
## info
Display system-wide information. Identical to the `docker info` command.
**Returns** (dict): The info as a dict
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> cli.info()
{'Containers': 3,
'Debug': 1,
'Driver': 'aufs',
'DriverStatus': [['Root Dir', '/mnt/sda1/var/lib/docker/aufs'],
['Dirs', '225']],
'ExecutionDriver': 'native-0.2',
'IPv4Forwarding': 1,
'Images': 219,
'IndexServerAddress': 'https://index.docker.io/v1/',
'InitPath': '/usr/local/bin/docker',
'InitSha1': '',
'KernelVersion': '3.16.1-tinycore64',
'MemoryLimit': 1,
'NEventsListener': 0,
'NFd': 11,
'NGoroutines': 12,
'OperatingSystem': 'Boot2Docker 1.2.0 (TCL 5.3);',
'SwapLimit': 1}
```
## init_swarm
Initialize a new Swarm using the current connected engine as the first node.
See the [Swarm documentation](swarm.md#clientinit_swarm).
## insert
*DEPRECATED*
## inspect_container
Identical to the `docker inspect` command, but only for containers.
**Params**:
* container (str): The container to inspect
**Returns** (dict): Nearly the same output as `docker inspect`, just as a
single dict
## inspect_image
Identical to the `docker inspect` command, but only for images.
**Params**:
* image (str): The image to inspect
**Returns** (dict): Nearly the same output as `docker inspect`, just as a
single dict
## inspect_network
Retrieve network info by id.
**Params**:
* net_id (str): network id
**Returns** (dict): Network information dictionary
## inspect_node
Retrieve low-level information about a Swarm node.
See the [Swarm documentation](swarm.md#clientinspect_node).
## inspect_service
Retrieve information about a service, similar to the `docker service inspect` command. See the
[services documentation](services.md#clientinspect_service) for details.
## inspect_swarm
Retrieve information about the current Swarm.
See the [Swarm documentation](swarm.md#clientinspect_swarm).
## inspect_task
Retrieve information about a task.
**Params**:
* task (str): Task identifier
**Returns** (dict): Task information dictionary
## inspect_volume
Retrieve volume info by name.
**Params**:
* name (str): volume name
**Returns** (dict): Volume information dictionary
```python
>>> cli.inspect_volume('foobar')
{u'Mountpoint': u'/var/lib/docker/volumes/foobar/_data', u'Driver': u'local', u'Name': u'foobar'}
```
## join_swarm
Join an existing Swarm.
See the [Swarm documentation](swarm.md#clientjoin_swarm).
## kill
Kill a container or send a signal to a container.
**Params**:
* container (str): The container to kill
* signal (str or int): The signal to send. Defaults to `SIGKILL`
## leave_swarm
Leave the current Swarm.
See the [Swarm documentation](swarm.md#clientleave_swarm).
## load_image
Load an image that was previously saved using `Client.get_image`
(or `docker save`). Similar to `docker load`.
**Params**:
* data (binary): Image data to be loaded
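For example, loading the tarball written in the `get_image` example above (a sketch; the path is a placeholder):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> with open('/tmp/fedora-latest.tar', 'rb') as f:
...     cli.load_image(f.read())
```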
## login
Nearly identical to the `docker login` command, but non-interactive.
**Params**:
* username (str): The registry username
* password (str): The plaintext password
* email (str): The email for the registry account
* registry (str): URL to the registry, e.g. `https://index.docker.io/v1/`
* reauth (bool): Whether to refresh existing authentication on the Docker server.
* dockercfg_path (str): Use a custom path for the .dockercfg file
(default `$HOME/.dockercfg`)
**Returns** (dict): The response from the login request
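For example, a minimal sketch (the credentials and registry URL are placeholders):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> cli.login(username='yourname', password='secret',
...           registry='https://index.docker.io/v1/')
```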
## logs
Identical to the `docker logs` command. The `stream` parameter makes the `logs`
function return a blocking generator you can iterate over to retrieve log
output as it happens.
**Params**:
* container (str): The container to get logs from
* stdout (bool): Get STDOUT
* stderr (bool): Get STDERR
* stream (bool): Stream the response
* timestamps (bool): Show timestamps
* tail (str or int): Output specified number of lines at the end of logs: `"all"` or `number`. Default `"all"`
* since (datetime or int): Show logs since a given datetime or integer epoch (in seconds)
* follow (bool): Follow log output
**Returns** (generator or str):
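For example, a minimal sketch (assuming a local daemon on the default socket; `'my_container'` is a placeholder):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> cli.logs('my_container', tail=10)          # last 10 lines as a string
>>> for line in cli.logs('my_container', stream=True, follow=True):
...     print(line)                            # blocking generator of log lines
```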
## networks
List networks currently registered by the docker daemon. Similar to the `docker network ls` command.
**Params**
* names (list): List of names to filter by
* ids (list): List of ids to filter by
The above are combined to create a filters dict.
**Returns** (dict): List of network objects.
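For example (a sketch; `'my_net'` is a placeholder for a network name to filter by):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> cli.networks(names=['my_net'])
```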
## nodes
List Swarm nodes. See the [Swarm documentation](swarm.md#clientnodes).
## pause
Pauses all processes within a container.
**Params**:
* container (str): The container to pause
## ping
Hits the `/_ping` endpoint of the remote API and returns the result. An
exception will be raised if the endpoint isn't responding.
**Returns** (bool)
## port
Look up the public-facing port that is NAT-ed to `private_port`. Identical to
the `docker port` command.
**Params**:
* container (str): The container to look up
* private_port (int): The private port to inspect
**Returns** (list of dict): The mapping for the host ports
```bash
$ docker run -d -p 80:80 ubuntu:14.04 /bin/sleep 30
7174d6347063a83f412fad6124c99cffd25ffe1a0807eb4b7f9cec76ac8cb43b
```
```python
>>> cli.port('7174d6347063', 80)
[{'HostIp': '0.0.0.0', 'HostPort': '80'}]
```
## pull
Identical to the `docker pull` command.
**Params**:
* repository (str): The repository to pull
* tag (str): The tag to pull
* stream (bool): Stream the output as a generator
* insecure_registry (bool): Use an insecure registry
* auth_config (dict): Override the credentials that Client.login has set for this request
`auth_config` should contain the `username` and `password` keys to be valid.
**Returns** (generator or str): The output
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> for line in cli.pull('busybox', stream=True):
... print(json.dumps(json.loads(line), indent=4))
{
"status": "Pulling image (latest) from busybox",
"progressDetail": {},
"id": "e72ac664f4f0"
}
{
"status": "Pulling image (latest) from busybox, endpoint: ...",
"progressDetail": {},
"id": "e72ac664f4f0"
}
```
## push
Push an image or a repository to the registry. Identical to the `docker push`
command.
**Params**:
* repository (str): The repository to push to
* tag (str): An optional tag to push
* stream (bool): Stream the output as a blocking generator
* insecure_registry (bool): Use `http://` to connect to the registry
* auth_config (dict): Override the credentials that Client.login has set for this request
`auth_config` should contain the `username` and `password` keys to be valid.
**Returns** (generator or str): The output of the upload
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> response = [line for line in cli.push('yourname/app', stream=True)]
>>> response
['{"status":"Pushing repository yourname/app (1 tags)"}\\n',
'{"status":"Pushing","progressDetail":{},"id":"511136ea3c5a"}\\n',
'{"status":"Image already pushed, skipping","progressDetail":{},
"id":"511136ea3c5a"}\\n',
...
'{"status":"Pushing tag for rev [918af568e6e5] on {
https://cdn-registry-1.docker.io/v1/repositories/
yourname/app/tags/latest}"}\\n']
```
## put_archive
Insert a file or folder in an existing container using a tar archive as source.
**Params**:
* container (str): The container where the file(s) will be extracted
* path (str): Path inside the container where the file(s) will be extracted.
Must exist.
* data (bytes): tar data to be extracted
**Returns** (bool): True if the call succeeds. `docker.errors.APIError` will
be raised if an error occurs.
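For example, building a tar archive in memory and extracting it into a container (a sketch; `'my_container'`, `hello.txt`, and `/tmp` are placeholders):
```python
>>> import io, tarfile
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> buf = io.BytesIO()
>>> with tarfile.open(fileobj=buf, mode='w') as tar:
...     tar.add('hello.txt')   # a local file to copy into the container
>>> cli.put_archive('my_container', '/tmp', buf.getvalue())
True
```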
## remove_container
Remove a container. Similar to the `docker rm` command.
**Params**:
* container (str): The container to remove
* v (bool): Remove the volumes associated with the container
* link (bool): Remove the specified link and not the underlying container
* force (bool): Force the removal of a running container (uses SIGKILL)
## remove_image
Remove an image. Similar to the `docker rmi` command.
**Params**:
* image (str): The image to remove
* force (bool): Force removal of the image
* noprune (bool): Do not delete untagged parents
## remove_network
Remove a network. Similar to the `docker network rm` command.
**Params**:
* net_id (str): The network's id
Failure to remove will raise a `docker.errors.APIError` exception.
## remove_service
Remove a service, similar to the `docker service rm` command. See the
[services documentation](services.md#clientremove_service) for details.
## remove_volume
Remove a volume. Similar to the `docker volume rm` command.
**Params**:
* name (str): The volume's name
Failure to remove will raise a `docker.errors.APIError` exception.
## rename
Rename a container. Similar to the `docker rename` command.
**Params**:
* container (str): ID of the container to rename
* name (str): New name for the container
## resize
Resize the tty session.
**Params**:
* container (str or dict): The container to resize
* height (int): Height of tty session
* width (int): Width of tty session
## restart
Restart a container. Similar to the `docker restart` command.
If `container` is a dict, the `Id` key is used.
**Params**:
* container (str or dict): The container to restart
* timeout (int): Number of seconds to try to stop for before killing the
container. Once killed it will then be restarted. Default is 10 seconds.
## search
Identical to the `docker search` command.
**Params**:
* term (str): A term to search for
**Returns** (list of dicts): The response of the search
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> response = cli.search('nginx')
>>> response[:2]
[{'description': 'Official build of Nginx.',
'is_official': True,
'is_trusted': False,
'name': 'nginx',
'star_count': 266},
{'description': 'Trusted automated Nginx (http://nginx.org/) ...',
'is_official': False,
'is_trusted': True,
'name': 'dockerfile/nginx',
'star_count': 60},
...
```
## services
List services, similar to the `docker service ls` command. See the
[services documentation](services.md#clientservices) for details.
## start
Similar to the `docker start` command, but doesn't support attach options. Use
`.logs()` to recover `stdout`/`stderr`.
**Params**:
* container (str): The container to start
**Deprecation warning:** For API version > 1.15, it is highly recommended to
provide host config options in the
[`host_config` parameter of `create_container`](#create_container)
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> container = cli.create_container(
... image='busybox:latest',
... command='/bin/sleep 30')
>>> response = cli.start(container=container.get('Id'))
>>> print(response)
None
```
## stats
The Docker API parallel to the `docker stats` command.
This will stream statistics for a specific container.
**Params**:
* container (str): The container to stream statistics for
* decode (bool): If set to true, stream will be decoded into dicts on the
fly. False by default.
* stream (bool): If set to false, only the current stats will be returned
instead of a stream. True by default.
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> stats_obj = cli.stats('elasticsearch')
>>> for stat in stats_obj:
...     print(stat)
{"read":"2015-02-11T21:47:30.49388286+02:00","networks":{"eth0":{"rx_bytes":648,"rx_packets":8 ...
...
...
...
```
## stop
Stops a container. Similar to the `docker stop` command.
**Params**:
* container (str): The container to stop
* timeout (int): Timeout in seconds to wait for the container to stop before
sending a `SIGKILL`. Default: 10
## tag
Tag an image into a repository. Identical to the `docker tag` command.
**Params**:
* image (str): The image to tag
* repository (str): The repository to set for the tag
* tag (str): The tag name
* force (bool): Force
**Returns** (bool): True if successful
## tasks
Retrieve a list of tasks.
**Params**:
* filters (dict): A map of filters to process on the tasks list. Valid filters:
`id`, `name`, `service`, `node`, `label` and `desired-state`.
**Returns** (list): List of task dictionaries.
## top
Display the running processes of a container.
**Params**:
* container (str): The container to inspect
* ps_args (str): Optional arguments to pass to `ps` (e.g., `aux`)
**Returns** (dict): The output of `docker top`
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> cli.create_container('busybox:latest', '/bin/sleep 30', name='sleeper')
>>> cli.start('sleeper')
>>> cli.top('sleeper')
{'Processes': [['952', 'root', '/bin/sleep 30']],
'Titles': ['PID', 'USER', 'COMMAND']}
```
## unpause
Unpause all processes within a container.
**Params**:
* container (str): The container to unpause
## update_container
Update resource configs of one or more containers.
**Params**:
* container (str): The container to update
* blkio_weight (int): Block IO (relative weight), between 10 and 1000
* cpu_period (int): Limit CPU CFS (Completely Fair Scheduler) period
* cpu_quota (int): Limit CPU CFS (Completely Fair Scheduler) quota
* cpu_shares (int): CPU shares (relative weight)
* cpuset_cpus (str): CPUs in which to allow execution
* cpuset_mems (str): MEMs in which to allow execution
* mem_limit (int or str): Memory limit
* mem_reservation (int or str): Memory soft limit
* memswap_limit (int or str): Total memory (memory + swap), -1 to disable swap
* kernel_memory (int or str): Kernel memory limit
* restart_policy (dict): Restart policy dictionary
**Returns** (dict): Dictionary containing a `Warnings` key.
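For example, a minimal sketch (assuming a local daemon on the default socket; `'my_container'` and the limits are placeholders):
```python
>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')
>>> cli.update_container('my_container', mem_limit='256m', cpu_shares=512)
```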
## update_node
Update a node.
See the [Swarm documentation](swarm.md#clientupdate_node).
## update_service
Update a service, similar to the `docker service update` command. See the
[services documentation](services.md#clientupdate_service) for details.
## update_swarm
Update the current Swarm.
See the [Swarm documentation](swarm.md#clientupdate_swarm).
## version
Nearly identical to the `docker version` command.
**Returns** (dict): The server version information
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> cli.version()
{
"KernelVersion": "3.16.4-tinycore64",
"Arch": "amd64",
"ApiVersion": "1.15",
"Version": "1.3.0",
"GitCommit": "c78088f",
"Os": "linux",
"GoVersion": "go1.3.3"
}
```
## volumes
List volumes currently registered by the docker daemon. Similar to the `docker volume ls` command.
**Params**
* filters (dict): Server-side list filtering options.
**Returns** (dict): Dictionary with list of volume objects as value of the `Volumes` key.
```python
>>> cli.volumes()
{u'Volumes': [
{u'Mountpoint': u'/var/lib/docker/volumes/foobar/_data', u'Driver': u'local', u'Name': u'foobar'},
{u'Mountpoint': u'/var/lib/docker/volumes/baz/_data', u'Driver': u'local', u'Name': u'baz'}
]}
```
## wait
Identical to the `docker wait` command. Block until a container stops, then
return its exit code. Returns the value `-1` if the API responds without a
`StatusCode` attribute.
If `container` is a dict, the `Id` key is used.
If the timeout value is exceeded, a `requests.exceptions.ReadTimeout`
exception will be raised.
**Params**:
* container (str or dict): The container to wait on
* timeout (int): Request timeout
**Returns** (int): The exit code of the container
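For example, a minimal sketch following the `create_container` example above:
```python
>>> from docker import Client
>>> cli = Client(base_url='tcp://127.0.0.1:2375')
>>> container = cli.create_container('busybox:latest', 'true')
>>> cli.start(container=container.get('Id'))
>>> exit_code = cli.wait(container, timeout=30)
```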
<!---
TODO:
* load_image
-->
****
## Version mismatch
You may encounter an error like this:
```text
client is newer than server (client API version: 1.24, server API version: 1.23)
```
To fix this, you have to either supply the exact version your server supports
when instantiating the `Client`:
```python
client = docker.Client(version="1.23")
```
or let the client automatically detect the newest version the server supports:
```python
client = docker.Client(version="auto")
```
Low-level API
=============
The main object-orientated API is built on top of :py:class:`APIClient`. Each method on :py:class:`APIClient` maps one-to-one with a REST API endpoint, and returns the raw response from the API.
It's possible to use :py:class:`APIClient` directly. Some basic things (e.g. running a container) consist of several API calls and are complex to do with the low-level API, but it's useful if you need extra flexibility and power.
.. py:module:: docker.api
.. autoclass:: docker.api.client.APIClient
Containers
----------
.. py:module:: docker.api.container
.. rst-class:: hide-signature
.. autoclass:: ContainerApiMixin
:members:
:undoc-members:
.. py:module:: docker.api.image
Images
------
.. py:module:: docker.api.image
.. rst-class:: hide-signature
.. autoclass:: ImageApiMixin
:members:
:undoc-members:
Building images
---------------
.. py:module:: docker.api.build
.. rst-class:: hide-signature
.. autoclass:: BuildApiMixin
:members:
:undoc-members:
Networks
--------
.. rst-class:: hide-signature
.. autoclass:: docker.api.network.NetworkApiMixin
:members:
:undoc-members:
Utilities
~~~~~~~~~
These functions are available under ``docker.utils`` to create arguments
for :py:meth:`create_network`:
.. autofunction:: docker.utils.create_ipam_config
.. autofunction:: docker.utils.create_ipam_pool
Volumes
-------
.. py:module:: docker.api.volume
.. rst-class:: hide-signature
.. autoclass:: VolumeApiMixin
:members:
:undoc-members:
Executing commands in containers
--------------------------------
.. py:module:: docker.api.exec_api
.. rst-class:: hide-signature
.. autoclass:: ExecApiMixin
:members:
:undoc-members:
Swarms
------
.. py:module:: docker.api.swarm
.. rst-class:: hide-signature
.. autoclass:: SwarmApiMixin
:members:
:undoc-members:
Services
--------
.. py:module:: docker.api.service
.. rst-class:: hide-signature
.. autoclass:: ServiceApiMixin
:members:
:undoc-members:
The Docker daemon
-----------------
.. py:module:: docker.api.daemon
.. rst-class:: hide-signature
.. autoclass:: DaemonApiMixin
:members:
:undoc-members:
Client
======
.. py:module:: docker.client
Creating a client
-----------------
To communicate with the Docker daemon, you first need to instantiate a client. The easiest way to do that is by calling the function :py:func:`~docker.client.from_env`. It can also be configured manually by instantiating a :py:class:`~docker.client.Client` class.
.. autofunction:: from_env()
Client reference
----------------
.. autoclass:: Client()
.. autoattribute:: containers
.. autoattribute:: images
.. autoattribute:: networks
.. autoattribute:: nodes
.. autoattribute:: services
.. autoattribute:: swarm
.. autoattribute:: volumes
.. automethod:: events()
.. automethod:: info()
.. automethod:: login()
.. automethod:: ping()
.. automethod:: version()
# -*- coding: utf-8 -*-
#
# docker-sdk-python documentation build configuration file, created by
# sphinx-quickstart on Wed Sep 14 15:48:58 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import datetime
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.napoleon',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
from recommonmark.parser import CommonMarkParser
source_parsers = {
'.md': CommonMarkParser,
}
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
source_suffix = ['.rst', '.md']
# source_suffix = '.md'
# The encoding of source files.
#
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Docker SDK for Python'
year = datetime.datetime.now().year
copyright = u'%d Docker Inc' % year
author = u'Docker Inc'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u'2.0'
# The full version, including alpha/beta/rc tags.
release = u'2.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#
# today = ''
#
# Else, today_fmt is used as the format for a strftime call.
#
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
html_theme_options = {
'description': 'A Python library for the Docker Remote API',
'fixed_sidebar': True,
}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents.
# "<project> v<release> documentation" by default.
#
# html_title = u'docker-sdk-python v2.0'
# A shorter title for the navigation bar. Default is the same as html_title.
#
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#
# html_logo = None
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#
# html_extra_path = []
# If not None, a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
# The empty string is equivalent to '%b %d, %Y'.
#
# html_last_updated_fmt = None
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#
html_sidebars = {
'**': [
'about.html',
'navigation.html',
'searchbox.html',
]
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#
# html_additional_pages = {}
# If false, no module index is generated.
#
# html_domain_indices = True
# If false, no index is generated.
#
# html_use_index = True
# If true, the index is split into individual pages for each letter.
#
# html_split_index = False
# If true, links to the reST sources are added to the pages.
#
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
#
# html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# 'ja' uses this config value.
# 'zh' user can custom change `jieba` dictionary path.
#
# html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#
# html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'docker-sdk-pythondoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'docker-sdk-python.tex', u'docker-sdk-python Documentation',
u'Docker Inc.', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#
# latex_use_parts = False
# If true, show page references after internal links.
#
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
#
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
#
# latex_appendices = []
# If false, will not define \strong, \code, \titleref, \crossref ... but only
# \sphinxstrong, ..., \sphinxtitleref, ... To help avoid clash with user added
# packages.
#
# latex_keep_old_macro_names = True
# If false, no module index is generated.
#
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'docker-sdk-python', u'docker-sdk-python Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'docker-sdk-python', u'docker-sdk-python Documentation',
author, 'docker-sdk-python', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#
# texinfo_appendices = []
# If false, no module index is generated.
#
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#
# texinfo_no_detailmenu = False
# Napoleon settings
napoleon_google_docstring = True
napoleon_numpy_docstring = False
Containers
==========
.. py:module:: docker.models.containers
Run and manage containers on the server.
Methods available on ``client.containers``:
.. rst-class:: hide-signature
.. autoclass:: ContainerCollection
.. automethod:: run(image, command=None, **kwargs)
.. automethod:: create(image, command=None, **kwargs)
.. automethod:: get(id_or_name)
.. automethod:: list(**kwargs)
Container objects
-----------------
.. autoclass:: Container()
.. autoattribute:: id
.. autoattribute:: short_id
.. autoattribute:: name
.. autoattribute:: status
.. py:attribute:: attrs
The raw representation of this object from the server.
.. automethod:: attach
.. automethod:: attach_socket
.. automethod:: commit
.. automethod:: diff
.. automethod:: exec_run
.. automethod:: export
.. automethod:: get_archive
.. automethod:: kill
.. automethod:: logs
.. automethod:: pause
.. automethod:: put_archive
.. automethod:: remove
.. automethod:: rename
.. automethod:: resize
.. automethod:: restart
.. automethod:: start
.. automethod:: stats
.. automethod:: stop
.. automethod:: top
.. automethod:: unpause
.. automethod:: update
# Access to devices on the host
If you need to directly expose some host devices to a container, you can use
the devices parameter in the `host_config` param in `Client.create_container`
as shown below:
```python
cli.create_container(
'busybox', 'true', host_config=cli.create_host_config(devices=[
'/dev/sda:/dev/xvda:rwm'
])
)
```
Each string is a single mapping using the following format:
`<path_on_host>:<path_in_container>:<cgroup_permissions>`
The above example allows the container to have read-write access to
the host's `/dev/sda` via a node named `/dev/xvda` inside the container.
As a more verbose alternative, each host device definition can be specified as
a dictionary with the following keys:
```python
{
'PathOnHost': '/dev/sda1',
'PathInContainer': '/dev/xvda',
'CgroupPermissions': 'rwm'
}
```
# HostConfig object
The Docker Remote API introduced [support for HostConfig in version 1.15](http://docs.docker.com/reference/api/docker_remote_api_v1.15/#create-a-container).
This object contains all the parameters you could previously pass to `Client.start`.
*It is highly recommended that users pass the HostConfig in the `host_config`*
*param of `Client.create_container` instead of `Client.start`*
## HostConfig helper
### Client.create_host_config
Creates a HostConfig dictionary to be used with `Client.create_container`.
`binds` allows you to bind a directory on the host into the container. See [Using
volumes](volumes.md) for more information.
`port_bindings` exposes container ports to the host.
See [Port bindings](port-bindings.md) for more information.
`lxc_conf` allows you to pass LXC configuration options using a dictionary.
`privileged` starts the container in privileged mode.
[Links](http://docs.docker.io/en/latest/use/working_with_links_names/) can be
specified with the `links` argument. They can either be specified as a
dictionary mapping name to alias or as a list of `(name, alias)` tuples.
`dns` and `volumes_from` are only available if they are used with version v1.10
of docker remote API. Otherwise they are ignored.
`network_mode` is available since v1.11 and sets the Network mode for the
container ('bridge': creates a new network stack for the container on the
Docker bridge, 'none': no networking for this container, 'container:[name|id]':
reuses another container network stack, 'host': use the host network stack
inside the container or any name that identifies an existing Docker network).
`restart_policy` is available since v1.2.0 and sets the container's *RestartPolicy*
which defines the conditions under which a container should be restarted upon exit.
If no *RestartPolicy* is defined, the container will not be restarted when it exits.
The *RestartPolicy* is specified as a dict. For example, if the container
should always be restarted:
```python
{
"MaximumRetryCount": 0,
"Name": "always"
}
```
It is possible to restart the container only on failure as well as limit the number
of restarts. For example:
```python
{
"MaximumRetryCount": 5,
"Name": "on-failure"
}
```
`cap_add` and `cap_drop` are available since v1.2.0 and can be used to add or
drop certain capabilities. The user may specify the capabilities as an array
for example:
```python
[
"SYS_ADMIN",
"MKNOD"
]
```
**Params**
* binds: Volumes to bind. See [Using volumes](volumes.md) for more information.
* port_bindings (dict): Port bindings. See [Port bindings](port-bindings.md)
for more information.
* lxc_conf (dict): LXC config
* oom_kill_disable (bool): Whether to disable OOM killer
* oom_score_adj (int): An integer value containing the score given to the
container in order to tune OOM killer preferences
* publish_all_ports (bool): Whether to publish all ports to the host
* links (dict or list of tuples): either as a dictionary mapping name to alias
or as a list of `(name, alias)` tuples
* privileged (bool): Give extended privileges to this container
* dns (list): Set custom DNS servers
* dns_search (list): DNS search domains
* volumes_from (str or list): List of container names or Ids to get volumes
from. Optionally a single string joining container id's with commas
* network_mode (str): One of `['bridge', 'none', 'container:<name|id>', 'host']`
* restart_policy (dict): "Name" param must be one of
`['on-failure', 'always']`
* cap_add (list of str): Add kernel capabilities
* cap_drop (list of str): Drop kernel capabilities
* extra_hosts (dict): custom host-to-IP mappings (host:ip)
* read_only (bool): mount the container's root filesystem as read only
* pid_mode (str): if set to "host", use the host PID namespace inside the
container
* ipc_mode (str): Set the IPC mode for the container
* security_opt (list): A list of string values to customize labels for MLS
systems, such as SELinux.
* ulimits (list): A list of dicts or `docker.utils.Ulimit` objects. A list
of ulimits to be set in the container.
* log_config (`docker.utils.LogConfig` or dict): Logging configuration to
container
* mem_limit (str or int): Maximum amount of memory container is allowed to
consume. (e.g. `'1G'`)
* memswap_limit (str or int): Maximum amount of memory + swap a container is
allowed to consume.
* mem_swappiness (int): Tune a container's memory swappiness behavior.
Accepts a number between 0 and 100.
* shm_size (str or int): Size of /dev/shm. (e.g. `'1G'`)
* cpu_group (int): The length of a CPU period in microseconds.
* cpu_period (int): Microseconds of CPU time that the container can get in a
CPU period.
* cpu_shares (int): CPU shares (relative weight)
* cpuset_cpus (str): CPUs in which to allow execution (0-3, 0,1)
* blkio_weight: Block IO weight (relative weight), accepts a weight value
between 10 and 1000.
* blkio_weight_device: Block IO weight (relative device weight) in the form of:
`[{"Path": "device_path", "Weight": weight}]`
* device_read_bps: Limit read rate (bytes per second) from a device in the
form of: `[{"Path": "device_path", "Rate": rate}]`
* device_write_bps: Limit write rate (bytes per second) from a device.
* device_read_iops: Limit read rate (IO per second) from a device.
* device_write_iops: Limit write rate (IO per second) from a device.
* group_add (list): List of additional group names and/or IDs that the
container process will run as.
* devices (list): Host device bindings. See [host devices](host-devices.md)
for more information.
* tmpfs: Temporary filesystems to mount. See [Using tmpfs](tmpfs.md) for more
information.
* sysctls (dict): Kernel parameters to set in the container.
* userns_mode (str): Sets the user namespace mode for the container when user
namespace remapping option is enabled. Supported values are: `host`
* pids_limit (int): Tune a container’s pids limit. Set -1 for unlimited.
* isolation (str): Isolation technology to use. Default: `None`.
**Returns** (dict) HostConfig dictionary
```python
>>> from docker import Client
>>> cli = Client()
>>> cli.create_host_config(privileged=True, cap_drop=['MKNOD'], volumes_from=['nostalgic_newton'])
{'CapDrop': ['MKNOD'], 'LxcConf': None, 'Privileged': True, 'VolumesFrom': ['nostalgic_newton'], 'PublishAllPorts': False}
```
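As a further illustration, here is a hedged sketch that combines a few of the
parameters documented above with `create_container` (all values are
illustrative):
```python
host_config = cli.create_host_config(
    mem_limit='1G',
    restart_policy={'Name': 'on-failure', 'MaximumRetryCount': 5},
    cap_drop=['MKNOD']
)
# The resulting dictionary is passed to create_container via host_config
container = cli.create_container('busybox', 'true', host_config=host_config)
```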
Images
======
.. py:module:: docker.models.images
Manage images on the server.
Methods available on ``client.images``:
.. rst-class:: hide-signature
.. py:class:: ImageCollection
.. automethod:: build
.. automethod:: get
.. automethod:: list(**kwargs)
.. automethod:: load
.. automethod:: pull
.. automethod:: push
.. automethod:: remove
.. automethod:: search
Image objects
-------------
.. autoclass:: Image()
.. autoattribute:: id
.. autoattribute:: short_id
.. autoattribute:: tags
.. py:attribute:: attrs
The raw representation of this object from the server.
.. automethod:: history
.. automethod:: reload
.. automethod:: save
.. automethod:: tag
# docker-py documentation
An API client for Docker, written in Python.
## Installation
Our latest stable version is always available on PyPI.
pip install docker-py
## Documentation
Full documentation is available in the `/docs/` directory.
## License
docker-py is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.
Docker SDK for Python
=====================
A Python library for the Docker Remote API. It lets you do anything the ``docker`` command does, but from within Python apps – run containers, manage them, manage Swarms, etc.
For more information about the Remote API, `see its documentation <https://docs.docker.com/engine/reference/api/docker_remote_api/>`_.
Installation
------------
The latest stable version `is available on PyPI <https://pypi.python.org/pypi/docker/>`_. Either add ``docker`` to your ``requirements.txt`` file or install with pip::
pip install docker
Getting started
---------------
To talk to a Docker daemon, you first need to instantiate a client. You can use :py:func:`~docker.client.from_env` to connect using the default socket or the configuration in your environment:
.. code-block:: python
import docker
client = docker.from_env()
You can now run containers:
.. code-block:: python
>>> client.containers.run("ubuntu", "echo hello world")
'hello world\n'
You can run containers in the background:
.. code-block:: python
>>> client.containers.run("bfirsh/reticulate-splines", detach=True)
<Container '45e6d2de7c54'>
You can manage containers:
.. code-block:: python
>>> client.containers.list()
[<Container '45e6d2de7c54'>, <Container 'db18e4f20eaa'>, ...]
>>> container = client.containers.get('45e6d2de7c54')
>>> container.attrs['Config']['Image']
"bfirsh/reticulate-splines"
>>> container.logs()
"Reticulating spline 1...\n"
>>> container.stop()
You can stream logs:
.. code-block:: python
>>> for line in container.logs(stream=True):
...     print(line.strip())
Reticulating spline 2...
Reticulating spline 3...
...
You can manage images:
.. code-block:: python
>>> client.images.pull('nginx')
<Image 'nginx'>
>>> client.images.list()
[<Image 'ubuntu'>, <Image 'nginx'>, ...]
That's just a taster of what you can do with the Docker SDK for Python. For more, :doc:`take a look at the reference <client>`.
.. toctree::
:hidden:
:maxdepth: 2
Home <index>
client
containers
images
networks
nodes
services
swarm
volumes
api
tls
change-log
# Using with Docker Toolbox and Machine
In development, Docker recommends using
[Docker Toolbox](https://www.docker.com/products/docker-toolbox) to set up
Docker. It includes a tool called Machine which will create a VM running
Docker Engine and point your shell at it using environment variables.
To configure docker-py with these environment variables, first use Machine to set them up:
```bash
$ eval "$(docker-machine env)"
```
You can then use docker-py like this:
```python
import docker
client = docker.from_env(assert_hostname=False)
print(client.version())
```
**Note:** This snippet disables TLS hostname checking with
`assert_hostname=False`. Machine provides us with the exact certificate
the server is using, so this is safe. If you are not using Machine and are
verifying the host against a certificate authority, you'll want to enable
hostname verification.
# Using Networks
## Network creation
With the release of Docker 1.9, you can manage custom networks.
Here's how to create a network named `network1` using
the `bridge` driver:
```python
docker_client.create_network("network1", driver="bridge")
```
You can also create more advanced networks with custom IPAM configurations.
For example, to set the subnet to `192.168.52.0/24` and the gateway address
to `192.168.52.254`:
```python
ipam_pool = docker.utils.create_ipam_pool(
subnet='192.168.52.0/24',
gateway='192.168.52.254'
)
ipam_config = docker.utils.create_ipam_config(
pool_configs=[ipam_pool]
)
docker_client.create_network("network1", driver="bridge", ipam=ipam_config)
```
By default, when you connect a container to an overlay network, Docker also
connects it to a bridge network to provide external connectivity. If you want
to create an externally isolated overlay network, you can create an internal
network (Docker 1.10 and up).
```python
docker_client.create_network("network1", driver="bridge", internal=True)
```
## Container network configuration
To specify which network a container should be connected to, along with any
additional configuration, use the `networking_config` parameter of
`Client.create_container`. Note that at the time of creation, you can
only connect a container to a single network. Later on, you may create more
connections using `Client.connect_container_to_network`.
```python
networking_config = docker_client.create_networking_config({
'network1': docker_client.create_endpoint_config(
ipv4_address='172.28.0.124',
aliases=['foo', 'bar'],
links=['container2']
)
})
ctnr = docker_client.create_container(
img, command, networking_config=networking_config
)
```
## Network API documentation
### Client.create_networking_config
Create a networking config dictionary to be used as the `networking_config`
parameter in `Client.create_container`.
**Params**:
* endpoints_config (dict): A dictionary of `network_name -> endpoint_config`
relationships. Values should be endpoint config dictionaries created by
`Client.create_endpoint_config`. Defaults to `None` (default config).
**Returns** A networking config dictionary.
```python
docker_client.create_network('network1')
networking_config = docker_client.create_networking_config({
'network1': docker_client.create_endpoint_config()
})
container = docker_client.create_container(
img, command, networking_config=networking_config
)
```
### Client.create_endpoint_config
Create an endpoint config dictionary to be used with
`Client.create_networking_config`.
**Params**:
* aliases (list): A list of aliases for this endpoint. Names in that list can
be used within the network to reach the container. Defaults to `None`.
* links (list): A list of links for this endpoint. Containers declared in this
list will be [linked](https://docs.docker.com/engine/userguide/networking/work-with-networks/#linking-containers-in-user-defined-networks)
to this container. Defaults to `None`.
* ipv4_address (str): The IP address of this container on the network,
using the IPv4 protocol. Defaults to `None`.
* ipv6_address (str): The IP address of this container on the network,
using the IPv6 protocol. Defaults to `None`.
* link_local_ips (list): A list of link-local (IPv4/IPv6) addresses.
**Returns** An endpoint config dictionary.
```python
endpoint_config = docker_client.create_endpoint_config(
aliases=['web', 'app'],
links=['app_db'],
ipv4_address='132.65.0.123'
)
docker_client.create_network('network1')
networking_config = docker_client.create_networking_config({
'network1': endpoint_config
})
container = docker_client.create_container(
img, command, networking_config=networking_config
)
```
### docker.utils.create_ipam_config
Create an IPAM (IP Address Management) config dictionary to be used with
`Client.create_network`.
**Params**:
* driver (str): The IPAM driver to use. Defaults to `'default'`.
* pool_configs (list): A list of pool configuration dictionaries as created
  by `docker.utils.create_ipam_pool`. Defaults to an empty list.
**Returns** An IPAM config dictionary
```python
ipam_config = docker.utils.create_ipam_config(driver='default')
network = docker_client.create_network('network1', ipam=ipam_config)
```
### docker.utils.create_ipam_pool
Create an IPAM pool config dictionary to be added to the `pool_configs` param
in `docker.utils.create_ipam_config`.
**Params**:
* subnet (str): Custom subnet for this IPAM pool using the CIDR notation.
Defaults to `None`.
* iprange (str): Custom IP range for endpoints in this IPAM pool using the
CIDR notation. Defaults to `None`.
* gateway (str): Custom IP address for the pool's gateway.
* aux_addresses (dict): A dictionary of `key -> ip_address` relationships
specifying auxiliary addresses that need to be allocated by the
IPAM driver.
**Returns** An IPAM pool config dictionary
```python
ipam_pool = docker.utils.create_ipam_pool(
subnet='124.42.0.0/16',
iprange='124.42.0.0/24',
gateway='124.42.0.254',
aux_addresses={
'reserved1': '124.42.1.1'
}
)
ipam_config = docker.utils.create_ipam_config(pool_configs=[ipam_pool])
network = docker_client.create_network('network1', ipam=ipam_config)
```
Networks
========
.. py:module:: docker.models.networks
Create and manage networks on the server. For more information about networks, `see the Engine documentation <https://docs.docker.com/engine/userguide/networking/>`_.
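For example, a minimal sketch (the network name and driver are illustrative):

.. code-block:: python

   network = client.networks.create("network1", driver="bridge")
   print(network.name, network.short_id)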
Methods available on ``client.networks``:
.. rst-class:: hide-signature
.. py:class:: NetworkCollection
.. automethod:: create
.. automethod:: get
.. automethod:: list
Network objects
-----------------
.. autoclass:: Network()
.. autoattribute:: id
.. autoattribute:: short_id
.. autoattribute:: name
.. autoattribute:: containers
.. py:attribute:: attrs
The raw representation of this object from the server.
.. automethod:: connect
.. automethod:: disconnect
.. automethod:: reload
.. automethod:: remove
Nodes
=====
.. py:module:: docker.models.nodes
Get and list nodes in a swarm. Before you can use these methods, you first need to :doc:`join or initialize a swarm <swarm>`.
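Once the Engine is part of a swarm, a minimal sketch might look like this (the filter value is illustrative):

.. code-block:: python

   for node in client.nodes.list(filters={'role': 'manager'}):
       print(node.id, node.attrs['Spec']['Role'])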
Methods available on ``client.nodes``:
.. rst-class:: hide-signature
.. py:class:: NodeCollection
.. automethod:: get(id_or_name)
.. automethod:: list(**kwargs)
Node objects
------------
.. autoclass:: Node()
.. autoattribute:: id
.. autoattribute:: short_id
.. py:attribute:: attrs
The raw representation of this object from the server.
.. autoattribute:: version
.. automethod:: reload
.. automethod:: update
# Port bindings
Port binding is done in two parts. First, provide a list of ports to
open inside the container in the `Client().create_container()` method. Then,
declare the bindings in the `host_config` parameter.
```python
container_id = cli.create_container(
'busybox', 'ls', ports=[1111, 2222],
host_config=cli.create_host_config(port_bindings={
1111: 4567,
2222: None
})
)
```
You can restrict the host address on which the port will be exposed, like so:
```python
cli.create_host_config(port_bindings={1111: ('127.0.0.1', 4567)})
```
Or without host port assignment:
```python
cli.create_host_config(port_bindings={1111: ('127.0.0.1',)})
```
If you wish to use UDP instead of TCP (the default), you need to declare the
ports as UDP in both the container config and the host config:
```python
container_id = cli.create_container(
'busybox', 'ls', ports=[(1111, 'udp'), 2222],
host_config=cli.create_host_config(port_bindings={
'1111/udp': 4567, 2222: None
})
)
```
To bind multiple host ports to a single container port, use the following syntax:
```python
cli.create_host_config(port_bindings={
1111: [1234, 4567]
})
```
You can also bind multiple IPs to a single container port:
```python
cli.create_host_config(port_bindings={
1111: [
('192.168.0.100', 1234),
('192.168.0.101', 1234)
]
})
```
Services
========
.. py:module:: docker.models.services
Manage services on a swarm. For more information about services, `see the Engine documentation <https://docs.docker.com/engine/swarm/services/>`_.
Before you can use any of these methods, you first need to :doc:`join or initialize a swarm <swarm>`.
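As a brief, illustrative sketch (the image and service name are placeholders):

.. code-block:: python

   service = client.services.create('nginx:alpine', name='web')
   print(service.name, service.version)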
Methods available on ``client.services``:
.. rst-class:: hide-signature
.. py:class:: ServiceCollection
.. automethod:: create
.. automethod:: get
.. automethod:: list
Service objects
---------------
.. autoclass:: Service()
.. autoattribute:: id
.. autoattribute:: short_id
.. autoattribute:: name
.. autoattribute:: version
.. py:attribute:: attrs
The raw representation of this object from the server.
.. automethod:: reload
.. automethod:: remove
.. automethod:: tasks
.. automethod:: update
# Swarm management
Starting with Engine version 1.12 (API 1.24), it is possible to manage the
engine's associated Swarm cluster using the API.
## Initializing a new Swarm
You can initialize a new Swarm by calling `Client.init_swarm`. An advertise
address needs to be provided, usually simply by indicating which network
interface should be used. Advanced options can be provided using the
`swarm_spec` parameter, which is easily created using
`Client.create_swarm_spec`.
```python
spec = client.create_swarm_spec(
snapshot_interval=5000, log_entries_for_slow_followers=1200
)
client.init_swarm(
advertise_addr='eth0', listen_addr='0.0.0.0:5000', force_new_cluster=False,
swarm_spec=spec
)
```
## Joining an existing Swarm
To have the engine your client is connected to join an existing Swarm, use
the `Client.join_swarm` method. You will need to provide a list of at least
one remote address corresponding to other machines already part of the Swarm,
as well as the `join_token`. In most cases, a `listen_addr` and
`advertise_addr` for your node are also required.
```python
client.join_swarm(
remote_addrs=['192.168.14.221:2377'], join_token='SWMTKN-1-redacted',
listen_addr='0.0.0.0:5000', advertise_addr='eth0:5000'
)
```
## Leaving the Swarm
To leave the Swarm you are currently a member of, simply use
`Client.leave_swarm`. Note that if your engine is a Swarm manager,
you will need to specify `force=True` to be able to leave.
```python
client.leave_swarm(force=False)
```
## Retrieving Swarm status
You can retrieve information about your current Swarm status by calling
`Client.inspect_swarm`. This method takes no arguments.
```python
client.inspect_swarm()
```
## Listing Swarm nodes
List all nodes that are part of the current Swarm using `Client.nodes`.
The `filters` argument allows you to filter the results.
```python
client.nodes(filters={'role': 'manager'})
```
## Swarm API documentation
### Client.init_swarm
Initialize a new Swarm using the current connected engine as the first node.
**Params:**
* advertise_addr (string): Externally reachable address advertised to other
nodes. This can either be an address/port combination in the form
`192.168.1.1:4567`, or an interface followed by a port number, like
`eth0:4567`. If the port number is omitted, the port number from the listen
address is used. If `advertise_addr` is not specified, it will be
automatically detected when possible. Default: None
* listen_addr (string): Listen address used for inter-manager communication,
as well as determining the networking interface used for the VXLAN Tunnel
Endpoint (VTEP). This can either be an address/port combination in the form
`192.168.1.1:4567`, or an interface followed by a port number, like
`eth0:4567`. If the port number is omitted, the default swarm listening port
is used. Default: '0.0.0.0:2377'
* force_new_cluster (bool): Force creating a new Swarm, even if already part of
one. Default: False
* swarm_spec (dict): Configuration settings of the new Swarm. Use
`Client.create_swarm_spec` to generate a valid configuration. Default: None
**Returns:** `True` if the request went through. Raises an `APIError` if it
fails.
#### Client.create_swarm_spec
Create a `docker.types.SwarmSpec` instance that can be used as the `swarm_spec`
argument in `Client.init_swarm`.
**Params:**
* task_history_retention_limit (int): Maximum number of task history entries stored.
* snapshot_interval (int): Number of log entries between snapshots.
* keep_old_snapshots (int): Number of snapshots to keep beyond the current
  snapshot.
* log_entries_for_slow_followers (int): Number of log entries to keep around
  to sync up slow followers after a snapshot is created.
* heartbeat_tick (int): Number of ticks (in seconds) between each heartbeat.
* election_tick (int): Number of ticks (in seconds) needed without a leader to
  trigger a new election.
* dispatcher_heartbeat_period (int): The delay for an agent to send a
  heartbeat to the dispatcher.
* node_cert_expiry (int): Automatic expiry for node certificates.
* external_ca (dict): Configuration for forwarding signing requests to an
external certificate authority. Use `docker.types.SwarmExternalCA`.
* name (string): Swarm's name
**Returns:** `docker.types.SwarmSpec` instance.
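For example, a hedged sketch reusing the parameters shown above (all values are illustrative):
```python
spec = client.create_swarm_spec(
    snapshot_interval=5000,
    log_entries_for_slow_followers=1200,
    node_cert_expiry=7776000000000000
)
client.init_swarm(advertise_addr='eth0', swarm_spec=spec)
```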
#### docker.types.SwarmExternalCA
Create a configuration dictionary for the `external_ca` argument in a
`SwarmSpec`.
**Params:**
* protocol (string): Protocol for communication with the external CA (currently
only “cfssl” is supported).
* url (string): URL where certificate signing requests should be sent.
* options (dict): An object with key/value pairs that are interpreted as
protocol-specific options for the external CA driver.
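A hedged sketch, assuming the constructor accepts the parameters listed above (the CA URL is a placeholder):
```python
external_ca = docker.types.SwarmExternalCA(
    url='https://ca.example.com/sign',  # placeholder URL
    protocol='cfssl'
)
spec = client.create_swarm_spec(external_ca=external_ca)
```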
### Client.inspect_node
Retrieve low-level information about a Swarm node
**Params:**
* node_id (string): ID of the node to be inspected.
**Returns:** A dictionary containing data about this node. See sample below.
```python
{u'CreatedAt': u'2016-08-11T23:28:39.695834296Z',
u'Description': {u'Engine': {u'EngineVersion': u'1.12.0',
u'Plugins': [{u'Name': u'bridge', u'Type': u'Network'},
{u'Name': u'host', u'Type': u'Network'},
{u'Name': u'null', u'Type': u'Network'},
{u'Name': u'overlay', u'Type': u'Network'},
{u'Name': u'local', u'Type': u'Volume'}]},
u'Hostname': u'dockerserv-1.local.net',
u'Platform': {u'Architecture': u'x86_64', u'OS': u'linux'},
u'Resources': {u'MemoryBytes': 8052109312, u'NanoCPUs': 4000000000}},
u'ID': u'1kqami616p23dz4hd7km35w63',
u'ManagerStatus': {u'Addr': u'10.0.131.127:2377',
u'Leader': True,
u'Reachability': u'reachable'},
u'Spec': {u'Availability': u'active', u'Role': u'manager'},
u'Status': {u'State': u'ready'},
u'UpdatedAt': u'2016-08-11T23:28:39.979829529Z',
u'Version': {u'Index': 9}}
```
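The node ID used below is the one from the sample above; any node ID returned by `Client.nodes` works:
```python
node = client.inspect_node(node_id='1kqami616p23dz4hd7km35w63')
print(node['Status']['State'])  # 'ready' in the sample above
```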
### Client.inspect_swarm
Retrieve information about the current Swarm.
**Returns:** A dictionary containing information about the Swarm. See sample
below.
```python
{u'CreatedAt': u'2016-08-04T21:26:18.779800579Z',
u'ID': u'8hk6e9wh4iq214qtbgvbp84a9',
u'JoinTokens': {u'Manager': u'SWMTKN-1-redacted-1',
u'Worker': u'SWMTKN-1-redacted-2'},
u'Spec': {u'CAConfig': {u'NodeCertExpiry': 7776000000000000},
u'Dispatcher': {u'HeartbeatPeriod': 5000000000},
u'Name': u'default',
u'Orchestration': {u'TaskHistoryRetentionLimit': 10},
u'Raft': {u'ElectionTick': 3,
u'HeartbeatTick': 1,
u'LogEntriesForSlowFollowers': 500,
u'SnapshotInterval': 10000},
u'TaskDefaults': {}},
u'UpdatedAt': u'2016-08-04T21:26:19.391623265Z',
u'Version': {u'Index': 11}}
```
### Client.join_swarm
Join an existing Swarm.
**Params:**
* remote_addrs (list): Addresses of one or more manager nodes already
participating in the Swarm to join.
* join_token (string): Secret token for joining this Swarm.
* listen_addr (string): Listen address used for inter-manager communication
if the node gets promoted to manager, as well as determining the networking
interface used for the VXLAN Tunnel Endpoint (VTEP). Default: `None`
* advertise_addr (string): Externally reachable address advertised to other
nodes. This can either be an address/port combination in the form
`192.168.1.1:4567`, or an interface followed by a port number, like
`eth0:4567`. If the port number is omitted, the port number from the listen
address is used. If AdvertiseAddr is not specified, it will be automatically
detected when possible. Default: `None`
**Returns:** `True` if the request went through. Raises an `APIError` if it
fails.
### Client.leave_swarm
Leave a Swarm.
**Params:**
* force (bool): Leave the Swarm even if this node is a manager.
Default: `False`
**Returns:** `True` if the request went through. Raises an `APIError` if it
fails.
### Client.nodes
List Swarm nodes
**Params:**
* filters (dict): Filters to process on the nodes list. Valid filters:
`id`, `name`, `membership` and `role`. Default: `None`
**Returns:** A list of dictionaries containing data about each swarm node.
### Client.update_node
Update the Node's configuration
**Params:**
* version (int): The version number of the node object being updated. This
is required to avoid conflicting writes.
* node_spec (dict): Configuration settings to update. Any values not provided
will be removed. See the official [Docker API documentation](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/update-a-node) for more details.
Default: `None`.
**Returns:** `True` if the request went through. Raises an `APIError` if it
fails.
```python
node_spec = {'Availability': 'active',
'Name': 'node-name',
'Role': 'manager',
'Labels': {'foo': 'bar'}
}
client.update_node(node_id='24ifsmvkjbyhk', version=8, node_spec=node_spec)
```
### Client.update_swarm
Update the Swarm's configuration
**Params:**
* version (int): The version number of the swarm object being updated. This
is required to avoid conflicting writes.
* swarm_spec (dict): Configuration settings to update. Use
`Client.create_swarm_spec` to generate a valid configuration.
Default: `None`.
* rotate_worker_token (bool): Rotate the worker join token. Default: `False`.
* rotate_manager_token (bool): Rotate the manager join token. Default: `False`.
**Returns:** `True` if the request went through. Raises an `APIError` if it
fails.
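For example, to rotate the worker join token while keeping the current
configuration (a hedged sketch; it assumes the `Spec` dictionary returned by
`Client.inspect_swarm` is accepted as-is for `swarm_spec`):
```python
swarm_info = client.inspect_swarm()
client.update_swarm(
    version=swarm_info['Version']['Index'],  # required to avoid conflicting writes
    swarm_spec=swarm_info['Spec'],           # assumes the inspected Spec is reusable
    rotate_worker_token=True
)
```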
Swarm
=====
.. py:module:: docker.models.swarm
Manage `Docker Engine's swarm mode <https://docs.docker.com/engine/swarm/>`_.
To use any swarm methods, you first need to make the Engine part of a swarm. This can be done by either initializing a new swarm with :py:meth:`~Swarm.init`, or joining an existing swarm with :py:meth:`~Swarm.join`.
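For example, a minimal sketch (the addresses and token are illustrative, borrowed from the examples elsewhere in these docs):

.. code-block:: python

   # Initialize this Engine as the first swarm manager...
   client.swarm.init(advertise_addr='eth0', listen_addr='0.0.0.0:2377')

   # ...or join an existing swarm instead.
   client.swarm.join(
       remote_addrs=['192.168.14.221:2377'],
       join_token='SWMTKN-1-redacted'
   )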
These methods are available on ``client.swarm``:
.. rst-class:: hide-signature
.. py:class:: Swarm
.. automethod:: init()
.. automethod:: join()
.. automethod:: leave()
.. automethod:: update()
.. automethod:: reload()
.. autoattribute:: version
.. py:attribute:: attrs
The raw representation of this object from the server.
## Connection to daemon using HTTPS
**Note:** *These instructions are docker-py specific. Please refer to
[http://docs.docker.com/articles/https/](http://docs.docker.com/articles/https/)
first.*
## TLSConfig
**Params**:
* client_cert (tuple of str): Path to client cert, path to client key
* ca_cert (str): Path to CA cert file
* verify (bool or str): This can be `False` or a path to a CA cert file
* ssl_version (int): A valid [SSL version](
https://docs.python.org/3.5/library/ssl.html#ssl.PROTOCOL_TLSv1)
* assert_hostname (bool): Verify hostname of docker daemon
### configure_client
**Params**:
* client: ([Client](api.md#client-api)): A client to apply this config to
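This is normally done for you when the config is passed as the `tls` parameter
to the `Client` constructor, but it can also be applied manually. A minimal sketch:
```python
tls_config = docker.tls.TLSConfig(ca_cert='/path/to/ca.pem')
client = docker.Client(base_url='<https_url>')
tls_config.configure_client(client)
```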
## Authenticate server based on public/default CA pool
```python
client = docker.Client(base_url='<https_url>', tls=True)
```
Equivalent CLI options:
```bash
docker --tls ...
```
If you want to use TLS but don't want to verify the server certificate
(for example when testing with a self-signed certificate):
```python
tls_config = docker.tls.TLSConfig(verify=False)
client = docker.Client(base_url='<https_url>', tls=tls_config)
```
## Authenticate server based on given CA
```python
tls_config = docker.tls.TLSConfig(ca_cert='/path/to/ca.pem')
client = docker.Client(base_url='<https_url>', tls=tls_config)
```
Equivalent CLI options:
```bash
docker --tlsverify --tlscacert /path/to/ca.pem ...
```
## Authenticate with client certificate, do not authenticate server based on given CA
```python
tls_config = docker.tls.TLSConfig(
client_cert=('/path/to/client-cert.pem', '/path/to/client-key.pem')
)
client = docker.Client(base_url='<https_url>', tls=tls_config)
```
Equivalent CLI options:
```bash
docker --tls --tlscert /path/to/client-cert.pem --tlskey /path/to/client-key.pem ...
```
## Authenticate with client certificate, authenticate server based on given CA
```python
tls_config = docker.tls.TLSConfig(
client_cert=('/path/to/client-cert.pem', '/path/to/client-key.pem'),
verify='/path/to/ca.pem'
)
client = docker.Client(base_url='<https_url>', tls=tls_config)
```
Equivalent CLI options:
```bash
docker --tlsverify \
--tlscert /path/to/client-cert.pem \
--tlskey /path/to/client-key.pem \
--tlscacert /path/to/ca.pem ...
```
Using TLS
=========
.. py:module:: docker.tls
Both the main :py:class:`~docker.client.Client` and low-level
:py:class:`~docker.api.client.APIClient` can connect to the Docker daemon with TLS.
This is all configured automatically for you if you're using :py:func:`~docker.client.from_env`, but if you need some extra control it is possible to configure it manually by using a :py:class:`TLSConfig` object.
Examples
--------
For example, to check the server against a specific CA certificate:
.. code-block:: python
tls_config = docker.tls.TLSConfig(ca_cert='/path/to/ca.pem')
client = docker.Client(base_url='<https_url>', tls=tls_config)
This is the equivalent of ``docker --tlsverify --tlscacert /path/to/ca.pem ...``.
To authenticate with client certs:
.. code-block:: python
tls_config = docker.tls.TLSConfig(
client_cert=('/path/to/client-cert.pem', '/path/to/client-key.pem')
)
client = docker.Client(base_url='<https_url>', tls=tls_config)
This is the equivalent of ``docker --tls --tlscert /path/to/client-cert.pem --tlskey /path/to/client-key.pem ...``.
Reference
---------
.. autoclass:: TLSConfig()
# Using tmpfs
When creating a container, you can specify paths to be mounted with tmpfs using
the `tmpfs` argument to `create_host_config`, similarly to the `--tmpfs`
argument to `docker run`.
This capability is supported in Docker Engine 1.10 and up.
`tmpfs` can be either a list or a dictionary. If it's a list, each item is a
string specifying the path and (optionally) any configuration for the mount:
```python
client.create_container(
'busybox', 'ls',
host_config=client.create_host_config(tmpfs=[
'/mnt/vol2',
'/mnt/vol1:size=3G,uid=1000'
])
)
```
Alternatively, if it's a dictionary, each key is a path and each value contains
the mount options:
```python
client.create_container(
'busybox', 'ls',
host_config=client.create_host_config(tmpfs={
'/mnt/vol2': '',
'/mnt/vol1': 'size=3G,uid=1000'
})
)
```
# Using volumes
Volume declaration is done in two parts. Provide a list of mountpoints to
the `Client().create_container()` method, and declare mappings in the
`host_config` section.
```python
container_id = cli.create_container(
'busybox', 'ls', volumes=['/mnt/vol1', '/mnt/vol2'],
host_config=cli.create_host_config(binds={
'/home/user1/': {
'bind': '/mnt/vol2',
'mode': 'rw',
},
'/var/www': {
'bind': '/mnt/vol1',
'mode': 'ro',
}
})
)
```
You can alternatively specify binds as a list. This code is equivalent to the
example above:
```python
container_id = cli.create_container(
'busybox', 'ls', volumes=['/mnt/vol1', '/mnt/vol2'],
host_config=cli.create_host_config(binds=[
'/home/user1/:/mnt/vol2',
'/var/www:/mnt/vol1:ro',
])
)
```
Volumes
=======
.. py:module:: docker.models.volumes
Manage volumes on the server.
Methods available on ``client.volumes``:
.. rst-class:: hide-signature
.. py:class:: VolumeCollection
.. automethod:: create
.. automethod:: get
.. automethod:: list
Volume objects
--------------
.. autoclass:: Volume()
.. autoattribute:: id
.. autoattribute:: short_id
.. autoattribute:: name
.. py:attribute:: attrs
The raw representation of this object from the server.
.. automethod:: reload
.. automethod:: remove
site_name: docker-py Documentation
site_description: An API client for Docker written in Python
site_favicon: favicon_whale.png
site_url: https://docker-py.readthedocs.io
repo_url: https://github.com/docker/docker-py/
theme: readthedocs
pages:
- Home: index.md
- Client API: api.md
- Port Bindings: port-bindings.md
- Using Volumes: volumes.md
- Using TLS: tls.md
- Host devices: host-devices.md
- Host configuration: hostconfig.md
- Network configuration: networks.md
- Swarm management: swarm.md
- Swarm services: services.md
- Using tmpfs: tmpfs.md
- Using with Docker Machine: machine.md
- Change Log: change_log.md
- Contributing: contributing.md