Schedulers¶
Scheduling storage one resource at a time...
Overview¶
This page reviews the scheduling systems supported by REX-Ray.
Docker¶
The majority of the documentation for the Docker integration driver has been relocated to the libStorage project.
External Access¶
By default, REX-Ray's embedded Docker Volume Plug-in endpoint handles
requests from the local Docker service via a UNIX socket. Doing so
restricts the endpoint to the localhost, increasing network security by
removing a possible attack vector. If an externally accessible Docker Volume
Plug-in endpoint is required, it's still possible to create one by overriding
the address for the `default-docker` module in REX-Ray's configuration file:

```yaml
rexray:
  modules:
    default-docker:
      host: tcp://:7981
```
The above example illustrates how to override the `default-docker` module's
endpoint address. The value `tcp://:7981` instructs the Docker Volume Plug-in
to listen on port 7981 for all configured interfaces.
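If exposing the endpoint on every interface is undesirable, the same override can bind the listener to a single address instead. A minimal sketch, where `192.168.56.20` is a placeholder for the host's own IP:

```yaml
rexray:
  modules:
    default-docker:
      # Bind the Volume Plug-in endpoint to one interface only;
      # the address below is a placeholder for this host's IP.
      host: tcp://192.168.56.20:7981
```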
Using a TCP endpoint has a side effect, however: the local Docker instance will not know about the Volume Plug-in endpoint, as there is no longer a UNIX socket file in the directory the Docker service continually scans.
On the local system, and in fact on all systems where the Docker service needs
to know about this externally accessible Volume Plug-in endpoint, a spec file
must be created at `/etc/docker/plugins/rexray.spec`. Inside this file simply
include a single line with the network address of the endpoint. For example:

```
tcp://192.168.56.20:7981
```
With a spec file located at `/etc/docker/plugins/rexray.spec` that contains
the above contents, Docker instances will query the Volume Plug-in endpoint at
`tcp://192.168.56.20:7981` when volume requests are received.
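Creating the spec file can be scripted as part of host provisioning. A minimal sketch, which writes to a temporary directory purely for illustration; on a real host the file belongs in Docker's spec directory described above:

```shell
# Sketch: generate a REX-Ray spec file containing the endpoint address.
# Writes to a temporary directory for illustration only; on a real host
# the file goes in Docker's spec directory described above.
spec_dir="$(mktemp -d)"
printf 'tcp://192.168.56.20:7981\n' > "${spec_dir}/rexray.spec"
cat "${spec_dir}/rexray.spec"
```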
Volume Management¶
The `volume` sub-command for Docker 1.12+ should look similar to the following:

```shell
$ docker volume

Usage: docker volume [OPTIONS] [COMMAND]

Manage Docker volumes

Commands:
  create   Create a volume
  inspect  Return low-level information on a volume
  ls       List volumes
  rm       Remove a volume
```
List Volumes¶
The `ls` command returns a list of available volumes that have been discovered via Docker Volume Plug-in endpoints such as REX-Ray. Each volume name is expected to be unique; thus volume names must be unique across all endpoints, and in turn, across all storage platforms exposed by REX-Ray.
With the exception of the `local` driver, the list of returned volumes is
generated by the backend storage platform with which the configured driver
communicates:

```shell
$ docker volume ls
DRIVER              VOLUME NAME
local               local1
scaleio             Volume-001
virtualbox          vbox1
```
Inspect Volume¶
The inspect command can be used to retrieve details about a volume related to
both Docker and the underlying storage platform. The fields listed under
`Status` are all generated by REX-Ray, including `Size in GB`, `Volume Type`,
and `Availability Zone`.
The `Scope` parameter ensures that when the specified volume driver is
inspected by multiple Docker hosts, the volumes tagged as `global` are all
interpreted as the same volume. This reduces unnecessary round-trips in
situations where an application such as Docker Swarm is connected to hosts
configured with REX-Ray.
```shell
$ docker volume inspect vbox1
[
    {
        "Name": "vbox1",
        "Driver": "virtualbox",
        "Mountpoint": "",
        "Status": {
            "availabilityZone": "",
            "fields": null,
            "iops": 0,
            "name": "vbox1",
            "server": "virtualbox",
            "service": "virtualbox",
            "size": 8,
            "type": ""
        },
        "Labels": {},
        "Scope": "global"
    }
]
```
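When only a single field is needed, Docker's `--format` flag for `volume inspect` accepts a Go template. A sketch, assuming the `Status` fields shown above:

```shell
$ docker volume inspect --format '{{ .Status.size }}' vbox1
```

This would print just the volume's size as reported by REX-Ray.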
Create Volume¶
Docker's `volume create` command enables the creation of new volumes on the
underlying storage platform. Newly created volumes are available immediately
to be attached and mounted. The `volume create` command also supports the CLI
flag `-o|--opt` in order to support providing custom data to the volume
creation workflow:

```shell
$ docker volume create --driver=virtualbox --name=vbox2 --opt=size=2
vbox2
```
Additional, valid options for the `-o|--opt` parameter include:

option | description
---|---
`size` | Size in GB
`IOPS` | IOPS
`volumeType` | Type of Volume or Storage Pool
`volumeName` | Create from an existing volume name
`volumeID` | Create from an existing volume ID
`snapshotName` | Create from an existing snapshot name
`snapshotID` | Create from an existing snapshot ID
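For example, the snapshot options allow a new volume to be created from an existing snapshot, provided the configured storage driver supports snapshots. A sketch with a hypothetical snapshot name:

```shell
$ docker volume create --driver=virtualbox --name=vbox3 --opt=snapshotName=vbox2-snap
```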
Remove Volume¶
A volume may be removed once it is no longer in use by a container, running or otherwise. If the container being removed is the last one to leverage a given volume, removing that container also causes the volume to be removed:

```shell
$ docker volume rm vbox2
```
Containers with Volumes¶
Please review the Applications section for information on configuring popular applications with persistent storage via Docker and REX-Ray.
Kubernetes¶
REX-Ray can be integrated with Kubernetes, allowing
pods to consume data stored on volumes that are orchestrated by REX-Ray. Using
Kubernetes' FlexVolume plug-in, REX-Ray can provide uniform access to storage
operations such as attach, mount, detach, and unmount for any configured
storage provider. REX-Ray provides an adapter script called `FlexRex` which
integrates with FlexVolume to interact with the backing storage system.
Pre-Requisites¶
- Kubernetes 1.5 or higher
- REX-Ray 0.7 or higher
- jq binary
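Since the FlexRex adapter depends on the `jq` binary, a quick pre-flight check on each node can save debugging time later. A minimal sketch:

```shell
# Report whether the jq dependency is present on this node.
jq_status="$(command -v jq >/dev/null 2>&1 && echo present || echo missing)"
echo "jq: ${jq_status}"
```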
Installation¶
It is assumed that you have a Kubernetes cluster at your disposal. On each Kubernetes node (running the kubelet), do the following:

- Install and configure the REX-Ray binary as prescribed in the Installation section.
- Next, validate the REX-Ray installation by running `rexray volume ls` as shown in the following:
```shell
# rexray volume ls
ID                Name   Status     Size
925def7200000006  vol01  available  32
925def7100000005  vol02  available  32
```
If there is no issue, you should see output similar to the above, which shows
a list of previously created volumes. If instead you get an error,
ensure that REX-Ray is properly configured for the intended storage system.
Next, using the REX-Ray binary, install the `FlexRex` adapter script on the
node as shown below:

```shell
# rexray flexrex install
```

This should produce the following output showing that the FlexRex script is installed successfully:

```shell
Path                                                                        Installed  Modified
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/rexray~flexrex/flexrex  true       false
```
The path shown above is the default location where the FlexVolume plug-in will
expect to find its integration code. If you are not using the default location
with FlexVolume, you can install `FlexRex` in an arbitrary location using:

```shell
# rexray flexrex install --path /opt/plugins/rexray~flexrex/flexrex
```
Next, restart the kubelet process on the node:

```shell
# systemctl restart kubelet
```

You can validate that the FlexRex script has been started successfully by searching the kubelet log for an entry similar to the following:

```
I0208 10:56:57.412207    5348 plugins.go:350] Loaded volume plugin "rexray/flexrex"
```
Pods and Persistent Volumes¶
You can now deploy pods and persistent volumes that use storage systems
orchestrated by REX-Ray. It is worth pointing out that the Kubernetes
FlexVolume plug-in can only attach volumes that already exist in the storage
system. Any volume that is to be used by a Kubernetes resource must be listed
in the output of the `rexray volume ls` command.
Pod with REX-Ray volume¶
The following YAML file shows the definition of a pod that uses FlexRex to attach a volume to be used by the pod.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-0
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: pod-0
    volumeMounts:
    - mountPath: /test-pd
      name: vol-0
  volumes:
  - name: vol-0
    flexVolume:
      driver: rexray/flexrex
      fsType: ext4
      options:
        volumeID: test-vol-1
        forceAttach: "true"
        forceAttachDelay: "15"
```
Notice in the section under `flexVolume` the name of the driver attribute,
`driver: rexray/flexrex`. This is used by the FlexVolume plug-in to launch
REX-Ray. Additional options can be provided in the `options:` section as
follows:
Option | Description
---|---
`volumeID` | Reference name of the volume in REX-Ray
`forceAttach` | When true, ensures the volume is available before attaching (optional, defaults to false)
`forceAttachDelay` | Total amount of time (in seconds) to attempt attachment, with a 5-second interval between tries (optional)
REX-Ray PersistentVolume¶
The next example shows a YAML definition of a Persistent Volume (PV) managed by REX-Ray.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol01
spec:
  capacity:
    storage: 32Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  flexVolume:
    driver: rexray/flexrex
    fsType: xfs
    options:
      volumeID: redis01
      forceAttach: "true"
      forceAttachDelay: "15"
```
The next YAML shows a Persistent Volume Claim (PVC) that carves out `10Gi` out
of the PV defined above.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol01
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
The claim can then be used by a pod in a YAML definition as shown below:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: pod-1
    volumeMounts:
    - mountPath: /test-pd
      name: vol01
  volumes:
  - name: vol01
    persistentVolumeClaim:
      claimName: vol01
```
Mesos¶
In Mesos the frameworks are responsible for receiving requests from consumers and then proceeding to schedule and manage tasks. While some frameworks, like Marathon, are open to run any workload for sustained periods of time, others are use-case specific, such as Cassandra. In addition to consumers, frameworks may also receive requests from other platforms and schedulers such as Cloud Foundry, Kubernetes, and Swarm.
Once a resource offer is accepted from Mesos, tasks are launched to support the associated workloads. These tasks are eventually distributed to Mesos agents in order to spin up containers.
REX-Ray enables on-demand storage allocation for agents receiving tasks via two deployment configurations:
- Docker Containerizer with Marathon
- Mesos Containerizer with Marathon
Docker Containerizer with Marathon¶
When the framework leverages the Docker containerizer, Docker and REX-Ray should both already be configured and working. The following example shows how to use Marathon in order to bring an application online with external volumes:
```json
{
    "id": "nginx",
    "container": {
        "docker": {
            "image": "million12/nginx",
            "network": "BRIDGE",
            "portMappings": [{
                "containerPort": 80,
                "hostPort": 0,
                "protocol": "tcp"
            }],
            "parameters": [{
                "key": "volume-driver",
                "value": "rexray"
            }, {
                "key": "volume",
                "value": "nginx-data:/data/www"
            }]
        }
    },
    "cpus": 0.2,
    "mem": 32.0,
    "instances": 1
}
```
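Saving the definition above to a file, the application can then be submitted to Marathon's REST API; the Marathon host, port, and file name below are placeholders:

```shell
$ curl -X POST http://marathon.example.com:8080/v2/apps \
    -H 'Content-Type: application/json' \
    -d @nginx.json
```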
Mesos Containerizer with Marathon¶
Mesos 0.23+ includes modules that enable extensibility for different portions of the architecture. The `dvdcli` and `mesos-module-dvdi` projects are required to enable external volume support with the native containerizer.
The next example is similar to the one above, except in this instance the
native containerizer is preferred and volume requests are handled by the
`env` section.

```json
{
    "id": "hello-play",
    "cmd": "while [ true ] ; do touch /var/lib/rexray/volumes/test12345/hello ; sleep 5 ; done",
    "mem": 32,
    "cpus": 0.1,
    "instances": 1,
    "env": {
        "DVDI_VOLUME_NAME": "test12345",
        "DVDI_VOLUME_DRIVER": "rexray",
        "DVDI_VOLUME_OPTS": "size=5,iops=150,volumetype=io1,newfstype=xfs,overwritefs=true"
    }
}
```
This example also illustrates several important settings for the native method. While the VirtualBox driver is being used, any validated storage platform should work. Additionally, there are two options recommended for this type of configuration:
Property | Recommendation
---|---
`libstorage.integration.volume.operations.mount.preempt` | Setting this flag to `true` ensures any host can preempt control of a volume from other hosts
`libstorage.integration.volume.operations.unmount.ignoreUsedCount` | Enabling this flag declares that `mesos-module-dvdi` is the authoritative source for deciding when to unmount volumes
Please refer to the libStorage documentation for more information on Volume Configuration options.
note

The `libstorage.integration.volume.operations.remove.disable` property can
prevent the scheduler from removing volumes. Setting this flag to `true` is
recommended when using Mesos with Docker 1.9.1 or earlier.
```yaml
libstorage:
  service: virtualbox
  integration:
    volume:
      operations:
        mount:
          preempt: true
        unmount:
          ignoreusedcount: true
        remove:
          disable: true
virtualbox:
  volumePath: $HOME/VirtualBox/Volumes
```