Adventures with Pydio Cells and Kubernetes

Ross Beazley
Dec 8, 2018


I’m no expert; I’m not even an amateur; I’m a novice Kubernetes user. This is a record of the steps I took to “get Pydio Cells working” on my home mixed-architecture, five-node Kubernetes cluster.

I may re-write this into something more understandable at some point!

I deployed MariaDB to a Pi 3 running a 64-bit HypriotOS; I could only get Pydio to run against MariaDB 10.1. As far as I could tell, MySQL wasn’t available on arm64 past version 5.5. I pulled the official image from Docker Hub.
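
A minimal MariaDB deployment along these lines does the job — this is a sketch rather than my exact manifest (the names, labels and root password are illustrative, and the image tag should match whatever arm64 MariaDB 10.1 build you pull from Docker Hub):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      nodeSelector:
        role: database            # the pi3 node, labelled by hand
      containers:
      - image: mariadb:10.1       # or whichever arm64 variant you pulled
        name: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "changeme"       # illustrative; use a Secret for anything real
        ports:
        - containerPort: 3306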

NFS was used to provide storage for the Pydio Cells deployment.
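
The NFS bit boils down to a PersistentVolume plus the pydio-claim PersistentVolumeClaim that the deployment below references — something like this sketch (the server address and export path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pydio-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: ""            # bind by hand rather than via a StorageClass
  nfs:
    server: 192.168.1.10          # placeholder: address of the NFS server
    path: /exports/pydio          # placeholder: exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pydio-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi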

The Pydio Cells Docker image wasn’t available for arm, so I had to use an old Intel Core i3 laptop.
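
In the deployment below the pod is pinned to that laptop with a role: pydio nodeSelector (so the node needs a matching role=pydio label). If you’d rather not label nodes by hand, the relevant fragment of the pod spec could instead select on the architecture label the kubelet sets itself:

nodeSelector:
  beta.kubernetes.io/arch: amd64    # kubernetes.io/arch on newer clusters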

This was my deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pydio
spec:
  selector:
    matchLabels:
      app: pydio
  template:
    metadata:
      labels:
        app: pydio
    spec:
      nodeSelector:
        role: pydio
      containers:
      - image: pydio/cells:latest
        name: pydio
        env:
        - name: CELLS_BIND
          value: "0.0.0.0:30010"
        - name: CELLS_EXTERNAL
          value: "0.0.0.0:30010"
        command: ["/bin/sh"]
        args: ["-c", "/root/.config/pydio/cells/migrate_ips.sh"]
        volumeMounts:
        - name: pydio-persistent-storage
          mountPath: /root/.config
        resources:
          limits:
            memory: "2048Mi"
            cpu: "2.0"
          requests:
            memory: "1024Mi"
            cpu: "1.0"
      volumes:
      - name: pydio-persistent-storage
        persistentVolumeClaim:
          claimName: pydio-claim

In the first pass I punched a hole out of the cluster using a NodePort service to expose the Pydio Cells service.
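
The NodePort service itself is only a few lines — roughly this sketch, with the ports matching the CELLS_BIND value in the deployment above:

apiVersion: v1
kind: Service
metadata:
  name: pydio
spec:
  type: NodePort
  selector:
    app: pydio
  ports:
  - port: 30010
    targetPort: 30010        # the port Cells binds to inside the pod
    nodePort: 30010          # exposed on every node, within the default NodePort range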

I have HAProxy running on my broadband router terminating the TLS traffic (thanks to Let’s Encrypt), and this proxies the traffic to the Pydio Cells service, which is listening on a clear HTTP endpoint.

Getting clear HTTP tripped me up a bit, and I had to “jump through some hoops” to get it working.

Start off with TLS enabled, but then edit the pydio.json config file to disable it:

"cert": {
"proxy": {
"self": true,
"ssl": false
}
}

It was important to still have a cert; things didn’t work without it. Go figure.

I also moved from 0.0.0.0 to the public DNS entry specified for the deployment, again by editing the pydio.json file:

"defaults": {
"database": "dcf9c3b7607c942aab9e2b0a0db91bb448f7b1cf",
"datasource": "pydiods1",
"url": "https://some.public.url.co.uk",
"urlInternal": "http://0.0.0.0:30010"
},

Pydio storage was a real fiddle. When the cluster restarted, or the POD was restarted, it got a different IP address. So for now I use a script to update the config file with the IP address of the POD on startup:

#!/bin/sh
# Grab the pod's current IP address from eth0
IP=`ifconfig eth0 | grep inet | awk '{print $2}' | sed 's/addr://'`
# Rewrite the PeerAddress entry in pydio.json with the new IP
sed -i "s/PeerAddress\":\ \".*\",/PeerAddress\":\ \"$IP\",/g" /root/.config/pydio/cells/pydio.json
# Hand off to the normal Pydio Cells entrypoint
/bin/docker-entrypoint.sh cells start

And that’s it… I deployed Collabora Online to another node in the cluster. Bizarrely, I couldn’t get it to edit files (always “file not found”) until I enabled debug logging… go figure.

Next up, I’m going to work out how I can run each microservice in its own POD. For now I’m happy with a Google-free online docs system.
