When using OpenShift Origin to deploy software, you often have your containers execute a database migration as part of their deployment, e.g. in your Dockerfile:
CMD ./manage.py migrate --noinput && \
    gunicorn -w 4 -b 0.0.0.0:8000 myapp.wsgi:application
This works great until your migration won't apply cleanly without intervention, your newly deployed pods are stuck in CrashLoopBackOff, and you need to understand why. This is where the `oc debug` command comes in. With `oc debug` we can ask for a shell in a pod, either a running one or a newly created one.
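Before reaching for `oc debug`, it helps to confirm which pods are failing and why. A minimal session might look like this (the pod name is illustrative; yours will differ):

```shell
# List pods and look for a CrashLoopBackOff status
oc get pods

# Read the failing container's output, including the migration error
# (replace the pod name with one from your own cluster)
oc logs frontend-44-a1b2c
```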
Assuming we have a deployment config `frontend`:
oc debug dc/frontend
will give us a shell in a running pod from the latest stable deployment (i.e. your currently running instances, not the ones that are crashing).
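Inside that shell you can inspect the database state your new code will migrate from. Since the deployment above uses `manage.py`, this is a Django project, so a sketch might be (app label `myapp` is an assumption, not from the original):

```shell
# Inside the debug shell of a stable pod:
# show which migrations the database has already applied
./manage.py showmigrations myapp

# confirm the database connection settings the pod is using
env | grep -i database
```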
However, let's say deployment #44 is the one crashing. We can debug a pod from the replication controller for deployment #44:
oc debug rc/frontend-44
will give us a shell in a new pod for that deployment, running our new code, and lets us manually massage our broken migration into place (e.g. by faking the data migration that was retroactively added for production).
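As a concrete sketch of that last step: Django's `migrate --fake` records a migration as applied without running it. The app label and migration name below are hypothetical stand-ins for whichever migration is failing in your project:

```shell
# Inside the debug shell of the new (broken) pod:
# see which migrations are still unapplied
./manage.py showmigrations myapp

# mark the retroactive migration as applied without executing it
./manage.py migrate myapp 0044_backfill_data --fake

# then apply the remaining migrations normally
./manage.py migrate --noinput
```

Once the migration state is sorted out, the crashing pods should pass their `migrate --noinput` step on the next restart and the deployment can proceed.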