

Deploy PHP application with Kubernetes on Ubuntu

Are you looking for the steps to deploy a PHP application with Kubernetes on Ubuntu?

This guide will help you.


Kubernetes is an open-source container orchestration platform that enables the operation of an elastic web server framework for cloud applications. 

Kubernetes can support data center outsourcing to public cloud service providers or can be used for web hosting at scale.

Here at Ibmi Media, as part of our Server Management Services, we regularly help our customers resolve Kubernetes related queries.

In this context, we shall look into how to deploy a PHP application with Kubernetes on Ubuntu.


Steps to deploy a PHP application with Kubernetes on Ubuntu

Here, you will learn how to deploy a PHP application with Kubernetes.


1. Creating the PHP-FPM and Nginx Services

First, we shall create the PHP-FPM and Nginx services. 

The PHP-FPM service will allow access to the PHP-FPM pods whereas the Nginx service will allow access to the Nginx pods.

i. In order to create a service, we need to create an object definition file.

For that, we SSH to the master node and create a definitions directory that will hold our Kubernetes object definitions.

$ mkdir definitions

ii. Then we navigate to the newly-created directory.

$ cd definitions

iii. To make a PHP-FPM service, we create a php_service.yaml file:

$ nano php_service.yaml

In the php_service.yaml file we set kind to Service to specify that this object is a service. Also, we name the service php since it will provide access to PHP-FPM.
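
A minimal definition along the following lines should work. This is a sketch that assumes the PHP-FPM pods carry the app: php and tier: backend labels used later in this guide and listen on the default PHP-FPM port 9000:

apiVersion: v1
kind: Service
metadata:
  name: php
  labels:
    tier: backend
spec:
  selector:
    app: php
    tier: backend
  ports:
  - protocol: TCP
    port: 9000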

iv. After adding the code, we hit CTRL + O to save the file, and then CTRL + X to exit nano. Then we create the service by running the below command:

$ kubectl apply -f php_service.yaml

v. We run the below command to verify that the service is running:

$ kubectl get svc

vi. Now we have the PHP-FPM service ready.

So, we can move on to creating the Nginx service. For that, we create and open a new file called nginx_service.yaml with the editor:

$ nano nginx_service.yaml

We add the below code to the nginx_service.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  selector:
    app: nginx
    tier: backend
  ports:
  - protocol: TCP
    port: 80
  externalIPs:
  - your_public_ip

vii. Then we save and close the file.

Here is the command that we run to create the Nginx service:

$ kubectl apply -f nginx_service.yaml
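
To confirm that the service is up and has picked up the external IP, we can optionally check it with:

$ kubectl get svc nginx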

2. Installing the DigitalOcean Storage Plug-In to deploy the PHP application

Here, we shall first configure a Kubernetes Secret object to store the DigitalOcean API token. Using these secret objects, we can share sensitive information, like SSH keys and passwords, with other Kubernetes objects within the same namespace.

So, we open a file named secret.yaml with the editor:

$ nano secret.yaml

Then we add the below code in it:

apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system
stringData:
  access-token: your-api-token

Then we create the Secret:

$ kubectl apply -f secret.yaml
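
If you prefer not to keep the token in a file, a Secret with the same contents can also be created directly from the command line (a sketch using the same your-api-token placeholder):

$ kubectl -n kube-system create secret generic digitalocean --from-literal=access-token=your-api-token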

Here is the command that we run to view the secret object.

$ kubectl -n kube-system get secret digitalocean

Once we have our Secret in place, we can now install the DigitalOcean block storage plug-in:

$ kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v1.1.0.yaml
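
To check that the plug-in deployed correctly, we can list the kube-system pods and look for the csi entries (the exact pod names vary with the plug-in release):

$ kubectl -n kube-system get pods | grep csi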

 

3. Creating the Persistent Volume with Kubernetes on Ubuntu

We can access the Persistent Volume (PV) through a PersistentVolumeClaim (PVC), which our pods will reference to mount the PV at the required path.

First, we open a file named code_volume.yaml:

$ nano code_volume.yaml

Then we add the below code in it:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage

Then we save and exit the file.

After that, we create the code PVC using kubectl:

$ kubectl apply -f code_volume.yaml
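
To confirm that the claim was created and bound to a volume, we can optionally check its status:

$ kubectl get pvc code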

 

4. Creating a PHP-FPM application Deployment

i. In order to create a Deployment, we create a file named php_deployment.yaml:

$ nano php_deployment.yaml

ii. Then we add the below code in it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
      tier: backend
  template:
    metadata:
      labels:
        app: php
        tier: backend
    spec:
      volumes:
      - name: code
        persistentVolumeClaim:
          claimName: code
      containers:
      - name: php
        image: php:7-fpm
        volumeMounts:
        - name: code
          mountPath: /code
      initContainers:
      - name: install
        image: busybox
        volumeMounts:
        - name: code
          mountPath: /code
        command:
        - wget
        - "-O"
        - "/code/index.php"
        - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php

iii. We then save the file and exit it.


iv. After that, we create the PHP-FPM Deployment with kubectl:

$ kubectl apply -f php_deployment.yaml

v. Here is the command that we run to view the Deployment.

$ kubectl get deployments

vi. To view the pods that this Deployment started with, we run the following command:

$ kubectl get pods
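
If other workloads are running in the cluster, the listing can be narrowed to this Deployment's pods by selecting on the labels defined in php_deployment.yaml:

$ kubectl get pods -l app=php,tier=backend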

5. Creating the Nginx Deployment on Ubuntu to deploy a PHP application

Now, we shall use a ConfigMap to configure Nginx. The ConfigMap holds the configuration in a key-value format that we can reference in other Kubernetes object definitions.

i. We create a nginx_configMap.yaml file for the ConfigMap with the editor:

$ nano nginx_configMap.yaml

ii. Then we add the below code in it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  labels:
    tier: backend
data:
  config : |
    server {
      index index.php index.html;
      error_log /var/log/nginx/error.log;
      access_log /var/log/nginx/access.log;
      root /code;

      location / {
        try_files $uri $uri/ /index.php?$query_string;
      }

      location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
      }
    }

iii. We save the file and exit it.


iv. Then we create the ConfigMap:

$ kubectl apply -f nginx_configMap.yaml
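
To double-check the configuration that will be mounted, we can optionally describe the ConfigMap:

$ kubectl describe configmap nginx-config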

Now that we are done creating the ConfigMap, we can build the Nginx Deployment.


v. We start by opening a new nginx_deployment.yaml file in the editor:

$ nano nginx_deployment.yaml

Here is the code that we add:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      tier: backend
  template:
    metadata:
      labels:
        app: nginx
        tier: backend
    spec:
      volumes:
      - name: code
        persistentVolumeClaim:
          claimName: code
      - name: config
        configMap:
          name: nginx-config
          items:
          - key: config
            path: site.conf
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: code
          mountPath: /code
        - name: config
          mountPath: /etc/nginx/conf.d

vi. We save the file and exit the editor.


vii. Now, we shall create the Nginx Deployment:

$ kubectl apply -f nginx_deployment.yaml
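
Once a pod from this Deployment is running, we can optionally confirm that the ConfigMap was mounted as site.conf inside the Nginx container (this check assumes the pod has already started):

$ kubectl exec deployment/nginx -- ls /etc/nginx/conf.d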

viii. We can list the Deployments with this command:

$ kubectl get deployments

Here is the command that we run to list the pods managed by both of the Deployments:

$ kubectl get pods

Now all of the Kubernetes objects are active.

So we can visit the Nginx service in the browser.


ix. We run the below command to list the running services:

$ kubectl get services -o wide

Here, we get the External IP for our Nginx service.

In the browser, we can visit the server by typing http://your_public_ip.

We should see the output of phpinfo(), which confirms that the Kubernetes services are up and running.
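
We can also confirm this from the command line on the master node, using the same placeholder IP:

$ curl http://your_public_ip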


[Need urgent assistance with DigitalOcean queries? We can help you with it. ]


Conclusion

This article guided you through the steps to deploy a #PHP application with Kubernetes on #Ubuntu. 

Kubernetes, at its basic level, is a system for running and coordinating containerized applications across a cluster of machines. 

It is a platform designed to completely manage the life cycle of containerized applications and services using methods that provide predictability, scalability, and high availability.

#Kubernetes really shines when your #application consists of multiple services running in different containers.

Kubernetes, also referred to as K8s, is an open-source system used to manage Linux containers across private, public and hybrid cloud environments. 

In other words, Kubernetes can be used to manage microservice architectures and is deployable on most cloud providers.