Building a Vue SPA with Laravel Part 5

We left off in Part 4 with the ability to edit users and learned how to use v-model to track changes to the view component user property. Now we’re ready to look at deleting users and how to handle the UI after the delete action succeeds.

Along the way, we’re going to look at building an Axios client instance to enable greater flexibility in how we configure API clients.

Updating the API to Handle Deleting Users

The first thing we are going to work on is defining the API route for deleting an individual user. Thanks to route model binding this is only a couple of lines in the UsersController:

public function destroy(User $user)
{
    $user->delete();

    return response(null, 204);
}

Next, define the new route at the bottom of the Api group in the routes/api.php file:

Route::namespace('Api')->group(function () {
    Route::get('/users', 'UsersController@index');
    Route::get('/users/{user}', 'UsersController@show');
    Route::put('/users/{user}', 'UsersController@update');
    Route::delete('/users/{user}', 'UsersController@destroy');
});

Deleting Users on the Frontend

We’re going to add the delete functionality to our /users/:id/edit view component, by adding a delete button to the UsersEdit.vue component under the “Update” button:

<div class="form-group">
    <button type="submit" :disabled="saving">Update</button>
    <button :disabled="saving" @click.prevent="onDelete($event)">Delete</button>
</div>

We copied the :disabled attribute from the update button; it prevents an inadvertent update or delete while an action is already in progress.

Next, we need to hook up the onDelete() callback to handle deleting the user:

onDelete() {
  this.saving = true;

  api.delete(this.user.id)
     .then((response) => {
        console.log(response);
     });
}

We call the delete() function on our API client and then chain a callback to log the response object to the console. The update and delete buttons are disabled when you click "Delete" because we set this.saving = true; we will come back to this point in a second. If you have the console open, you will see a 204 No Content response object indicating the deletion worked.

How to React to a Successfully Deleted User

One thing that’s a little different from updating a user is that we don’t have a user in the database once we delete the record. In a traditional web application, we would likely delete the record and then redirect the user back to the list of all users.

We can do this very thing in our SPA by programmatically navigating the user back to the /users page:

this.$router.push({ name: 'users.index' })

Applying the this.$router.push() call to our event, the most basic version would look like this:

onDelete() {
  this.saving = true;
  api.delete(this.user.id)
     .then((response) => {
        this.$router.push({ name: 'users.index' });
     });
}

If you refresh the application and delete a user, you will notice a brief flash of disabled buttons, and then the browser navigates to the /users page without any feedback.

We could handle notifying the user with a dedicated toast/notification mechanism. I’ll leave the approach up to you, but here’s a basic idea of what I’m talking about:

onDelete() {
  this.saving = true;
  api.delete(this.user.id)
     .then((response) => {
        this.message = 'User Deleted';
        setTimeout(() => this.$router.push({ name: 'users.index' }), 2000);
     });
}

The above code sets the this.message data property we set up in Part 4 and waits two seconds before navigating to the /users index page.

You could also use something like portal-vue or a component in your layout that flashes the message temporarily (or with a mandatory close button) to indicate an action has succeeded (or failed) to give the user some feedback.
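
To illustrate, here's a minimal sketch of a flash-style component you could register in your layout; the component itself, its props, and the default duration are all hypothetical, not part of the project:

<template>
  <div v-if="visible" class="alert">{{ text }}</div>
</template>
<script>
export default {
  props: {
    // The message to display; clearing it hides the alert.
    text: { type: String, default: null },
    // How long the message stays visible, in milliseconds.
    duration: { type: Number, default: 2000 },
  },
  data() {
    return { visible: false };
  },
  watch: {
    // Show the alert whenever a new message arrives, then hide it after `duration`.
    text(value) {
      if (!value) return;
      this.visible = true;
      setTimeout(() => (this.visible = false), this.duration);
    },
  },
};
</script>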

Four Oh Four

You might have noticed that if our Vue route matches the pattern /users/:id/edit, we still might have a 404 response from the API if the user id is not found:

With a server-side Laravel application, we could easily render a 404.blade.php view from a ModelNotFoundException. A SPA is a little bit different, though: the above route is valid as far as Vue Router is concerned, so we need our component to either render an error component in place of the form or redirect the user to a dedicated 404 route.

We will add a few new routes to the Vue router configuration in resources/assets/js/app.js with a dedicated 404 view and a catch-all component that redirects routes that don’t match to the 404 route:

{ path: '/404', name: '404', component: NotFound },
{ path: '*', redirect: '/404' },

We’ll create a simple NotFound component at resources/assets/js/views/NotFound.vue:

<template>
  <div>
    <h2>Not Found</h2>
    <p>Woops! Looks like the page you requested cannot be found.</p>
  </div>
</template>

Because we have a catch-all route on the backend in Laravel, the frontend also needs a catch-all route to respond with a 404 page if the path doesn't match a defined route. Here's the backend route as a refresher; it catches all routes and renders the SPA template:

Route::get('/{any}', 'SpaController@index')
    ->where('any', '.*');

If you enter an invalid URL like /does-not-exist, you will see something like the following:

The Vue router hits the wildcard route which redirects the browser to /404.

Our previous example with an invalid user id still isn't working, because technically the route is valid. We need to update the UsersEdit component to catch failed requests in the created() callback and send the user to the 404 route:

created() {
  api.find(this.$route.params.id)
     .then((response) => {
         this.loaded = true;
         this.user = response.data.data;
     })
     .catch((err) => {
       this.$router.push({ name: '404' });
     });
}

Now if you make a request directly to a URI like /users/2000/edit you should see the app redirect to the 404 page instead of hanging on the “Loading…” UI in the UsersEdit component.

API Client Options

Although our dedicated users.js HTTP client might be considered overkill in a small application, I think the separation has already served us well as we use the API module in multiple components. I discuss this idea at great length in my article Building Flexible Axios Clients if you want to learn all the details of what a flexible client affords.

Without changing the external API of our client, we can change how the client works under the hood. For example, we can create an Axios client instance with customizable configuration and defaults:

import axios from 'axios';

const client = axios.create({
  baseURL: '/api',
});

export default {
  all(params) {
    return client.get('users', params);
  },
  find(id) {
    return client.get(`users/${id}`);
  },
  update(id, data) {
    return client.put(`users/${id}`, data);
  },
  delete(id) {
    return client.delete(`users/${id}`);
  },
};

Now I can swap out the baseURL with some configuration later if I want to customize the way the entire module works without affecting the methods.
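
For example, the base URL could come from an environment variable at build time; the MIX_API_URL name here is just an assumption for illustration, not something the project defines:

import axios from 'axios';

// Fall back to the relative /api prefix when no environment override is set.
const client = axios.create({
  baseURL: process.env.MIX_API_URL || '/api',
});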

What’s Next

We learned how to delete users and respond to a successful deletion on the frontend via Vue Router. We introduced programmatic navigation through the this.$router property, which is available because we registered Vue Router with Vue.use(VueRouter) in our main app.js file.

Next, we will turn to building user creation to wrap up learning how to perform basic create, read, update, and delete (CRUD) actions. At this point you should have all the tools you need to complete creating new users on your own, so feel free to try building this functionality before the publication of the next article in this series.

Building a Vue SPA with Laravel Part 4

We left off building a real users endpoint and learned about a new way to fetch component data with Vue router in part 3. Now we’re ready to move our attention to creating CRUD functionality for our users—this tutorial will focus on editing existing users.

Along with working through our first form, we will get a chance to look at defining a dynamic Vue route. The dynamic part of our route will be the user’s ID which matches his or her database record. For editing a user, the Vue route will look like this:

/users/:id/edit

The dynamic part of this route is the :id parameter, which will depend on the user’s ID. We are going to use the id field from the database, but you could also use a UUID or something else.

The Setup

Before we focus on the Vue component, we need to define a new API endpoint to fetch an individual user, and then later we’ll need to specify another endpoint to perform the update.

Open the routes/api.php routes file and add the following route below the index route that fetches all users:

Route::namespace('Api')->group(function () {
    Route::get('/users', 'UsersController@index');
    Route::get('/users/{user}', 'UsersController@show');
});

Using Laravel’s implicit route model binding, our controller method is straightforward. Add the following method to the app/Http/Controllers/Api/UsersController.php file:

// app/Http/Controllers/Api/UsersController

public function show(User $user)
{
    return new UserResource($user);
}

Requesting a user at something like /api/users/1 will return the following JSON response:

{
    "data": {
        "name": "Antonetta Zemlak",
        "email":"znikolaus@example.org"
    }
}

Our UserResource from Part 3 needs to be updated to include the id column, so you should update the app/Http/Resources/UserResource.php file to include the id array key. Here's the entire updated file:

<?php

namespace App\Http\Resources;

use Illuminate\Http\Resources\Json\Resource;

class UserResource extends Resource
{
    /**
     * Transform the resource into an array.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return array
     */
    public function toArray($request)
    {
        return [
            'id' => $this->id,
            'name' => $this->name,
            'email' => $this->email,
        ];
    }
}

Now our /api/users and /api/users/{user} routes will respond with the id field, which we need to identify the users in our routes.

Defining the UsersEdit Vue Component

With the show route in place, we can turn our attention to defining the frontend Vue route and the accompanying component. Add the following route definition to the routes in resources/assets/js/app.js. Here's a snippet importing the UsersEdit component (which we have yet to create), along with the entire router instance:

import UsersEdit from './views/UsersEdit';

// ...

const router = new VueRouter({
    mode: 'history',
    routes: [
        {
            path: '/',
            name: 'home',
            component: Home
        },
        {
            path: '/hello',
            name: 'hello',
            component: Hello,
        },
        {
            path: '/users',
            name: 'users.index',
            component: UsersIndex,
        },
        {
            path: '/users/:id/edit',
            name: 'users.edit',
            component: UsersEdit,
        },
    ],
});

We’ve added the users.edit route to the end of the routes configuration.

Next, we need to create the UsersEdit component at resources/assets/js/views/UsersEdit.vue with the following component code:

<template>
  <div>
      <form @submit.prevent="onSubmit($event)">
        <div class="form-group">
            <label for="user_name">Name</label>
            <input id="user_name" v-model="user.name" />
        </div>
        <div class="form-group">
            <label for="user_email">Email</label>
            <input id="user_email" type="email" v-model="user.email" />
        </div>
        <div class="form-group">
            <button type="submit">Update</button>
        </div>
    </form>
  </div>
</template>
<script>
export default {
  data() {
    return {
      user: {
        id: null,
        name: "",
        email: ""
      }
    };
  },
  methods: {
    onSubmit(event) {
        // @todo form submit event
    }
  },
  created() {
      // @todo load user details
  }
};
</script>

Let's focus on the template portion first: we wrap the <form> in an enclosing div because soon we'll need to conditionally show the form after loading the user's data.

The <form> tag has a placeholder @submit event, and we've defined an onSubmit() method handler that takes an event object. The last thing I'll mention is the v-model attributes on the <input> elements, which map to the accompanying user object in the component's data. We've stubbed out the default values for id, name, and email.

At this point if you load up /users/1/edit you’ll see an empty form rendered:

We intend on editing existing users, so our next step is figuring out how to grab the dynamic :id property from the route and loading the user’s data from the UsersEdit.vue component.

Loading User Details with a Dedicated Client

Before we load the user data in the component, we’re going to go on a side-quest to extract the /api/users resource into a dedicated API module that we can use to query all users, individual users, and update users.

First, we're going to create a new folder and file to house the API modules for our backend. You can create these files in any way you please; we'll demonstrate from a *nix command line:

mkdir -p resources/assets/js/api/
touch resources/assets/js/api/users.js

The users.js module is going to expose some functions we can call to perform operations on the /api/users resource. This module is going to be relatively simple, but later it can allow you to do any mapping, data manipulation, etc., before or after the API request. This file serves as a repository of reusable API operations:

import axios from 'axios';

export default {
    all() {
        return axios.get('/api/users');
    },
    find(id) {
        return axios.get(`/api/users/${id}`);
    },
    update(id, data) {
        return axios.put(`/api/users/${id}`, data);
    },
};

Now we can use the same module to get all users, as well as find and update individual users:

// Get all users
client.all().then((data) => mapData);

// Find a user
client.find(userId);

For now, the all() method doesn’t accept any pagination query params, but I’ll leave it up to you to implement pagination and replace what we have on the UsersIndex.vue component with our new all() client function.
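
For example, a pagination-aware version might look something like this; the page parameter and the fetchData() method name in UsersIndex.vue are assumptions for illustration:

// api/users.js – accept an optional page number and send it as a query string parameter
all(page = 1) {
    return axios.get('/api/users', { params: { page } });
},

// UsersIndex.vue – call the client with the desired page
fetchData(page) {
    api.all(page).then((response) => {
        this.users = response.data.data;
    });
},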

Loading the User from the UsersEdit Component

Now that we have a reusable—albeit very basic—API client, we can put it to work to load the user data when the edit page is rendered.

We originally stubbed out a created() function on our component, which is where we’ll request the user’s data now:

// UsersEdit.vue Component
<script>
import api from '../api/users';

export default {
  // ...
  created() {
      api.find(this.$route.params.id).then((response) => {
        this.loaded = true;
        this.user = response.data.data;
      });
  }
}
</script>

Our created() callback calls the users.js client find() function which returns a promise. In the Promise callback, we set a loaded data property (which we haven’t created yet) and set the this.user data property.

Let’s add the loaded property to our data key and set it to false by default:

data() {
  return {
    loaded: false,
    user: {
      id: null,
      name: "",
      email: ""
    }
  };
},

Since our component loads up the data inside of created() we’ll show a conditional “loading” message on the component initially:

<div v-if="! loaded">Loading...</div>
<form @submit.prevent="onSubmit($event)" v-else>
<!-- ... -->
</form>

At this point if you refresh the page, the component will briefly flash a Loading... message:

And then the user’s data should populate the form:

The API is very quick, so if you want to verify that the condition is working, you can call setTimeout to delay the setting of the this.user data property:

api.find(this.$route.params.id).then((response) => {
    setTimeout(() => {
      this.loaded = true;
      this.user = response.data.data;
    }, 5000);
});

The above timeout will show the loading message for five seconds and then set the loaded and user data properties.

Updating the User

We’re ready to hook up the onSubmit() event handler and update the user via the PUT /api/users/{user} API endpoint.

First, let’s add the onSubmit() code and then we’ll move to the Laravel backend to make the backend perform the update on the database:

onSubmit(event) {
  this.saving = true;

  api.update(this.user.id, {
      name: this.user.name,
      email: this.user.email,
  }).then((response) => {
      this.message = 'User updated';
      setTimeout(() => this.message = null, 2000);
      this.user = response.data.data;
  }).catch(error => {
      console.log(error)
  }).then(_ => this.saving = false);
},

We’ve called the api.update() function with the current user’s ID, and passed the name and email values from the bound form inputs.

We then chain a callback on the Promise object to set the success message and set the updated user data after the API succeeds. After 2000 milliseconds we clear the message which will effectively hide the message in the template.

For now, we are catching any errors and logging them to the console. In the future, we may go back and cover handling errors such as server failure or validation errors, but for now, we’ll skip over this to focus on the success state.
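
When we do get there, catching Laravel's 422 validation responses might look roughly like the following; the errors data property is an assumption and isn't defined on the component yet:

.catch((error) => {
    if (error.response && error.response.status === 422) {
        // Laravel returns validation messages keyed by field name.
        this.errors = error.response.data.errors;
    } else {
        console.log(error);
    }
})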

We use this.saving to determine if our component is in the process of updating the user. Our template ensures the submit button is disabled when a save is in progress to avoid double submissions with a bound :disabled property:

<div class="form-group">
  <button type="submit" :disabled="saving">Update</button>
</div>

Once the API request is finished, the last thing we do is set this.saving back to false by chaining another then() callback after the catch. We need to reset this property so the component can submit the form again. The last then() uses the _ underscore variable name, a convention found in several languages to indicate that an argument exists but isn't used. You could also define the arrow function with empty parentheses:

.then(() => this.saving = false);

We’ve introduced two new data properties that we need to add to our data() call:

data() {
  return {
    message: null,
    loaded: false,
    saving: false,
    user: {
      id: null,
      name: "",
      email: ""
    }
  };
},

Next, let’s update our <template> to show the message when it’s set:

<template>
  <div>
      <div v-if="message" class="alert">{{ message }}</div>
      <div v-if="! loaded">Loading...</div>
      <form @submit.prevent="onSubmit($event)" v-else>
        <div class="form-group">
            <label for="user_name">Name</label>
            <input id="user_name" v-model="user.name" />
        </div>
        <div class="form-group">
            <label for="user_email">Email</label>
            <input id="user_email" type="email" v-model="user.email" />
        </div>
        <div class="form-group">
            <button type="submit" :disabled="saving">Update</button>
        </div>
    </form>
  </div>
</template>

Finally, let’s add a few styles for the alert message at the bottom of the UsersEdit.vue file:

<style lang="scss" scoped>
$red: lighten(red, 30%);
$darkRed: darken($red, 50%);
.form-group label {
  display: block;
}
.alert {
    background: $red;
    color: $darkRed;
    padding: 1rem;
    margin-bottom: 1rem;
    width: 50%;
    border: 1px solid $darkRed;
    border-radius: 5px;
}
</style>

We’ve finished updating our frontend component to handle a submitted form and update the template accordingly after the API request succeeds. We now need to turn our attention back to the API to wire it all up.

Updating Users in the API Backend

We’re ready to connect all the dots by defining an update method on our User resource controller. We are going to define necessary validation on the server side. However, we aren’t going to wire it up on the frontend yet.

First, we will define a new route in the routes/api.php file for a PUT /api/users/{user} request:

Route::namespace('Api')->group(function () {
    Route::get('/users', 'UsersController@index');
    Route::get('/users/{user}', 'UsersController@show');
    Route::put('/users/{user}', 'UsersController@update');
});

Next, the UsersController@update method will use the request object to validate the data and return the fields we intend to update. Add the following method to the app/Http/Controllers/Api/UsersController.php file:

public function update(User $user, Request $request)
{
    $data = $request->validate([
        'name' => 'required',
        'email' => 'required|email',
    ]);

    $user->update($data);

    return new UserResource($user);
}

Just like the show() method, we are using implicit route model binding to load the user from the database. After validating the required fields, we update the user model and return the updated model by creating a new instance of the UserResource class.

A successful request to the backend returns the user's updated JSON data, which we use to update the this.user property in the Vue component:

{
  "data": {
    "id": 1,
    "name":"Miguel Boyle",
    "email":"hirthe.joel@example.org"
  }
}

Navigating to the Edit Page

We've been requesting the /users/:id/edit page directly; however, we haven't added a link to it anywhere in the interface. Feel free to try to figure out how to dynamically navigate to the edit page on your own before seeing how I did it.

Here’s how I added the edit link for each user listed on the /users index page in the UsersIndex.vue template we created back in Part 2:

<ul v-if="users">
    <li v-for="{ id, name, email } in users">
        <strong>Name:</strong> {{ name }},
        <strong>Email:</strong> {{ email }} |
        <router-link :to="{ name: 'users.edit', params: { id } }">Edit</router-link>
    </li>
</ul>

We destructure the user object in our loop to get the id, name, and email properties. We use the <router-link/> component to reference our users.edit named route, passing the id parameter in the params key.

To better visualize the <router-link> properties, here’s the route definition from the app.js file we added earlier:

{
  path: '/users/:id/edit',
  name: 'users.edit',
  component: UsersEdit,
},

If you refresh the app or visit the /users endpoint, you’ll see something like the following:

Putting it All Together

If you edit a user now, the backend should save it and respond with a 200 success if all went well. After the PUT request succeeds you should see the following for two seconds:

Here’s the final UsersEdit.vue component in full for your reference:

<template>
  <div>
      <div v-if="message" class="alert">{{ message }}</div>
      <div v-if="! loaded">Loading...</div>
      <form @submit.prevent="onSubmit($event)" v-else>
        <div class="form-group">
            <label for="user_name">Name</label>
            <input id="user_name" v-model="user.name" />
        </div>
        <div class="form-group">
            <label for="user_email">Email</label>
            <input id="user_email" type="email" v-model="user.email" />
        </div>
        <div class="form-group">
            <button type="submit" :disabled="saving">Update</button>
        </div>
    </form>
  </div>
</template>
<script>
import api from '../api/users';

export default {
  data() {
    return {
      message: null,
      loaded: false,
      saving: false,
      user: {
        id: null,
        name: "",
        email: ""
      }
    };
  },
  methods: {
    onSubmit(event) {
        this.saving = true;

        api.update(this.user.id, {
            name: this.user.name,
            email: this.user.email,
        }).then((response) => {
            this.message = 'User updated';
            setTimeout(() => this.message = null, 2000);
            this.user = response.data.data;
        }).catch(error => {
            console.log(error)
        }).then(_ => this.saving = false);
    }
  },
  created() {
      api.find(this.$route.params.id).then((response) => {
          this.loaded = true;
          this.user = response.data.data;
      });
  }
};
</script>
<style lang="scss" scoped>
$red: lighten(red, 30%);
$darkRed: darken($red, 50%);
.form-group label {
  display: block;
}
.alert {
    background: $red;
    color: $darkRed;
    padding: 1rem;
    margin-bottom: 1rem;
    width: 50%;
    border: 1px solid $darkRed;
    border-radius: 5px;
}
</style>

Homework

After the user update succeeds, we just reset the message after two seconds. Change the behavior to set the message and then redirect the user back to the previous location (i.e., the /users index page).

Second, add a "Back" or "Cancel" button to the bottom of the form that discards the form updates and navigates back to the previous page.

If you are feeling adventurous, display validation errors when the UsersEdit component sends an invalid request to the API. Clear the error messages after successfully submitting the form.

What’s Next

With updating users out of the way, we will move our attention to deleting users. Deleting a user will be helpful to demonstrate programmatically navigating after successful deletion. We will also look at defining a global 404 page now that we have dynamic routing for editing users.

If you’re ready, move on to Part 5.

Tips for Using Laravel’s Scheduler

Laravel’s task scheduling features are well documented and give you the full power of cron in a fluent API. The documentation covers most of what you need to get up and running with the scheduler quickly, however, there are a few underlying concepts I’d like to cover related to cron that will help solidify your understanding of how Laravel determines which scheduled tasks should run.

Understanding Cron

At the foundation of Laravel’s scheduler, you need to understand how to schedule tasks on a server through Cron’s somewhat confusing syntax.

Before we dive into understanding cron better and resources you can use to familiarize yourself with cron, let’s look at the essential pieces of the scheduler.

First, you define scheduled tasks through your Laravel application’s App\Console\Kernel::schedule() method:

/**
 * Define the application's command schedule.
 *
 * @param  \Illuminate\Console\Scheduling\Schedule  $schedule
 * @return void
 */
protected function schedule(Schedule $schedule)
{
    // $schedule->command('inspire')
    //         ->hourly();
}

You can use this method to define all of the scheduled tasks that need to run. The Schedule instance method command() returns an instance of the Illuminate\Console\Scheduling\Event class.

If you want to tinker with or debug an instance of the event class, you can dump it like the following example:

$event = $schedule->command('inspire')
                  ->hourly();

dd($event->expression); // "0 * * * *"

To trigger this method, run artisan:

> php artisan
"0 * * * *"

The event instance has an expression property that stores the cron representation of the task after the fluent API calls.

Keep this example’s value in mind while we talk about cron.

Laravel shields you from cron with the scheduler's fluent API (in our example, the hourly() method), but understanding cron will help you troubleshoot what is going on under the hood.

Here’s a text representation that should clarify how cron works if you’re not familiar (even if you are I bet this is still useful):

# Use the hash sign to prefix a comment
# +---------------- minute (0 - 59)
# |  +------------- hour (0 - 23)
# |  |  +---------- day of month (1 - 31)
# |  |  |  +------- month (1 - 12)
# |  |  |  |  +---- day of week (0 - 7) (Sunday=0 or 7)
# |  |  |  |  |
# *  *  *  *  *  command to be executed
#-----------------------------------------------------------

Using the above example of "0 * * * *", this task will run at the zero-minute mark of every hour, on every day of every month, and on every day of the week.

Cron also has some other formats that might feel weird, such as the expression generated using Laravel’s quarterly() method:

0 0 1 1-12/3 *

Running a task quarterly means it will run at 00:00 on the first day of the month in every third month from January to December. The weird 1-12/3 syntax is called a “step value” which can be used in conjunction with ranges. The crontab – Linux manual page describes step values as follows:

Step values can be used in conjunction with ranges. Following a range with "/<number>" specifies skips of the number's value through the range. For example, "0-23/2" can be used in the hours field to specify command execution every other hour (the alternative in the V7 standard is "0,2,4,6,8,10,12,14,16,18,20,22"). Steps are also permitted after an asterisk, so if you want to say "every two hours", just use "*/2".

I’d encourage you to read through the man-page, or at least keep it handy if you run into a situation where you need to understand the underlying cron schedule for a task better.
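
For reference, here are a few of the scheduler's fluent methods alongside the cron expressions they generate; you can verify each one by dumping $event->expression as shown earlier:

$schedule->command('inspire')->everyMinute();    // * * * * *
$schedule->command('inspire')->hourly();         // 0 * * * *
$schedule->command('inspire')->dailyAt('13:00'); // 0 13 * * *
$schedule->command('inspire')->weekly();         // 0 0 * * 0
$schedule->command('inspire')->monthly();        // 0 0 1 * *
$schedule->command('inspire')->quarterly();      // 0 0 1 1-12/3 *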

Understanding the Scheduling Event API

Laravel has some excellent fluent APIs that allow you to chain multiple method calls together. The scheduling Event class is no different. However, there are some nuances with some of the combinations you might use.

Take the following as an example to better illustrate: let’s say that we want a command to run hourly, but only on Monday, Wednesday, and Friday:

$schedule->command('inspire')
         ->hourly()
         ->mondays()
         ->wednesdays()
         ->fridays();

You might think the above command achieves the correct cron, but that’s not how it works. In the above example, the last “day” method called is fridays(), thus, here’s what the cron looks like:

0 * * * 5

The above task will run hourly, but only on Friday.

Before I show you the correct method call to achieve what we want, let's look at the Event::fridays() method. The fridays() method, like many others, comes from Laravel's ManagesFrequencies trait:

/**
 * Schedule the event to run only on Fridays.
 *
 * @return $this
 */
public function fridays()
{
    return $this->days(5);
}

The method calls another method on the same trait, days(), which looks like the following at the time of writing:

/**
 * Set the days of the week the command should run on.
 *
 * @param  array|mixed  $days
 * @return $this
 */
public function days($days)
{
    $days = is_array($days) ? $days : func_get_args();

    return $this->spliceIntoPosition(5, implode(',', $days));
}

You can look at the details of how spliceIntoPosition() works, but all of the “day” methods overwrite each other, so the last one called sticks.

Here’s how you’d write the correct schedule using Laravel’s fluent API:

$schedule->command('inspire')
         ->hourly()
         ->days([1, 3, 5]);

Debugging this event instance yields the following expression:

0 * * * 1,3,5

Bingo!

Using Cron Directly

Most of the time, I think people prefer to use Laravel's fluent API. However, the scheduling Event class includes a cron() method to set the expression directly:

$schedule->command('inspire')
         ->cron('0 * * * 1,3,5');

I’d argue that Laravel’s fluent API is a more readable way to define the command, but you can get the full power of cron directly with this method if you’d rather use cron syntax.

Crontab Guru

For advanced use-cases and better understanding how your scheduled tasks are going to run, consider debugging the underlying cron expression and using a tool like crontab.guru – the cron schedule expression editor.

Building a Laravel Translation Package – Launching the Package

With the pre-launch checklist completed, it’s time to go ahead and make our package available for others to use.

Chances are, the consumers of the package will be using Composer to manage the dependencies in their project. To make the package compatible with composer, there are a few steps we need to follow.

Tagging a Release

To allow our users to manage their dependencies effectively, it’s important to properly release new versions of the package.

The most common approach to versioning code is to follow Semantic Versioning. This defines a set of ‘rules and requirements that dictate how version numbers are assigned and incremented’. From the website, these are defined as:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

If you are interested, the full definition can be found on the website.

Deciding which version to tag your initial release can be tricky and I recently saw an interesting thread on Twitter discussing the issue.

Semantic Versioning suggests that if you are already using the package in production, you should go straight to 1.0.0; if the package is still in development, the initial release should be 0.1.0.

There is more than one way to tag a release. For the purposes of this article, I’m going to show you how to do so on GitHub.

From the root of your repository, click on ‘Releases’ followed by ‘Draft a new release’.

There, enter your desired version number in the 'Tag version' field and select the target you want to reference. This can be a branch or an individual commit. If you wish, you can also provide a title; typically, I use the version number.

You can also provide release notes, which can be a nice way to let your users know exactly what has changed and maybe even thank your contributors.
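
If you prefer tagging from the command line rather than the GitHub UI, the equivalent workflow looks roughly like this (the version number is only an example):

git tag -a v0.1.0 -m "Initial release"
git push origin v0.1.0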

Submitting to Packagist

Now, to allow users to easily install the package using Composer, it’s common to publish it to Packagist.

To do this, login to your Packagist account and click ‘Submit’ in the main navigation. Enter the URL of your git repository when prompted.

Packagist will pull in all the relevant information from the composer.json file and publish the package to the repository, ready for people to use. The package will now have its own page on the site providing users with details such as the number of installations, versions and latest activity.
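
As a reminder of what Packagist reads, a package's composer.json typically looks something like this; the name, description, and namespace here are purely illustrative:

{
    "name": "vendor/laravel-translation",
    "description": "A translation management package for Laravel",
    "license": "MIT",
    "require": {
        "php": ">=7.1"
    },
    "autoload": {
        "psr-4": {
            "Vendor\\Translation\\": "src/"
        }
    }
}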

Summary

With the package published and ready for people to use, we are at the end of this series of articles.

We now move in to business as usual, releasing new versions of the package and dealing with issues and pull requests submitted from the users.

I really hoped you enjoyed this series and have picked up some useful tips along the way. As usual, should you have any questions or comments, please send them across on Twitter.

Tips to Speed Up Your PHPUnit Tests

Having a fast test suite can be just as important as having a fast application. As a developer, getting feedback quickly about the state of your code allows for a much quicker development turnaround. Here we are going to run through some tips you can implement today to make your tests run faster.

The example test suites have been made intentionally slow to simulate a broader set of tests and also to emphasize the improvements possible. Your real-world mileage may vary.

ParaTest

This package is a PHPUnit extension that runs your test suite, but instead of running each test case in series (one after the other) like PHPUnit does, it can utilize your machine’s CPU cores to run them in parallel.

To get started with ParaTest you will want to install it as a dev dependency via composer.

composer require --dev brianium/paratest

Now, all we need to do is call ParaTest, just like we would call PHPUnit. It will automatically determine how many processes to utilize based on the number of cores available on your machine.
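
A bare-bones run looks just like invoking PHPUnit:

./vendor/bin/paratest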

You can see in the console output above that it has determined that five parallel processes will be used to run the test suite. Comparatively, below is the same test suite run in series with PHPUnit.

1.49 seconds versus 6.15 seconds!

Although ParaTest determines the number of processes to spin up by itself, you may want to play around with this number to find the optimal setup for your machine. To specify the number of processes, use the --processes option. Try adding and removing processes, as more does not always result in faster tests.

./vendor/bin/paratest --processes 6

Caveat: Before using ParaTest with a test suite that is hitting a database, you need to consider how you are preparing the database. If you are using Laravel’s RefreshDatabase trait, you will run into issues as a test may be rolling back or migrating the database as another is trying to write to it. Instead, skip persisting data by utilizing the DatabaseTransactions trait, which also does not attempt to change the database structure during the test suite run.
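
A minimal sketch of that swap in a test class (the class name is just an example):

use Illuminate\Foundation\Testing\DatabaseTransactions;
use Tests\TestCase;

class UserApiTest extends TestCase
{
    // Wrap each test in a database transaction that is rolled back afterwards,
    // rather than migrating or rolling back the schema like RefreshDatabase does.
    use DatabaseTransactions;
}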

Re-running failed tests

PHPUnit has a handy feature that allows you to re-run only the tests that failed in the previous run. If you are doing red green TDD style development, this is going to speed up your development cycle. Let’s take a look at this feature by starting with a test suite that is passing all the existing tests.

Next, you add a new test that, going by the red-green-refactor model, fails as expected:

After making the changes to your codebase that you believe will make this new test pass, you want to re-run the suite to verify it is functioning as expected. The problem is that this test suite already takes 1.3 seconds to run, so as we continue to add more tests the time spent waiting to verify your code increases.

Wouldn’t it be great if we could run only the failed test we are trying to address? Luckily for us PHPUnit v7.3 added the ability to do this.

To get this working add cacheResult="true" to your phpunit.xml configuration. This tells PHPUnit always to remember which tests previously failed.

<?xml version="1.0" encoding="UTF-8"?>
<phpunit cacheResult="true"
         backupGlobals="false"
         ...>

Now when we run our test suite, PHPUnit will remember which tests failed, and using the following options we can re-run only those tests:

./vendor/bin/phpunit --order-by=defects --stop-on-defect

We no longer need to wait around for the entire suite to run to see if the one test we are attempting to address is passing.

It is also a good idea to add the cache file .phpunit.result.cache to your .gitignore so that it does not end up being committed to your repository.

Group slow tests

PHPUnit allows you to add tests to different “groups” with the @group annotation. If you have a bunch of tests that are particularly slow, it might be good to add them all to the same group.

class MyTest extends TestCase
{
    public function test_that_is_fast()
    {
        $this->assertTrue(true);
    }

    /**
     * @group slow
     */
    public function test_that_is_slow()
    {
        sleep(10);

        $this->assertTrue(true);
    }

    /**
     * @group slow
     */
    public function test_that_is_slow_2_adrians_revenge()
    {
        sleep(10);

        $this->assertFalse(false);
    }
}

In this example, we have two tests that are going to take 10 seconds each to run. The last thing we want is to be running these tests during our development cycle; especially if you are doing test-driven development, you need your test suite to be snappy.

As the two slower tests are both in the same group, you can now exclude them from a test run by using PHPUnit’s --exclude-group option.

./vendor/bin/phpunit --exclude-group slow

This command will run all your tests except for those in the slow group which will make your tests run much faster. Another benefit of grouping your tests like this is that you are documenting the slow tests so hopefully you can come back and improve them.

It is important however to have some checks in place to ensure that all your tests, including the slow tests, are run before deploying to production. A good way of doing this is having a CI pipeline setup that runs all your tests.

Filtering tests

PHPUnit has a --filter option which accepts a pattern that determines which tests are run. If, for example, you have all your tests namespaced, you can run a specific subset of tests by specifying a namespace. The following command will only run tests in the Tests\Unit\Models namespace and exclude all others.

./vendor/bin/phpunit --filter 'Tests\\Unit\\Models'

The --filter option is flexible and allows filtering by methodName, Class::methodName, and even by file path with /path/to/my/test.php. You should review the PHPUnit docs for this option and check out what is possible.
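
A couple of additional examples (the test and class names are hypothetical):

# Run a single test method by name
./vendor/bin/phpunit --filter 'test_user_can_be_updated'

# Run a single method on a specific test class
./vendor/bin/phpunit --filter 'UserTest::test_user_can_be_updated'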

Password hash rounds

Laravel uses the bcrypt password hashing algorithm by default, which is by design slow and expensive on system resources. If your tests verify user passwords, you could potentially trim more time off your test run by setting the number of rounds the algorithm uses, as the more rounds it performs, the longer it takes.

If you keep your app in sync with the latest changes in the laravel/laravel project you will find that the number of hashing rounds is customizable with an environment variable and is already set to 4, the minimum bcrypt allows, in the phpunit.xml file.
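
In a recent laravel/laravel skeleton, that entry in phpunit.xml looks like this:

<php>
    ...
    <env name="BCRYPT_ROUNDS" value="4"/>
</php>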

However, if you have not kept up with the latest changes, you can set it in the CreatesApplication trait with the Hash facade.

public function createApplication()
{
    $app = require __DIR__.'/../bootstrap/app.php';

    $app->make(Kernel::class)->bootstrap();

    // set the bcrypt hashing rounds...
    Hash::rounds(4);

    return $app;
}

You can see some pretty neat results of this change in the comments of this tweet from Taylor.

In-memory database

Utilizing an in-memory SQLite database is another way to increase the speed of your tests that hit the database. You can get started with this quickly by adding these two environment keys to your phpunit.xml configuration.

<php>
    ...
    <env name="DB_CONNECTION" value="sqlite"/>
    <env name="DB_DATABASE" value=":memory:"/>
</php>

Caveat: Although this might seem like an easy win, you should consider having database parity with your production environment. If you are using something like MySQL in production, then you should be aware of the potential issues that could be introduced by testing with a different database, such as SQLite. I go into more detail on the differences presented in my feature test suite setup. tl;dr; I believe having parity with your production environment during testing is more important than gaining a small speed increase.

Disable Xdebug

If you are not using Xdebug regularly, you might consider disabling it until you need it. It slows down PHP execution, and as a result, your test suite. If you are using it for debugging on the daily, disabling it for test runs probably isn’t a great option – but it is something to keep in mind when it comes to the speed of your test suite.
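
One low-friction approach, assuming you are on Xdebug 3, is to switch it off just for the test run instead of uninstalling the extension:

XDEBUG_MODE=off ./vendor/bin/phpunit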

You can see in this test suite a substantial speed increase once we disable Xdebug. This is the suite running with Xdebug enabled:

and the same test suite with Xdebug disabled:

Fix your slow tests

The best tip in all of this is of course: fix your slow tests! If you are struggling to pinpoint which tests are causing your test suite to be slow, you might want to look at PHPUnit Report. It is an open-source tool that allows you to visualize your test suite’s performance by generating a cloud, shown below, with the bigger bubbles representing slow tests. This will enable you to find the slowest tests in your suite and incrementally improve their performance.

Building a Laravel Translation Package – Pre-launch Checklist

In the last part of the series, we finished up building the Laravel Translation package. With this completed, we are ready to start thinking about releasing it to the world. However, before we do, there are a few important steps we need to take.

Documentation

It is really important to give users of the package a clear set of instructions on how it should be used. The scope of this documentation will depend on the size and complexity of the package you are creating. At the very minimum, I would suggest adding a README.md file to the root of the project detailing how to install and get started with the project. You can see the README.md for the Laravel Translation package here.

Good documentation lowers your users' barrier to entry, which means they are more likely to use the package and recommend it to others. It also has the added benefit of reducing the number of issues opened on your repository, as users are more likely to find the answer to their problem in the docs.

I cannot highlight enough the importance of good documentation. If you have issues arising as a result of missing documentation, make a point of adding it to prevent the issue arising again.

Contribution guidelines

Once you have released your package, you will very likely get helpful members of the community wanting to assist in making the package even better. This might be fixing open issues, adding to documentation, or even adding brand new features.

These contributions will come in the form of pull requests, which you will need to review to decide whether to accept or reject them, or to inform the contributor of changes you would like made before they are ready to merge.

To make your life easier, it is a good idea to add a CONTRIBUTING.md file which outlines a set of guidelines a contributor should follow.

This can include things like the code style which should be adopted, branching and git strategy, testing requirements and anything else you would like your contributors to follow.

Issue templates

I recently found out about issue templates on GitHub which provide a nice way for you to outline to your users the information you require from them when they submit an issue to your project.

Out of the box, GitHub has support for bug reports and feature requests, but you are free to add your own custom templates.

I have applied the default setup to the Laravel Translation project. Now when a user attempts to raise an issue, they are asked to choose whether it's a bug report or a new feature request.

When they select one of these options, they are presented with a template detailing the information they should provide.

For a bug report, this includes steps to reproduce the issue along with details such as screenshots and affected browser and operating system.

For a feature request, it’s more about what the feature is and why you want it.

I'm not suggesting you have to use this feature. However, making sure you have all the information you need to understand an issue without a lengthy back and forth with the user is always helpful.

License

An open source package needs an open source license to protect its maintainers, contributors, and users.

There are many different license types from which to choose. Which one you end up picking will depend on how you want your package to be used, modified, and shared.

GitHub have created a site to help open source maintainers choose an open source license for their project. By their own admission, it’s not a fully comprehensive directory but highlights those most likely to be suitable and points you in the direction of others should their recommendation not be sufficient.

Continuous integration

There is a huge number of continuous integration tools out there that can help with things like ensuring your code is not deployed when tests are failing, automatically fixing code style issues, measuring the quality of your code, and a myriad of others.

These tools integrate directly with GitHub and can be triggered to automatically run when code is committed to the repository.

This means when a contributor issues a new pull request to the project, you already have insight into its overall quality before even looking at the code.

What’s more, most of these tools have a free tier for open source projects.

Below are some that I personally recommend.

Travis CI

When code is committed to your repository, GitHub triggers Travis to build and test your code. You’ll be notified within minutes whether or not everything looks healthy. It works with pull requests too, so you can see if contributors have added any issues to your codebase.

Scrutinizer

Scrutinizer will analyse your code to see if you are using any unsafe language features. It also has features such as code browsing which provides IDE-like features when reviewing code.

StyleCI

StyleCI allows you to define your preferred code style. When code is committed, StyleCI automatically checks it against your guidelines and can fix any violations by submitting a pull request with the recommended changes. This gives you peace of mind that your code will never break the project's style guidelines.

Integration is usually straightforward and involves logging into the service, integrating with GitHub, and adding a configuration file to the root of your project. Once you are set up, these tools run as required when code is committed to the repository.

With all these things in place, we are just about ready to launch. Join me in the next article where I talk you through the process of making your package available for others to use. As always, if you have any questions, contact me on Twitter.

Tracking Vue Errors With Honeybadger in Laravel

I feel like Vue and Laravel were almost made to go together, and I've been using Vue for quite a while. I'm super stoked that Honeybadger recently shipped an official Vue.js integration. Let's walk through setting everything up so that if your Vue app catches fire, you can get it handled!

Set up Vue Error Reporting

Install the Honeybadger Vue package

# npm
npm add @honeybadger-io/vue --save

# yarn
yarn add @honeybadger-io/vue

Configure Honeybadger

To set this up, hop on over to assets/js/app.js and import Honeybadger’s Vue library.

import HoneybadgerVue from '@honeybadger-io/vue'

Next, we need to add it to Vue.

Vue.use(HoneybadgerVue, {
    apiKey: process.env.MIX_HONEYBADGER_API_KEY,
    environment: process.env.MIX_HONEYBADGER_ENVIRONMENT,
    revision: 'master'
});

I'm sure you've noticed all the references to local environment variables. Laravel Mix will pass through environment variables that are prefixed with MIX_, which makes this integration super smooth. We'll need to update the .env file (see the Laravel Mix docs for details).

If you are using honeybadger-io/honeybadger-laravel you’ll want to set up a new project for your Vue app.

MIX_HONEYBADGER_API_KEY="GCDWEMt9vhhpDsUT"
MIX_HONEYBADGER_ENVIRONMENT="${APP_ENV}"

All unhandled exceptions will now be reported to Honeybadger. Take a look at the official Vue Integration Guide for more detailed instructions and documentation.
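
If you catch an error yourself and still want to report it, the integration also exposes the client on your components; a quick sketch (treat the exact property name as an assumption and confirm it against the integration guide):

this.$honeybadger.notify(error);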

Don’t Forget Your Source Maps!

Getting good stack traces for front-end errors can be tricky. A good stack trace points to the line number in the original source file where the error occurred in your Vue project.

Unfortunately, modern build processes have multiple stages where your original source code gets chopped up, optimized, and compressed to squeeze the best performance from it. This is great for performance, but bad for debugging (read: error reporting). Source maps are the answer.

Honeybadger provides a pretty slick Webpack plugin that will push your source maps to Honeybadger and make tracking errors a breeze.

Install the Honeybadger Webpack package

# npm
npm add @honeybadger-io/webpack --save-dev

# yarn
yarn add @honeybadger-io/webpack

Add the plugin to your Webpack config:

Inside webpack.mix.js we’ll start by importing the source map library.

const HoneybadgerSourceMapPlugin = require('@honeybadger-io/webpack')

Next, we need to enable source maps for Laravel Mix.

mix.js('resources/js/app.js', 'public/js')
   .sass('resources/sass/app.scss', 'public/css')
   .sourceMaps()

We also have to extend Laravel Mix’s Webpack config to use the Honeybadger source map plugin.

mix.webpackConfig({
    plugins: [new HoneybadgerSourceMapPlugin({
    apiKey: process.env.MIX_HONEYBADGER_API_KEY,
    assetsUrl: process.env.MIX_HONEYBADGER_ASSET_URL,
    revision: 'master'
  })]
})

We've added a new variable, MIX_HONEYBADGER_ASSET_URL, so let's update the .env file.

MIX_HONEYBADGER_ASSET_URL=https://cdn.example.com/assets

Every time you build your project with npm run prod or yarn prod, the source map will be uploaded to Honeybadger. Check out the Honeybadger Webpack Plugin on GitHub for additional documentation.

Versioning Your Project

Notice that in the above examples, we set the revision: 'master' option in both the Vue and Webpack config. In Honeybadger, the revision is a unique version for your project; it’s what is used to match up the correct source map with the correct error. While master will technically work for your first build, you’ll want to come up with a better way to version your project in Honeybadger—every build should be unique.

Personally, I prefer Git commit SHAs, since they are always unique for the current build. According to Stack Overflow, here’s a quick way to snag your project’s current commit SHA.

In your webpack.mix.js let’s add the following environment variable.

process.env.MIX_HONEYBADGER_REVISION = require('child_process')
  .execSync('git rev-parse HEAD')
  .toString().trim()

Now we can update the Webpack config to use the new variable.

mix.webpackConfig({
    plugins: [new HoneybadgerSourceMapPlugin({
    apiKey: process.env.MIX_HONEYBADGER_API_KEY,
    assetsUrl: process.env.MIX_HONEYBADGER_ASSET_URL,
    revision: process.env.MIX_HONEYBADGER_REVISION
  })]
})

We can also circle back and update the Vue integration to use this new revision variable.

Vue.use(HoneybadgerVue, {
    ...
    revision: process.env.MIX_HONEYBADGER_REVISION
    ...
})

Wrapping Up

I’m stoked to see official Vue support for Honeybadger. It’s their mission to make Honeybadger the best way to monitor your production Vue applications. To unlock the full power of Honeybadger, use it to monitor Laravel, Vue, scheduled tasks, and external uptime.

Be the Honeybadger on Your Team — Start Your 15-day Free Trial


Many thanks to Honeybadger for sponsoring Laravel News

Building a Laravel Translation Package – Handling Missing Translation Keys

In the last instalment of this series, we talked about building the frontend translation management tool. In this article, we are going to move away from the frontend and follow the process of building another backend feature.

One of the most frustrating things about translation management in a Laravel application is forgetting to add translations to the corresponding language file. This has the undesirable result of either the translation key or the default language being rendered on the page rather than the translation for the current language.

To mitigate this issue, the Laravel Translation package provides a way to scan a project for translations that don’t exist in the translation files for all languages supported by the application.

We achieve the scanning part of this process by recursively looking through all files of the configured directories for instances of any of the Laravel translation retrieval methods (e.g. __(), trans()). These methods are captured using regular expressions and the translation strings extracted. These strings can be compared against the existing translations to see whether or not they already exist or need to be created.

Configuration

In the package configuration, there are two keys defined to make the scanner more flexible and efficient.

'translation_methods' => ['trans', '__']

The translation_methods option allows the user to provide an array of translation retrieval methods which the scanner uses to look for keys.

'scan_paths' => [app_path(), resource_path()]

The scan_paths key allows the user to define an array of directories for the scanner to look through when searching for translation keys.

Of course, it’s possible to use base_path() here to search through the whole project, but defining only the directories where you expect to find translations will add significant speed improvements to the scanning process.

Implementation

The scanning functionality is handled by a single class. This class accepts an instance of Laravel’s Illuminate\Filesystem\Filesystem, which it uses to traverse directories and interact with files, along with the array of translation methods and scan paths outlined above.

The class contains a single method findTranslations, which is responsible for carrying out the task. It utilises the following regular expression which is influenced by a combination of those found in Barry vd. Heuvel’s Laravel Translation Manager package and Mohamed Said’s Laravel Language Manager package.

$matchingPattern =
    '[^\w]'. // Must not start with any alphanum or _
    '(?<!->)'. // Must not start with ->
    '('.implode('|', $this->translationMethods).')'. // Must start with one of the functions
    "\(". // Match opening parentheses
    "[\'\"]". // Match " or '
    '('. // Start a new group to match:
    '.+'. // Match the translation key
    ')'. // Close group
    "[\'\"]". // Closing quote
    "[\),]";  // Close parentheses or new parameter

The method recursively iterates over all of the files in the provided directories using the regular expression to find instances of the provided translation retrieval methods, returning an array of all of the matches.
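
To make that concrete, here’s a minimal sketch (not the package’s exact code) of how the pattern could be applied to a single file; the preg_match_all() delimiters and flags are assumptions:

$filesystem = new \Illuminate\Filesystem\Filesystem;

// Read one of the files found in the configured scan paths.
$contents = $filesystem->get(resource_path('views/welcome.blade.php'));

if (preg_match_all("/{$matchingPattern}/siU", $contents, $matches)) {
    // Group 1 holds the matched method names (trans, __) and group 2 holds
    // the captured translation keys, e.g. ['Welcome', 'validation.accepted'].
    $keys = $matches[2];
}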

On each match, an additional check is performed to determine whether the match is a group translation (php array style) or single translation (JSON style). This is done by simply checking whether or not the match contains a period. If so, everything before the period is the file and everything after is the key (e.g. you could find the translation for validation.accepted by getting the accepted key from the validation.php file).
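
Using the keys captured above, a rough sketch of that classification step (again, not the package’s exact code) might look like this; the values are placeholders that the driver fills in when writing to the language files:

$groupTranslations = [];
$singleTranslations = [];

foreach ($keys as $key) {
    if (\Illuminate\Support\Str::contains($key, '.')) {
        // e.g. 'validation.accepted' lives in validation.php under the 'accepted' key
        [$file, $groupKey] = explode('.', $key, 2);
        $groupTranslations[$file][$groupKey] = $groupKey;
    } else {
        // e.g. 'Welcome' lives in the single (JSON) language file
        $singleTranslations[$key] = $key;
    }
}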

In the end, we have an array which looks similar to the following:

[
    'single' => [
        'Welcome' => 'Welcome'
    ],
    'group' => [
        'validation' => [
            'accepted' => 'The :attribute must be accepted.',
            ...
        ],
    ],
];

Of course, doing this gives us every translation found in the configured paths, so how do we go about determining which ones are missing? It’s simple really: we now have all of the tagged translations in the app in an array, and we can use the file driver created earlier in the series to get all of the translations in the language files in an array format.

Not only that, but the format of the two arrays will be the same, so all we need to do is diff the two. We can conclude that anything in the array of translations gathered from scanning the app which doesn’t appear in the language file translations needs to be added to the relevant language file.
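
As a rough sketch, the diff could look something like the following. The findMissingTranslations() name matches the call in the next snippet, but the $this->scanner property and the allTranslations() method on the file driver are assumptions made for illustration:

public function findMissingTranslations($language)
{
    // Keys tagged in the codebase, in the single/group shape shown above.
    $scanned = $this->scanner->findTranslations();

    // Keys already present in the language files, in the same shape.
    $existing = $this->allTranslations($language);

    $missing = ['single' => [], 'group' => []];

    // Single (JSON style) keys that were scanned but are not yet in the files.
    $missing['single'] = array_diff_key($scanned['single'] ?? [], $existing['single'] ?? []);

    // Group (PHP array style) keys, diffed file by file.
    foreach ($scanned['group'] ?? [] as $file => $keys) {
        $diff = array_diff_key($keys, $existing['group'][$file] ?? []);

        if (! empty($diff)) {
            $missing['group'][$file] = $diff;
        }
    }

    return $missing;
}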

To do this, we simply iterate over the missing translations and utilise the methods already built in the file driver to add them.

$missingTranslations = $this->findMissingTranslations($language);

if (isset($missingTranslations['single'])) {
    foreach ($missingTranslations['single'] as $key => $value) {
        $this->addSingleTranslation($language, $key);
    }
}

if (isset($missingTranslations['group'])) {
    foreach ($missingTranslations['group'] as $group => $keys) {
        foreach ($keys as $key => $value) {
            $this->addGroupTranslation($language, "{$group}.{$key}");
        }
    }
}

This article brings us to the end of the file-based functionality of the translation package. Next time, we will utilise the groundwork we have in place to start our database driver. See you next time, and as usual, if you have any questions, please feel free to contact me on Twitter.

Speeding Up PHP with OPcache in Docker

If you’re on Docker for Mac or Docker for Windows, you might see noticeable slowness and a high time to first byte (TTFB), depending on your application’s setup. One of the most important things you can do to improve performance is to enable the OPcache module (regardless of the development environment). There are other tricks, like volume caching (where possible), but OPcache is a win you want in any environment where you run PHP applications.

OPcache Settings

When you enable the OPcache module, you need to consider a few things so that your configuration is development-friendly, yet ready for production if you plan on using Docker there.

Here’s the rough configuration you’ll end up with in development:

[opcache]
opcache.enable=1
; 0 means it will check on every request
; 0 is irrelevant if opcache.validate_timestamps=0 which is desirable in production
opcache.revalidate_freq=0
opcache.validate_timestamps=1
opcache.max_accelerated_files=10000
opcache.memory_consumption=192
opcache.max_wasted_percentage=10
opcache.interned_strings_buffer=16
opcache.fast_shutdown=1

Note that we’ve hard-coded these values, which isn’t very flexible between environments. We’ll come back and make it more flexible in a minute!

The most important setting for development is opcache.validate_timestamps=1, which allows changes to our code to take effect. If you’re using a Docker volume, OPcache will respect file timestamps and your changes will be reflected immediately. In a production environment that’s not ideal, and that’s where our dynamic configuration will come into play shortly.

You shouldn’t copy/paste these settings verbatim without understanding what they do. The configuration primarily comes from Steve Corona’s article Best Zend OpCache Settings/Tuning/Config, which is an excellent resource for understanding each of these values. Another excellent resource on performance (including OPcache) is Scaling Laravel by Chris Fidao.
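
If you ever want to confirm which values are actually in effect inside the container, a quick throwaway script like this (served through the container’s web server; it’s not part of the setup below) will show them:

<?php

// Effective INI values after PHP has parsed every configuration file.
var_dump(
    ini_get('opcache.enable'),
    ini_get('opcache.validate_timestamps'),
    ini_get('opcache.revalidate_freq')
);

// Memory usage and hit/miss statistics once requests have warmed the cache.
print_r(opcache_get_status(false));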

Copying INI Settings in the Dockerfile

Here’s a rough Dockerfile for installing the OPcache module and copying in an INI file to configure it:

FROM php:7.2-apache-stretch

RUN docker-php-ext-install opcache

COPY docker/php/conf.d/opcache.ini /usr/local/etc/php/conf.d/opcache.ini
COPY . /var/www/html

I’m not showing a complete, working PHP application here (for one thing, Laravel’s document root needs to be /var/www/html/public); I’m merely demonstrating how to set up OPcache in this article. For a complete example, check out my course Docker for PHP Developers.

The Dockerfile assumes the following folder structure for organizing your Docker files:

├── app
├── bootstrap
├── config
├── database
├── docker
│   └── php
│       └── conf.d
├── public
├── resources
├── routes
├── storage
├── tests
└── vendor

Note that in a real project you’d probably have a base image instead of having all this in your project, but I’m showing this so you can follow along with the OPcache-specific configuration.

Building the Dockerfile

Here’s the build command you can run to experiment with configuring OPcache:

docker build --pull -t opcache-demo -f docker/Dockerfile .
docker run --rm -it opcache-demo bash
# In a running container:
/var/www/html# php -m | grep OPcache
Zend OPcache

Flexible Configuration with Environment

We have OPcache enabled, but if we want to make this configuration flexible we can use environment variables to configure INI settings:

[opcache]
opcache.enable=1
opcache.revalidate_freq=0
opcache.validate_timestamps=${PHP_OPCACHE_VALIDATE_TIMESTAMPS}
opcache.max_accelerated_files=${PHP_OPCACHE_MAX_ACCELERATED_FILES}
opcache.memory_consumption=${PHP_OPCACHE_MEMORY_CONSUMPTION}
opcache.max_wasted_percentage=${PHP_OPCACHE_MAX_WASTED_PERCENTAGE}
opcache.interned_strings_buffer=16
opcache.fast_shutdown=1

Now that we have an environment-powered INI file, let’s provide some defaults for our project in the Dockerfile:

FROM php:7.2-apache-stretch

ENV PHP_OPCACHE_VALIDATE_TIMESTAMPS="0" \
    PHP_OPCACHE_MAX_ACCELERATED_FILES="10000" \
    PHP_OPCACHE_MEMORY_CONSUMPTION="192" \
    PHP_OPCACHE_MAX_WASTED_PERCENTAGE="10"

RUN docker-php-ext-install opcache

COPY docker/php/conf.d/opcache.ini /usr/local/etc/php/conf.d/opcache.ini
COPY . /var/www/html

Note that by default we disable timestamp validation, so we need to override this environment value in development. In this post, we’re not going to cover using Docker Compose to set environment variables, but this is the rough command you can run to make sure timestamps are validated in development:

# Rebuild the image first
docker build --pull -t opcache-demo -f docker/Dockerfile .

docker run --rm -d \
  -p 8080:80 \
  -e "PHP_OPCACHE_VALIDATE_TIMESTAMPS=1" \
  opcache-demo

With the Apache container running in the background, you can validate that the OPcache timestamp setting is 1 by verifying in the container:

# get the container id
docker ps 
docker exec -it 6002d83c6d24 bash

# In a running container:
/var/www/html# php -i | grep validate_timestamps
opcache.validate_timestamps => On => On

As you can see, our configuration is now powered dynamically by environment variables! In Docker, your code will be cached by OPcache by default and will not update, because timestamp validation is disabled. Please note that if you’re using Nginx + PHP-FPM, you’ll need to ensure that clear_env = no is set in your FPM pool (probably www):

[www]
clear_env = no

You can also manually add environment variables to the pool if you don’t want to keep the entire environment available to PHP.
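
If you’re unsure whether the variables are making it through to PHP (especially with PHP-FPM and clear_env), a quick sketch like this will tell you:

<?php

// "1" (or "0") if the variable reached PHP; false if it was stripped away.
var_dump(getenv('PHP_OPCACHE_VALIDATE_TIMESTAMPS'));

// The value the ${...} substitution in opcache.ini actually produced.
var_dump(ini_get('opcache.validate_timestamps'));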

Learn More

While the ideas presented in this article aren’t exclusive to Docker, the extra boost OPcache provides in development is welcome, and you don’t have to sacrifice the ability to update your code.

If you want to learn more about developing PHP applications with Docker and PHP, including Laravel, check out my course Docker for PHP Developers.


The links included are affiliate links, which means that if you decide to buy, Laravel News gets a small kickback that helps run this site.

Unwrapping array_wrap()

Laravel has a wrap method and an array_wrap() helper to normalize values into an array. Raul (@rcubitto) shared this nice tip about it on Twitter, and before seeing his tweet I wasn’t aware of the method.

I noticed that some people asked in response to Raul’s tweet why that was needed vs. casting to an array:

$value = (array) $value;

Typecasting works for primitive values, but objects get treated differently: casting an object to an array gives you an array of its properties rather than the object wrapped in an array. For example, let’s say you want to allow a user to pass either one Eloquent model or an array of models. Here’s what happens when you try to cast a single model to an array with (array):

>>> $u = \App\User::create([
    'name' => 'Admin',
    'email' => 'admin@example.com',
    'password' => bcrypt('secret')
]);
>>> (array) $u
=> [
     "\0*\0fillable" => [
       "name",
       "email",
       "password",
     ],
     "\0*\0hidden" => [
       "password",
       "remember_token",
     ],
     "\0*\0connection" => "mysql",
     "\0*\0table" => "users",
     "\0*\0primaryKey" => "id",
     "\0*\0keyType" => "int",
     "incrementing" => true,
     "\0*\0with" => [],
     "\0*\0withCount" => [],
     "\0*\0perPage" => 15,
     "exists" => true,
     "wasRecentlyCreated" => true,
     "\0*\0attributes" => [
       "name" => "Admin",
       "email" => "admin@example.com",
       "password" => "$2y$10$LtI7hHc.eZQi9BcU61Qp3eTXliFrBq03Lav1QpLlDFvBNbsPYklYS",
       "updated_at" => "2018-11-28 23:14:40",
       "created_at" => "2018-11-28 23:14:40",
       "id" => 1,
     ],
     ....
   ]

Here’s how array_wrap() treats the same value:

>>> array_wrap($u)
=> [
     App\User {#2897
       name: "Admin",
       email: "admin@example.com",
       updated_at: "2018-11-28 23:14:40",
       created_at: "2018-11-28 23:14:40",
       id: 1,
     },
   ]

The helper documentation states, “If the given value is not an array and not null, wrap it in one.” The method looks like this at the time of writing:

/**
 * If the given value is not an array and not null, wrap it in one.
 *
 * @param  mixed  $value
 * @return array
 */
public static function wrap($value)
{
    if (is_null($value)) {
        return [];
    }

    return is_array($value) ? $value : [$value];
}

How is this helper useful?

As stated, the helper takes care of null values and returns an empty array when the value is null. Laravel has various places in the framework where you can pass an array of values or a single value. Normalizing like this makes for a nice API and Laravel takes care of creating a consistent array behind the scenes.
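
You can use the same trick in your own code. Here’s a small hypothetical example (the notifyUsers() function and WelcomeNotification class are made up for illustration) that accepts a single model, an array of models, or null:

use Illuminate\Support\Arr;

function notifyUsers($users)
{
    // Arr::wrap() normalizes the input: a single model becomes a one-element
    // array, an array passes through untouched, and null becomes [].
    foreach (Arr::wrap($users) as $user) {
        $user->notify(new \App\Notifications\WelcomeNotification);
    }
}

notifyUsers($user);            // single model
notifyUsers([$user, $other]);  // array of models
notifyUsers(null);             // does nothing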

Here’s an example of setting the model on the ModelNotFoundException class using Arr::wrap:

/**
 * Set the affected Eloquent model and instance ids.
 *
 * @param  string  $model
 * @param  int|array  $ids
 * @return $this
 */
public function setModel($model, $ids = [])
{
    $this->model = $model;
    $this->ids = Arr::wrap($ids);

    $this->message = "No query results for model [{$model}]";

    if (count($this->ids) > 0) {
        $this->message .= ' '.implode(', ', $this->ids);
    } else {
        $this->message .= '.';
    }

    return $this;
}

The wrap method is found in the Arr class (Illuminate\Support\Arr), which has an accompanying array_wrap() helper function you can use in Laravel apps.
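
Based on the wrap() source above, here’s how both flavors behave in a quick tinker session:

>>> array_wrap(null)
=> []
>>> array_wrap('Laravel')
=> [
     "Laravel",
   ]
>>> array_wrap(['already', 'wrapped'])
=> [
     "already",
     "wrapped",
   ]
>>> \Illuminate\Support\Arr::wrap('Laravel')
=> [
     "Laravel",
   ]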