A Case Study: Migrating away from AngularJS using a Monorepo and Bootstrapping

[Cover image: Many Repo vs. Mono Repo (image credit: jaeyow@dev.to)]

Introduction

The Scrappy Dev was recently contracted to help a client with an AngularJS frontend application update their frontend calls as they switched cloud providers. Unfortunately, nobody at the company was able to build the AngularJS application locally, and all attempts to run it locally were failing. That meant the only way to test the application was to push it to the cloud and test it there, which would make development incredibly inefficient and therefore impractical for a client on a budget.

The choice then came down to whether to update the AngularJS application or migrate to a new framework. If we migrated, we would need a migration path that would not disrupt the business, and we wanted to make sure development would remain efficient for the developers.

Migration Goals

The goals of the migration were as follows:

  1. Ensure that the business was not disrupted during the migration.
  2. Ensure that the client could QA the application as it was migrated, allowing for quick turnaround times to fix any issues that were found.
  3. If the client could not QA the application, ensure that the client could continue to use the old application while the new application was being developed.
  4. Allow the client to migrate the new application in pieces if they ran out of budget.

Bootstrapping the Navigation

The first thing we needed to do was to ensure that we could continue to develop the application without business disruption.

The initial challenge was building consistent navigation across multiple technology stacks--some served by AngularJS and Pyramid, others served by React, and others served by Zope. There could be no conflicts between any global variables or CSS classes, and the navigation package had to be able to bootstrap onto the served page without disrupting the served page.

To ensure consistency, we created a navigation package, built in React and isolated using Styled Components, that could be served independently of the technology stack.
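
As a minimal sketch of what that isolation looks like (the markup and styles here are illustrative, not the client's actual navigation), Styled Components generates hashed class names at runtime, so the injected navigation cannot collide with the host page's global CSS:

// navigation/src/components/Navigation.tsx (illustrative sketch)
import React from 'react'
import styled from 'styled-components'

// styled-components emits hashed class names (e.g. "sc-bdfBwQ"),
// so these rules cannot clash with the host page's existing CSS classes.
const Bar = styled.nav`
  display: flex;
  align-items: center;
  gap: 1rem;
  padding: 0.5rem 1rem;
`

const NavLink = styled.a`
  text-decoration: none;

  &:hover {
    text-decoration: underline;
  }
`

export const Navigation = () => (
  <Bar>
    <NavLink href="/">Home</NavLink>
    <NavLink href="/account">Account</NavLink>
  </Bar>
)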

Ideally each stack would simply add a <script> tag to their HTML which would bootstrap the navigation onto the served page. Once the new navigation was loaded, it would take over the navigation of the page, and the old navigation would be nullified.

The loaded script looked something like this:

// Load the navigation package
;(function fetchNavigationManifest() {
  const loadScript = (url) => {
    return new Promise((resolve, reject) => {
      const script = document.createElement('script')
      script.src = url
      script.async = true
      script.onload = resolve
      script.onerror = reject
      document.body.appendChild(script)
    })
  }

  // Fetch the asset-manifest.json file
  fetch('/your-server-path/asset-manifest.json') // Loads the asset-manifest.json file for the React navbar
    .then((response) => response.json())
    .then((manifest) => {
      // Load all the CSS files from the manifest
      const cssFiles = Object.values(manifest.files).filter((file) =>
        file.endsWith('.css')
      )

      cssFiles.forEach((file) => {
        const link = document.createElement('link')
        link.href = file
        link.rel = 'stylesheet'
        document.head.appendChild(link)
      })

      // Load all the JavaScript files from the manifest
      const jsFiles = Object.values(manifest.files).filter((file) =>
        file.endsWith('.js')
      )

      const loadPromises = jsFiles.map((file) => loadScript(file))

      // When all the JavaScript files are loaded, the React application
      // (whose entry script is among them) starts and mounts itself
      return Promise.all(loadPromises)
    })
    .catch((error) => {
      // Log bootstrap failures without breaking the host page
      console.error('Failed to bootstrap the navigation:', error)
    })
})()

Walking through this function step by step, we can see that it does the following:

  1. We fetch the asset-manifest.json file from the server. This file is generated by Create React App and contains a list of all the files that are produced when you build your React application.
  2. We then filter the files to get all the CSS files and load them into the head of the document.
  3. We then filter the files to get all the JavaScript files and load them into the body of the document.
  4. Once all the JavaScript files are loaded, we can start our React application.
  5. The whole thing is wrapped in an IIFE (Immediately Invoked Function Expression) that runs as soon as the script loads, so its variables stay encapsulated and do not leak onto the host page.

The JS above would be stored on a server and included on each served page via a script tag. When the browser executes it, it loads the navigation package's assets and bootstraps the navigation onto the served page.

Here's a sample of what a served HTML page would look like with the loader script added (the loader's file name here is illustrative):

...
<head>
  ...
</head>
<body>
  ...
  <!-- the loader script shown above -->
  <script src="/your-server-path/navigation-loader.js"></script>
</body>

The order of operations on the client would look something like this:

  1. The server serves the original page with the old navigation.
  2. When the browser loads the page, it executes the loader script.
  3. The loader fetches the asset-manifest.json file from the server.
  4. The loader then fetches all the CSS and JavaScript files listed in the asset-manifest.json file and loads them into the head and body of the document.
  5. Once all the JavaScript files are loaded, the React application starts and takes over the navigation of the page (see the sketch below).
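
The "takes over the navigation" step isn't shown in the loader above; a minimal sketch of it, assuming React 18's createRoot API and a hypothetical #legacy-nav element on the host page (the selector, container id, and entry file are assumptions), might look like this:

// navigation/src/index.tsx (illustrative sketch)
import React from 'react'
import { createRoot } from 'react-dom/client'
import { Navigation } from './components/Navigation'

// Hide the legacy navigation rendered by the host page, if it exists.
// The '#legacy-nav' selector is an assumption, not the client's real markup.
const legacyNav = document.querySelector('#legacy-nav')
if (legacyNav instanceof HTMLElement) {
  legacyNav.style.display = 'none'
}

// Create a container at the top of the page and mount the React navigation into it.
const container = document.createElement('div')
container.id = 'react-nav-root'
document.body.prepend(container)

createRoot(container).render(<Navigation />)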

The end result: end users may not even notice that the navigation has changed, and the developers can continue to develop the application without disrupting the business.

Choosing the Framework and Stack

The next step was to migrate the AngularJS application to a new framework. Since React was the framework we were most familiar with and it covered the use cases, we went with the following stack:

  1. React
  2. TypeScript
  3. Tailwind CSS for utility-based styling
  4. MUI (Material UI) for quick turnaround on pages
  5. Styled Components for isolated styling when needed
  6. React context for state management, since none of the pages really had a need for a global state management solution
  7. Lerna for managing the monorepo
  8. Craco for customizing the Create React App build process
  9. ESLint and Prettier for code quality

Since there was not enough budget for a designer, MUI was chosen to reduce the need for custom-designed components. MUI has a lot of components that can be used out of the box, so we could focus on the business logic rather than the design.

Migrating the Application

Migration would need to be done on a page-by-page basis, as we wanted to ensure the client could QA the application as it was migrated. Since the client's time was limited, we wanted to ensure that the client could QA pieces of the application and have quick turnaround times to fix any issues that were found.

Once we realized that the navigation would need to be consistent between the React pages and bootstrapped pages, we decided to use a monorepo to manage the migration. This would allow the same code to be used for both the main React pages and the bootstrapped pages, and would allow us to manage the migration in a single repository.

We used Lerna to manage the monorepo, and we used Create React App to bootstrap the React application. We also used Craco to customize the Create React App build process, and we used TypeScript to ensure type safety across the monorepo.

The structure of the monorepo looked something like this:

monorepo/
  packages/
    common/
      .eslintrc.js
      craco.config.js     <-- used to customize Create React App's build process
      tsconfig.json
      package.json
      ...

    navigation/
      src/
        components/
          Navigation.tsx
        index.ts

      .eslintrc.js          <-- imports the common .eslintrc.js
      craco.config.js       <-- imports the common craco.config.js
      tsconfig.json         <-- imports the common tsconfig.json
      package.json
      ...


    app/
      src/
        pages/
          Page1.tsx
          Page2.tsx
          ...
        index.ts

      .eslintrc.js          <-- imports the common .eslintrc.js
      craco.config.js       <-- imports the common craco.config.js
      tsconfig.json         <-- imports the common tsconfig.json
      package.json
      ...

  package.json
  lerna.json
  ...
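
As a rough sketch of how the shared configuration worked (the file layout comes from the project above, but the exact contents here are assumptions), each package's config files simply extended or re-exported the ones in common:

// packages/app/craco.config.js -- re-export the shared Craco config
module.exports = require('../common/craco.config')

// packages/app/.eslintrc.js -- extend the shared ESLint config
module.exports = {
  extends: [require.resolve('../common/.eslintrc.js')],
}

// packages/app/tsconfig.json -- extend the shared TypeScript config (tsconfig files allow comments)
{
  "extends": "../common/tsconfig.json",
  "include": ["src"]
}

The navigation package's config files followed the same pattern, and Lerna's local package linking meant common could be consumed by navigation and app without being published to a registry.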

Login and Authentication

There was an interesting predicament as we moved to the new application. While both the old and new stacks were using header cookies for authentication (in Zope and Pyramid), Zope did not have RESTful endpoints for authentication. This meant that we could not simply use the same authentication endpoints for the new application.

Now, normally you would just use a POST request to authenticate the user, but Zope had a quirk where the server would not recognize the user unless the server had already established a header cookie with the client.

To get around this, we did the following (a sketch of the flow follows the list):

  1. When the page loads, we make a GET call to the base Zope URL to establish a baseline header cookie for the server to recognize. Without this initial call, the POST call is considered the establishing call instead.
  2. When the user logs in, we make a POST call to the Zope URL with the user's credentials. If the user is authenticated, the server will return a 200 status code and a header cookie that will be used for the rest of the session.
  3. If the user is not authenticated, the server will return a 401 status code.
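
A minimal sketch of this flow using fetch is shown below; the base URL, endpoint path, field names, and response handling are assumptions for illustration rather than the client's actual Zope configuration:

// @app/src/api/auth.ts (illustrative sketch; URL and field names are assumed)
const ZOPE_BASE_URL = '/zope' // placeholder for the real Zope base URL

// Step 1: GET the base URL so the server establishes the baseline header cookie.
// `credentials: 'include'` tells the browser to store and send that cookie.
export async function establishSession(): Promise<void> {
  await fetch(ZOPE_BASE_URL, { credentials: 'include' })
}

// Step 2: POST the user's credentials. A 200 means the existing cookie is now
// an authenticated session; a 401 means the credentials were rejected.
export async function login(username: string, password: string): Promise<boolean> {
  const response = await fetch(`${ZOPE_BASE_URL}/login`, {
    method: 'POST',
    credentials: 'include',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ username, password }),
  })

  if (response.status === 401) {
    return false
  }

  return response.ok
}

The app would call establishSession() once on page load, and the login form's submit handler would then call login() with the user's credentials.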

Since we were also refreshing the look of these pages while we were working on this problem, here is the resulting change to the user interface:

Login: Before

Legacy Login

Login: After

React Login

While the most important part was keeping the header functionally the same, we took the opportunity to make it more visually consistent and to improve the login page's user experience. The header was previously narrow on the login page and fully extended on the rest of the application's pages. The login page in general felt like a frame within the application, and we wanted it to feel like part of the application.

On a note of optimization, clicking the "Forgot Password" button in the legacy application required a round trip to the Zope server. In the updated React application, the browser simply navigates to a locally cached client-side page, so no round trip to the server is required. This resulted in a much faster user experience.

Forgot Password: Before

Legacy Forgot Password

Forgot Password: After

React Forgot Password

You may notice that the updated "Forgot Password" screen has fewer inputs. Namely, the "User name" and "Service Number" fields were dropped.

When I inquired about the page, I asked the client how the "User name" and "Service Number" fields would be used differently from an "Email" field. It turned out that a very small subset of users might use the additional fields, but the majority of users would only need the "Email" field.

As a result, we decided to drop the "User name" and "Service Number" fields and simply use the "Email" field. This would simplify the user experience and reduce the amount of data that the user would need to input.

Change Password: Before

Legacy Change Password

Change Password: After

React Change Password p1 React Change Password p2

You may notice that the legacy "Change Password" screen doesn't provide a way for the user to set their own password. This can be problematic for a few reasons:

  1. Even if the user remembers their current password, it is reset before they receive the email, which could lock them out of their account until the email arrives.
  2. Similar to the first reason, if the server changes the user's password but fails to deliver the email, the user will be locked out of their account.
  3. The user may wish to set their own password.
  4. You could lock someone out of their account if you change their password without their consent.

As such, I advocated for a change to the "Change Password" screen: the new screen would let the user set their own password once they received the email, mitigating the risk of locking them out of their account.

Routing

We considered replacing the old routes with new ones, but since the old routes were already in use, we decided to keep them and add new routes for the new pages. This would mitigate the risk of disruption to the business and would allow us to QA the new pages as they were migrated. If any issues were found, we could simply roll back to the old pages and fix the issues.
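
As a sketch of what the new routes looked like on the React side (React Router is assumed here as the routing library, and the paths are illustrative; only the page names come from the migration described above), the migrated pages were registered under their own routes while everything else continued to be served by the legacy stack:

// @app/src/App.tsx (illustrative sketch; routing library and paths are assumed)
import React from 'react'
import { BrowserRouter, Routes, Route } from 'react-router-dom'
import { Login } from './pages/Login'
import { ForgotPassword } from './pages/ForgotPassword'
import { ChangePassword } from './pages/ChangePassword'

// Only the migrated pages are routed through React; any path not listed here
// is still served by the legacy stack under its old route.
export const App = () => (
  <BrowserRouter>
    <Routes>
      <Route path="/login" element={<Login />} />
      <Route path="/forgot-password" element={<ForgotPassword />} />
      <Route path="/change-password" element={<ChangePassword />} />
    </Routes>
  </BrowserRouter>
)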

Limited Monorepo

A good use case for a monorepo is when you have a lot of shared code between different packages--especially when you have many teams all working with the code. In our case, however, the only shared code was the navigation package, which was used by both the bootstrapped pages and the React pages.

As such, there was no reason to have many packages in the monorepo; therefore, we limited the monorepo to the following packages:

  1. the common package
  2. the navigation package
  3. the app package

The common package contained shared configurations and components (e.g., <Navigation />) that were used by both the navigation and app packages.

The navigation package contained the bootstrapped navigation bundle that was loaded onto the served pages.

The app package contained the React application that was used to serve the new pages.

Both the navigation and app packages could be deployed independently of each other, and the navigation package could be bootstrapped onto the served pages without disrupting them.

Versioning

A fun problem that we experienced in building the applications was versioning. Anytime a problem was encountered, we needed to ensure that we were on the "same page," so to speak.

We bumped the package.json version of each deployable package (i.e., navigation and app) and wrote a script that would append the version to the window object.

When we encountered issues (which were inevitable), we would have the client check which deployed version of the app or navigation they were using by inspecting the window object in their browser's console.

As an example, the window object would look something like this:

window.clientApp = {
  app: {
    version: '1.2.3',
  },
  navigation: {
    version: '1.1.2',
  },
}

The script used to append the version to the window object looked something like this:

// @common/utils/appVersion.ts
import { getEnv } from './env'

type AppVersion = {
  version: string
  env: string
}

interface ClientApp {
  [key: string]: AppVersion
}

interface ClientAppWindow extends Window {
  clientApp?: ClientApp
}

// Append the version to the window object
export function setAppVersion(appName: string, version: string) {
  const appVersion: AppVersion = {
    version: version || 'unknown',
    env: getEnv(), // get the environment from a @common/utils/env.ts utility file
  }

  const clientAppWindow = window as ClientAppWindow

  // Create the container object on window the first time this runs
  if (!clientAppWindow.clientApp) {
    clientAppWindow.clientApp = {}
  }

  clientAppWindow.clientApp[appName] = appVersion
}

The script would be used in the app and navigation packages to append the version to the window object. An example of how it would be used in the app package would look something like this:

// @app/src/index.ts
import { setAppVersion } from '@common/utils/appVersion'
import { version } from '../package.json'

setAppVersion('app', version)

The script would be used in the navigation package in the same way.

Conclusion

The migration was a success.

  1. The business had little disruption during the migration.
  2. The client was able to QA the application as it was migrated, allowing for quick turnaround times to fix any issues that were found.
  3. When the client could not QA the application right away (due to time constraints), the client could continue to use the old application.
  4. The client was able to migrate the critical pages of the application in pieces when they ran out of budget.