Small Changes to a Web Application That Make Big Differences in Performance

Any web developer will tell you that initial programming comes with hurdles. When you’re stuck in a problem and can’t figure out why your code won’t work, you go through every possible change to figure out just how to make it work. Standards and clean code go out the window, and you bang away at the keyboard until your code works. Once it works, you don’t want to make any changes for fear that it won’t work again.

It’s a common habit to then leave messy code in place with the attitude that “it works, so let’s not mess with it.” Messy code means messy performance, and the right thing to do is go back and clean it up. If you leave the messy code in place, it turns into “spaghetti code” and makes it difficult for other programmers to maintain it. Eventually, a mole hill really does turn into a mountain when performance wanes due to poorly constructed code.

At some point, it comes time to review the backend structure and make minor changes to it. You don’t have to overhaul the entire code base to make minor tweaks, but you do need to test and QA it before deployment. Even minor changes can make a difference, and here are some that shouldn’t take too much time.

Note: these changes are general and can be applied to any web development language.

Reduce Calls to the Server

Whether you’re redirecting the user, making calls to a database, or just refreshing the browser window, all of these events tax the web server. When you’re in the midst of a problem that you can’t resolve, you often make calls or trigger unneeded events as you figure out why your code won’t work. Usually, you can combine multiple calls into one and reduce resource usage.

This problem often stems from database calls. For instance, you pull a record set from the database and then loop through each record, making another call to the database for additional information. If you have a hundred records in your result set, you're making 101 calls to the database for just one user. Combine them into a single query, and you could see an immediate improvement in speed from this one change.
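
As a sketch of the fix, here is the N+1 pattern next to its single-query replacement, using Python's built-in sqlite3 (the customers/orders tables and amounts-in-cents are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         total_cents INTEGER);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 999), (2, 1, 2450), (3, 2, 500);
""")

def n_plus_one():
    # Anti-pattern: one query for the list, then one more query per row.
    rows = []
    for cid, name in conn.execute("SELECT id, name FROM customers"):
        total = conn.execute(
            "SELECT SUM(total_cents) FROM orders WHERE customer_id = ?",
            (cid,)).fetchone()[0]
        rows.append((name, total))
    return rows

def single_query():
    # One round trip: the database joins and aggregates internally.
    return list(conn.execute("""
        SELECT c.name, SUM(o.total_cents)
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id, c.name
    """))

print(n_plus_one())    # [('Ada', 3449), ('Grace', 500)]
print(single_query())  # same result from a single round trip
```

The aggregate here is just a stand-in; the same idea applies whenever you fetch related detail rows inside a loop instead of with a JOIN.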

Compress Images

It’s hard to imagine that people still use dialup Internet, but as of 2015 CNN reported that 2.1 million users still used AOL dialup. If you remember the days of dialup, you know that anything over a few kilobytes was a terrible pain to download. Most developers think that only broadband users exist today, but plenty of people use dialup, so you need to consider these users when you create image-rich sites.

One way to combat the issue is to compress images. The images are compressed on your server and transferred to the user's computer in a compressed state. Once they reach the user's computer, the browser decompresses them. This lets you serve high-quality images without slowing down your website.
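
Note that text-based assets (HTML, CSS, JS, SVG) see the biggest wins from transfer compression; formats like JPEG and PNG are already compressed internally, so gzipping them gains little. A quick sketch of the savings using Python's standard gzip module, with repetitive SVG markup standing in for a typical text asset:

```python
import gzip

# Repetitive SVG markup stands in for a typical text asset.
payload = b"<svg>" + b"<rect width='10' height='10'/>" * 500 + b"</svg>"
compressed = gzip.compress(payload)

print(f"{len(payload)} bytes raw, {len(compressed)} bytes gzipped")

# The round trip is lossless: the browser gets back exactly what you sent.
assert gzip.decompress(compressed) == payload
```

On the web this is negotiated automatically via the Accept-Encoding and Content-Encoding headers once you enable it on the server.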

Don’t Override Caching

You can force the user to download your pages every time they visit, but it’s a strain on their bandwidth as well as your server resources. It makes sense to force the user to download dynamic pages that update regularly, but certain components of your site rarely change. Think of the footer, header and navigation sections. Do they change often? If not, then you can leave them cached.

Most of the time, you don't need users to re-download CSS and JavaScript files either. New sites can have dozens of JS and CSS files for the user to download. It's much different from the old days, when a few JS files and one CSS file were all it took to style a site. Now, dozens of components make up one site page.
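
Caching behavior is controlled with the Cache-Control response header. Here is a framework-agnostic sketch of the policy; the helper name, extension list, and max-age values are illustrative, not prescriptive:

```python
import os

# Hypothetical helper: pick a Cache-Control header based on the asset type.
STATIC_EXTENSIONS = {".css", ".js", ".png", ".jpg", ".svg", ".woff2"}

def cache_control_for(path: str) -> str:
    ext = os.path.splitext(path)[1].lower()
    if ext in STATIC_EXTENSIONS:
        # Rarely-changing assets: let browsers keep them for a week.
        return "public, max-age=604800"
    # Dynamic pages: revalidate with the server on every request.
    return "no-cache"

print(cache_control_for("/static/site.css"))  # public, max-age=604800
print(cache_control_for("/cart"))             # no-cache
```

Versioning static filenames (site.a1b2c3.css) lets you cache aggressively and still push updates immediately.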

You can also utilize a CDN for these components. You transfer the load times to the CDN and reduce the overhead on your internal servers.

Minify Components

Just as you compress images, you can minify your external scripts and stylesheets. You can either use a tool for this or go through the code yourself. Google PageSpeed Insights gives you some good suggestions for reducing the size of your code files.

Some libraries already ship minified versions; jQuery is one example. For your custom files, you will need your own tooling: cssmin.js is one tool for CSS, and JSMin will help you with JavaScript files.
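
Real minifiers do much more (renaming identifiers, dead-code removal), but the core idea is simply stripping characters the browser doesn't need. A toy CSS minifier as a sketch:

```python
import re

def minify_css(css: str) -> str:
    # Strip /* ... */ comments, collapse whitespace,
    # then drop spaces around punctuation.
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)
    css = re.sub(r"\s+", " ", css)
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)
    return css.strip()

src = """
/* header styles */
h1 {
    color : #333 ;
    margin : 0 ;
}
"""
print(minify_css(src))  # h1{color:#333;margin:0;}
```

For production use, prefer a maintained minifier; this sketch ignores edge cases like strings and url() values that contain punctuation.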

Resize Images. Don’t Just Upload Huge Files

A common mistake among new developers is to upload a huge, full-resolution image file and then use the HTML image tag to resize it in the browser. When this happens, the user downloads the full-size file and the browser resizes it during rendering. No matter how small the image appears on the page, the user is still forced to download the large original.

If you need to display a thumbnail image, use a thumbnail-sized file. Resize it in your favorite image editor and upload it to your site. It means you need several files for different sizes that you display, but it also speeds up load times.
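
One common approach is to export each size from your image editor and let the browser choose the smallest file that fits via the srcset attribute. A sketch that generates the markup (the helper and the file-naming scheme are hypothetical):

```python
# Hypothetical helper: build an <img> tag with a srcset so the browser
# downloads the smallest file that fits, instead of always the original.
def img_with_srcset(base: str, widths, alt: str) -> str:
    srcset = ", ".join(f"{base}-{w}w.jpg {w}w" for w in widths)
    return (f'<img src="{base}-{min(widths)}w.jpg" '
            f'srcset="{srcset}" alt="{alt}">')

print(img_with_srcset("hero", [320, 640, 1280], "Product photo"))
```

Each referenced file (hero-320w.jpg and so on) is a real resized copy you uploaded, not a browser-side resize.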

Prioritize Important Content Above-the-Fold

Anything at the top of your page that's visible without scrolling is considered "above the fold," and it should load first. Lazy loading helps here: it uses Ajax to load components of your site asynchronously, so the top of the page takes priority while slower modules load in the background.

Your users won’t even see the bottom section loading since they normally take a few seconds to digest the top. This method also loads the top section faster, so your perceived speed is much better than actual speed.

Most of these suggestions take little effort from your developers. Once you clean up the code, you might see an improvement without any other changes. Just make sure to test the site before releasing it to the public.

Get the latest in Web Performance
Subscribe now to receive the latest news, tips and tricks in web performance!
The information you provide will be used in accordance with the terms of our privacy policy

5 Application Tools for Rapid Application Deployment

Deploying updates to your application is initially easy. It takes a few minutes to deploy a few files and create backups. But when your code base grows, manual deployment is no longer feasible. Ask any developer and they'll tell you about the disasters that happen when you manually deploy files from a large code base: files are accidentally forgotten and bugs are introduced, backups are skipped and rollback becomes complicated, or whole directories are missed and deployment is incomplete. You can fix these problems by scripting your deployment.

Before we introduce tools, you should decide what kind of deployment you need. Desktop applications take a different approach than web deployment. This article covers web deployment, which is basically backing up files and updating them to the latest version. With the right tools, however, you can accomplish a one-click deployment solution. It’s great for large applications where developers and operations people are stuck for an entire day deploying various upgrades across the network.

Some solutions also depend on the operating system you use, but most of these tools will work cross-platform.


Jenkins

Jenkins is one of the most versatile tools on the market. It lets you script your deployment, save it as a configuration, and then trigger it with one click. You can deploy to almost any kind of server, but Jenkins relies heavily on your own scripts. It's written in Java, so it runs on almost any platform that supports Java applications.

Visual Studio

If you develop in Windows, you already work with Visual Studio. What you might not have worked with yet is its deployment tools. You can use Visual Studio with cloud services and deploy directly from your development environment. If you create Windows applications, the easiest way to deploy applications is using Visual Studio and Team Foundation Server.


Chef

If you have a large enterprise or a complex deployment infrastructure, Chef is the best way to go. Chef has a steeper learning curve than the other tools, but it's well worth the effort if you have a large application to deploy. Enterprise solutions often include an application that spans multiple servers; miss just one of those servers and your entire suite of productivity tools crashes. Chef ensures that this won't happen, and your operations team can deploy software along with other critical components across the cloud.


Bamboo

If you're an Agile shop, you probably use one of the common Agile tools to manage your project. If you use JIRA, Bamboo is a good deployment tool. It integrates directly with JIRA, so you can manage your project and deploy from one common interface. If you haven't heard of JIRA, it's a popular Agile project management tool, far more flexible and versatile than free tools such as Trello.

UrbanCode Deploy

If you need to integrate testing into your deployment procedures, UrbanCode Deploy is a strong option. It's designed and distributed by IBM, and it integrates testing and deployment beyond software alone: it also helps you test and deploy configurations, middleware, and database changes.

Any one of these solutions will take some time for your team to learn, unless you’re already using Visual Studio. However, the manpower, hours and effort for deployment are greatly reduced. All of them give you detailed errors should deployment fail, but your teams can one-click deploy and return to what they were doing while they wait. It’s complete automation for deployment.

If your application and its resources are growing, using automation tools is the only way to go to reduce the chance of deploying your application with bugs and forgotten backups.


5 Ways to Tune Database Performance

It’s common for developers without any database experience to throw together a design without understanding the critical role it plays in application performance. Unless you have a DBA to review stored procedures and table design, it can become a major bottleneck for your application. Relational databases are workhorses, but if you feed them junk, you get junk back. Here are some ways to feed your workhorse to get the best performance in return.

1) Limit Functions in the WHERE Clause

Functions in the development world make code much more efficient, but it’s not always the case with database programming. With database programming, functions run on every record, and they can be a huge bottleneck on performance.

Take the following query as an example:

SELECT * FROM Orders WHERE FirstOrder(CreateDate) = '1/1/2016' AND LastOrder(CreateDate) = '2/1/2016'

In the above query, the two functions each return a date, and a record is returned when the dates match. What if you have a million records? The functions run against every row, so this query runs very slowly. A better way to work with this data set is the BETWEEN clause:

SELECT * FROM Orders WHERE CreateDate BETWEEN '1/1/2016' AND '2/1/2016'
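
You can watch the query planner make exactly this decision with SQLite's EXPLAIN QUERY PLAN. A sketch, using a simplified stand-in for the Orders table above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, create_date TEXT);
    CREATE INDEX idx_orders_date ON orders(create_date);
""")

def plan(sql):
    # Ask SQLite how it would execute the query.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[-1] for row in rows)

# Wrapping the column in a function hides it from the index:
# the planner falls back to scanning every row.
slow = plan("SELECT * FROM orders WHERE date(create_date) = '2016-01-01'")

# Comparing the bare column lets the planner use the index.
fast = plan("SELECT * FROM orders WHERE create_date "
            "BETWEEN '2016-01-01' AND '2016-02-01'")

print(slow)  # e.g. SCAN orders ...
print(fast)  # e.g. SEARCH orders USING INDEX idx_orders_date ...
```

The exact wording varies by SQLite version, but the SCAN versus SEARCH distinction is the point: one touches every row, the other seeks through the index.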

2) Avoid Using Cursors at All Costs

Web developers can't help but add loops to their code. There are ways to optimize loops, but with databases the standard practice is to avoid them as much as possible. Cursors are the database equivalent of a loop: you grab some records and loop through them. About the only time a DBA will allow a cursor is on a non-critical reporting database where the cursor can't be avoided.

Most cursors can be replaced with a better query. Evaluate any cursor and find a way to run it with a better UPDATE or SELECT query.
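
A sketch of the two styles using Python's sqlite3 (the products table and the 10% price increase are hypothetical). The set-based version does in one statement what the cursor-style loop does row by row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price INTEGER)")
conn.executemany("INSERT INTO products (price) VALUES (?)",
                 [(100,), (250,), (400,)])

def raise_prices_cursor_style(pct):
    # Anti-pattern: fetch every row, then issue one UPDATE per row.
    for pid, price in conn.execute("SELECT id, price FROM products").fetchall():
        conn.execute("UPDATE products SET price = ? WHERE id = ?",
                     (round(price * (1 + pct)), pid))

def raise_prices_set_based(pct):
    # One statement; the engine applies it to every row internally.
    conn.execute(
        "UPDATE products SET price = CAST(ROUND(price * (1 + ?)) AS INTEGER)",
        (pct,))

raise_prices_set_based(0.10)
print(conn.execute("SELECT price FROM products ORDER BY id").fetchall())
# [(110,), (275,), (440,)]
```

On a real server the gap is far larger than it looks here, because each per-row statement is also a network round trip.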

3) Drop Indexes on Large INSERT Procedures

Most developers know that indexes are important for speed, but what they may not know is that those same indexes slow down INSERT procedures. If you need to import thousands of records, it's better to drop the indexes, perform the import, and then re-create the indexes.

Of course, this only applies to large INSERT procedures, such as importing data from another table or a flat file. You should also perform the import during off-peak hours, because once you drop the indexes you affect queries running against the application.
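
A sketch of the pattern with SQLite (the schema is hypothetical; on a production server you would script your real index definitions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE INDEX idx_orders_customer ON orders(customer);
""")

rows = [("customer-%d" % (n % 50),) for n in range(10000)]

# Drop the index, bulk-load, then rebuild the index once at the end
# instead of updating it on every single INSERT.
conn.execute("DROP INDEX idx_orders_customer")
conn.executemany("INSERT INTO orders (customer) VALUES (?)", rows)
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer)")

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 10000
```

Rebuilding once sorts the data a single time; maintaining the index during the load pays that cost incrementally on every row.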

4) Do Your Constraints in Application Code Rather Than the Database

Foreign key constraints are nothing new in relational databases; they prevent orphaned records and duplicate data. Constraints are one of the main features of a relational database, but they also take a toll on performance. You can leave your foreign keys active, but it's better to perform the data-validation logic in the application.

When a foreign constraint rule is violated, the database needs to roll back the transaction and send the error back to your application. It’s better to stay a step ahead and do any logic needed in the application, send the data for storage, and then get confirmation from the database. It eliminates the overhead of rollbacks, and you still stick to your foreign constraint rules.
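
A sketch of checking the rule in application code first, using SQLite with a hypothetical schema. Note the constraint stays enabled as a backstop, as suggested above; in a concurrent system the check in the application can race, so the database-level rule still matters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # the backstop stays enabled
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id));
    INSERT INTO customers VALUES (1);
""")

def insert_order(customer_id):
    # Validate in the application first: no failed INSERT, no rollback.
    exists = conn.execute("SELECT 1 FROM customers WHERE id = ?",
                          (customer_id,)).fetchone()
    if exists is None:
        return False
    conn.execute("INSERT INTO orders (customer_id) VALUES (?)", (customer_id,))
    return True

print(insert_order(1))   # True
print(insert_order(99))  # False
```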

5) Use NOCOUNT on Application Queries

When you run a query, the database takes a count of the records affected and returns the count to the application. If you’re running a query that’s later used on the frontend to view data, you can take a count of the records using the local application language. You don’t need the count returned from the database.

The NOCOUNT directive (SET NOCOUNT ON in SQL Server) tells the database not to bother with the count and just return the record set. It's a small tweak to your queries that adds a performance boost to the frontend application.

It often takes some analysis and sometimes table redesign before you get your database to optimal performance. Depending on your platform, you can use query analyzers to identify any queries that could be the crux of your bottleneck. Just remember that your database should always be maintained and reviewed. Any queries should be reviewed before you promote them to production or you could find yourself performing emergency maintenance.

Now that you've tuned your database, let's move on to improving your PHP application performance and optimizing that JavaScript.


5 Reasons Why Your Site is Too Slow

Most webmasters don’t think about scalability in terms of performance until their users start complaining. These users could be internal employees using a cloud application for productivity or customers browsing your site for products. Poor performance is one of the biggest website killers, so you should do everything you can to ensure that you monitor your site for speed.

Most site owners start off with a fast site, but then load times get worse as they build traffic. Speed issues can work against you, and it’s one of those issues that slowly catches up to you. Before you know it, you’re losing customers based on your server’s poor performance. There are the obvious reasons such as a slow server or low bandwidth, but here are some reasons you possibly didn’t think of.

1) You’re Using Shared Hosting

Shared hosting is great for small sites with little traffic, but you’re sharing a server with sometimes thousands of other site owners. You can use tools such as Domain Tools to find out how many other sites are hosted on your server.

While most hosts limit the amount of resources one site can use, you don't know what other people are doing on their sites. Just one site can ruin it for everyone else on the same server, so it's best to avoid shared hosting unless you host a very small, non-critical site that doesn't depend on staying online and performing well to make money.

2) Your Server’s Location

Data travels across fiber at nearly the speed of light, but distance is still a factor in website performance. If your site serves mainly American users but is hosted in China, that distance can hurt your performance.

This issue is largely eliminated with a CDN. CDNs provide servers across the globe, and data is served from the data center geographically closest to your users. Data still travels across a wire, but shedding thousands of miles between server and user greatly improves your website's performance.

3) You’re Using Old, Bulky Image Formats

BMP and JPEG image formats are fine for images you share on social media, but not for a website that relies on performance for traffic and user engagement. BMP files are uncompressed and far too large, and re-saving a JPEG repeatedly degrades its quality. PNG is often a better choice for graphics because its compression is lossless.

If you still decide to use large images, you should compress them. Compressing images will keep the file small as it transfers to your user’s browser, and then the browser expands the image when it displays.

If you're using WordPress, there are several plugins available for image compression and optimization worth considering.


4) Using Images for Text Instead of Cloud Fonts

It used to be that the only way you could ensure that special fonts showed up in the user’s browser was to display messages in an image. With the advent of cloud fonts such as Google Fonts, it’s no longer necessary.

If your site has been around for at least a decade, it’s possible that your site is using images to display specialized text. It’s time to change these images to cloud fonts for better performance. With cloud fonts, you no longer need to worry that the user won’t see the content the same way that you see it in your browser.

5) Unused, Bulky Plugins

It’s not uncommon to install WordPress plugins that look interesting. If you don’t use them, you leave them active just in case you change your mind. The problem with leaving all of these plugins active is that they can cause slowness issues on your site. Even worse, they can leave your site vulnerable to cyber attacks if they are poorly coded and aren’t maintained and patched by the developer.

At the very least you should disable plugins that you don't use, but the better route is to delete them from your dashboard. Remember that active plugins must load each time someone opens your site, so too many of them affect performance. You should only install plugins whose developer provides regular updates each time WordPress upgrades its platform.

Always Monitor Your Site

The only way to identify performance issues is to monitor your site. Whether it’s a bad plugin or your host server is overloaded, you can identify these issues by monitoring your site. Severe problems can be identified by just accessing the site once a day and checking on its speed.

If you feel that your performance is suffering, a CDN can eliminate many of the common causes of slowness. A CDN uses data centers around the world, so transfer distance and shared-hosting contention are no longer an issue for you.

Don't ignore performance issues, because they only get worse as your site grows. Your site might be slightly slow now, but it will get slower as more users visit.

Bonus Tip: Have you tuned the JavaScript code on your site? Not only can you improve performance by hosting your JavaScript on a CDN, but here are six ways you can make the JavaScript on your site fly!


The Top 10 Chrome Extensions for Site Testing

Your site is launched, but have you tested it yet? New website developers skimp on testing, because it adds time to development and let’s face it – you’re too excited to launch your site and start making money. In reality, you should take a step back and thoroughly test your site before deploying, but the temptation to launch quickly gets even the best of developers. If you’ve already launched your site, here are 10 Chrome extensions to help you test your application.

1) Lightshot

At some point, you will find something that doesn't work or look right on your site, and you need to communicate it to your developer. Lightshot is a handy tool for taking a screenshot and adding annotations to it. Screenshots are the best way to show a developer what you found, and Lightshot makes it easy to capture and send bug reports to your development team.

2) Edit This Cookie

While you’re testing, you need to emulate cookies and the different events they trigger. Edit This Cookie allows you to change a cookie’s value to review your site’s UI and other features it triggers. For instance, if you have A/B testing enabled, you can review the different layouts based on cookie values. This is an important test if you have different UI/UX layouts based on cookie values.

3) Cache Killer

As you test, your browser caches data, which can mask the effects of your code changes. Each time you change your code, you should clear the browser cache. Cache Killer makes it much more convenient for developers to control caching while they test.

4) Resolution Test

It used to be that there were only a few screen resolutions for a developer to worry about. Now, there are dozens. Resolution Test lets you change the resolution and test your application in each one. Don’t forget that you have more than just desktops to test. You also need to test for smaller screens including smartphones and tablets.

5) Bug Magnet

When testing forms, you have to enter data into each form element and submit for testing. This can be tedious and time consuming if you have several data set permutations. With Bug Magnet, you can store form values and have the extension automatically enter them when you open the form. It’s similar to Chrome’s auto-fill except it’s specific for testing.

6) Advanced RESTClient

Any developer knows that testing APIs is cumbersome. Advanced RESTClient lets you test APIs and their different input and output directly from your browser instead of using a testing wrapper. It makes it much easier to understand how an API will work with your end users.

7) Ghostery

Ghostery shows and blocks the third-party trackers and scripts embedded in your pages. Developers don't often account for external services failing when they code for errors and fault tolerance, and blocking those scripts helps you see what users experience when third-party resources don't load.

8) XSS Rays

Have you tested your site for vulnerabilities? XSS is one of the most common attacks on the web. XSS Rays is a penetration tester that helps you identify JavaScript XSS vulnerabilities on each of your pages. It checks for XSS vulnerabilities for both GET and POST events.

9) Site Spider

Once your site grows bigger, you lose track of pages and 404s. You need something to find broken links, so you can either correct the link or remove it. Site Spider goes through your site and finds any broken links. This extension makes it much more convenient to find 404s from broken internal links.

10) Todoist

QA and testing are a long, tedious part of development. You can't fix everything at once, and each bug needs a priority. Todoist helps you track the tasks that need to be done and prioritize your bugs, so your developers can start on the most critical ones first.


Testing Your WordPress Site for Scalability

Scalability is the term given to your site’s ability to handle small and large loads of traffic. For the most part, your site handles small amounts of traffic just fine, but what would happen if you suddenly had a spike in visitors? Would your site fail? Would it suffer performance issues? Maybe the spike in traffic is completely invisible to your users — which is what you want to happen. What do you do to ensure your site is at its top performance even with unforeseen spikes in traffic?

Even small sites can have scalability issues. Typically the owner runs a small WordPress blog with little traffic, and then one of their posts goes viral. It could also be a product that goes viral in an ecommerce store. You can't predict this type of traffic if you aren't actively promoting, so you often don't notice the problem until visitors start complaining about performance.

The main issue is that you don’t want your visitors reporting performance issues. By this time, you’ve already lost potential revenue and readers due to the issue. Most users just bounce from the site and find your competitor rather than take the time to send you a message. The only way to avoid this issue is to test and plan for it in the beginning.

Creating Scalability Tests

The concept of scalability is more intuitive than creating the tests themselves. Before you decide to test your site, you need to determine the kind of tests you want and the sections of your application you want to test, should you have multiple sections such as an internal and external application.

Since scalability generally focuses on load, you need to create tests that emulate high-volume traffic. Luckily, there are dedicated tools for testing scalability. To get a general idea of what you need to do to test your site, here are a few tips:

  • Determine your highest expected volume. This isn't always easy: a small blog likely won't see a million visitors in a day, but it's conceivable that you could one day have a thousand or several thousand visitors in a day. You need this number for your scalability testing tools.
  • Create a testing environment. This environment should emulate production. This means that bandwidth and server power should be similar. The same applications should be loaded on the server. Tests will emulate users connecting to and using the application.
  • Be prepared to fully analyze reports and make changes to your system. If you haven’t thought of scalability until now, then chances are that you’ll need to upgrade your system.
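
Dedicated load-testing tools handle this at scale, but the core idea fits in a short script. A standard-library-only sketch that points concurrent requests at a throwaway local server (the request and worker counts are arbitrary; a real test would target your staging environment):

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Tiny stand-in for a staging server.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, format, *args):
        pass  # keep the output quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# Emulate 200 requests from 20 concurrent "visitors".
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(timed_request, range(200)))

server.shutdown()
print(f"requests: {len(latencies)}, worst latency: {max(latencies)*1000:.1f} ms")
```

The numbers that matter are the worst-case and percentile latencies under load, not the average; that is what your visitors feel during a spike.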


Finding the Right Tools

You can get a simple ping tool that detects if your site crashes, but these tools don’t support scalability functions. You need a tool to test for high-volume traffic. These tools are usually scripting tools with a GUI to help you configure your tests.

Since you will generally run them on your testing environment, you can configure your tools for any amount of volume. If you decide to use the tests on your production environment, remember that it could crash your site. Only use them on production during off-peak hours and expect some possible stability issues.

There are a number of scalability testing tools available. Note that most are aimed at web applications, but you can find similar tools for other services such as email.


What Happens When You Don’t Scale?

If your site fails testing, you'll need to do one of three things: scale vertically, scale horizontally, or optimize your code. Code optimization can be done in conjunction with the other two, but you need your developers to run through the code to determine whether slow load times are the result of poor coding.

When you scale vertically, you add resources to existing servers: another CPU, more memory, a faster hard drive, or maybe even a new motherboard, which is uncommon but not unheard of. For the most part, scaling vertically means adding more memory or CPU power to compensate.

Horizontal scaling adds more servers to your web farm. It’s the most expensive, but it’s sometimes necessary.

You can add resources to your site’s backend, but at some point adding resources has no positive effect. If you find that you add resources only to have the same traffic crash your servers, then it’s likely that your code is inefficient. If you have WordPress, it’s likely a poorly performing plugin. You can go through these steps to see if something in your WordPress site is the crux of your problem.

Don’t forget an affordable way to help scaling is using cloud resources. The advantage of the cloud is that you scale automatically and pay only for the resources that you use. With the cloud, you aren’t limited to one server in one location. You have the advantage of data centers that span across the globe.

In summary, scalability should never be overlooked, especially if you plan to grow. Your goals should always focus on the growth of your business, but you need technology to keep pace with that growth. A CDN can instantly improve your site's performance during traffic spikes, but you must first test your site to determine how many resources you need to keep up with traffic increases.

Tested your site for scalability? Better ensure it looks good for each of your visitors too: here are 10 Chrome extensions for site testing.


Perceived vs Actual Performance: Your Website’s First Impression is Everything

You've heard the cliché that perception is everything, and it's the same for websites. Your website's first impression can either push users away or turn them into customers. Speed is a large factor in first impressions, but your website's perceived speed isn't always its actual speed. You can make small changes to your site's code to make it seem to load much faster than it actually does and hide some of its speed imperfections. The end result is a first impression of a fast-loading site, even if measured load times remain the same.

You might think your site is fast enough, but a survey conducted by Search Engine Land showed that even a half-second delay in load time affected business metrics. A one-second delay impacts user engagement by almost 2%, and a four-second delay impacts user clicks by over 4%. These may seem like small numbers, but across thousands or millions of visitors a month, those percentages represent a large number of potential customers.

So, what do you do when you’ve exhausted every measure to make your site’s performance as optimal as possible? Here are a couple of suggestions that will make load times seem faster to users.

Lazy Loading

Lazy loading is a trick using Ajax. It’s mainly used with images or parts of your site that require a high amount of calculation before content displays.

Take a page that shows reports. Reports usually take a lot of calculations on the backend, and then the database returns values to the frontend. Depending on the efficiency of your SQL, it can take an inordinate amount of time before the report values are returned to your frontend. Without lazy loading, your server waits for these values to return before displaying content.

The same issue applies to images. Suppose you have a site that requires high-quality images. The higher the quality of your images, the more space they use and the longer they take to display in a browser.

If you’ve never worked with Ajax, you’ve still probably seen an Ajax loading screen. Ajax components load with the spinner most users recognize. The spinner indicates that content is loading. The main reason developers work with Ajax is that you don’t need to perform an entire page refresh when making calls to the server. You can send a request from a div container and have only that div’s content update.

With lazy loading, components of your page load in parallel with your main page content. Faster loading components such as plain text, backgrounds, colors, and smaller images appear quickly, while your longer-processing components show a spinner until they finish loading.

If you use lazy loading for components below the fold (the part of the page users must scroll down to see), your users might never even see the spinner. The perception is of a working page: users can start reading before large images display or reports finish calculating. This keeps users engaged even when other components take a long time to load.
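
One common way to implement lazy images, sketched here under the assumption that your markup stores the real URL in a `data-src` attribute (e.g. `<img src="placeholder.gif" data-src="full-size.jpg">`), is the browser's IntersectionObserver API:

```javascript
// Swap the deferred URL into src, which triggers the real download.
function loadImage(img) {
  if (img.dataset.src) {
    img.src = img.dataset.src;
    delete img.dataset.src; // mark the image as loaded
  }
  return img;
}

// Watch every img[data-src] and load it shortly before it scrolls into view.
function initLazyImages(doc) {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        loadImage(entry.target);
        obs.unobserve(entry.target); // each image loads only once
      }
    }
  }, { rootMargin: "200px" }); // start loading 200px before visibility
  doc.querySelectorAll("img[data-src]").forEach(img => observer.observe(img));
}
```

The `rootMargin` value is a tunable assumption: a larger margin starts downloads earlier, trading bandwidth for a smaller chance the user ever sees a placeholder.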

Image Compression

The image compression debate always comes down to image quality. Photography sites, travel sites, and others that depend heavily on image quality have to determine whether compression is worth the effort, and how far they can push it without noticeable degradation.

The JPEG format offers smaller file sizes, but PNG has better flexibility and quality. Luckily, there are several open-source plugins that help you reduce image size while preserving quality.


Move Web Font CSS to the Top of the Page

For people who follow Google's PageSpeed suggestions to the letter, this one will be counterintuitive. Google suggests placing stylesheet files at the bottom of the page. This avoids render blocking, where styles must finish loading before the HTML displays. It might speed up the page, but it creates a problem called FOUT: a flash of unstyled text.

You've probably seen FOUT before: text loads but remains unstyled for a second or two until the web font styles arrive. The issue is that the browser may display a broken-looking page until the font styles load.

The simple way to fix a “broken looking” page with FOUT is to move the web font styles to the top of the page.
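
A hedged sketch, with placeholder file paths: keep the web font stylesheet in the `<head>`, optionally with a preload hint so the font file itself starts downloading early.

```html
<head>
  <!-- Preload the font file so it starts downloading immediately -->
  <link rel="preload" href="/fonts/site-font.woff2" as="font"
        type="font/woff2" crossorigin>
  <!-- Web font CSS stays at the top so text renders styled on first paint -->
  <link rel="stylesheet" href="/css/fonts.css">
</head>
```

The `crossorigin` attribute is required on font preloads even for same-origin fonts; without it, browsers download the file twice.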

This might affect your actual speed, so test your pages again after you make the change. If the change makes only a minor difference, you're probably safe keeping it, especially if you believe you're losing users because web fonts load improperly.

Don’t Ignore Your Actual Load Speeds

Your perceived speed is just as important as actual speed, but you shouldn’t forget about actual speed and compromise your site’s performance. If you make any changes with site speed, always test it before you commit it to production.

Get the latest in Web Performance
Subscribe now to receive the latest news, tips and tricks in web performance!
The information you provide will be used in accordance with the terms of our privacy policy

A Few Considerations Before Moving Your Site from HTTP to HTTPS

If you still use HTTP, it’s time to move to HTTPS. With encryption, you build better trust with your readers. It used to be that having encryption on certain payment processing pages was enough, but now users expect to see the entire site encrypted. Once you decide to make the move, here are a few considerations to ensure that your site works smoothly. If you’re not sure if you should make the effort, here is why you should be using HTTPS.

Create 301 Redirects

A 301 redirect tells search engines and browsers that site pages moved from one location to another. In this example, the new location is the HTTPS version of the page. Users and search engines have your old HTTP page saved. When they open the page, having a 301 redirect in place sends the user to the HTTPS version.
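
If your site runs on Apache with mod_rewrite enabled, a site-wide 301 in `.htaccess` might look like the sketch below; test it on a staging copy before deploying.

```apache
RewriteEngine On
# Redirect any plain-HTTP request to the HTTPS version of the same URL
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

The `R=301` flag marks the redirect as permanent, which is what tells search engines to transfer the old page's ranking signals to the new HTTPS URL.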

Update Links on Your Site

If you use relative links, this won't affect you much. Absolute links include the protocol, so you need to go through each of them and change the HTTP portion of the link to HTTPS. This is a simple find and replace, but it's tedious to track down every link on your site.

You can use Screaming Frog to help find broken links on your site.
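
If your site runs WordPress, WP-CLI's search-replace command can update absolute links in the database in one pass; the domain below is a placeholder, and the `--dry-run` flag previews the changes without writing anything.

```shell
# Preview which rows would change, then re-run without --dry-run to apply
wp search-replace 'http://example.com' 'https://example.com' --dry-run
```

Always back up the database before running the real replacement.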

Update Links on Social Media

You have 301 redirects in place, but you should still update your social media links. It’s better to point directly to the landing page rather than point users to a page that redirects. If you have dozens and dozens of links to go through, you can prioritize the social media update by the links that bring in the most traffic.

Remember the referrer (spelled REFERER in server logs) when you move to HTTPS. When a user follows a link from an HTTPS page to a plain HTTP page, the referrer is lost; links between HTTPS pages preserve it. One advantage of moving your own site to HTTPS is better referrer reports, since other encrypted sites can pass referrer data to you.

Check Your robots.txt File

Your robots.txt file controls the way search engines crawl your site. Indexing and crawling are not the same thing in the search engine world, but if you block URLs in robots.txt, search engines won't crawl those pages. Truly private pages belong behind a username- and password-protected area of your site; robots.txt is for blocking unimportant pages from being crawled to save resources.

In WordPress, you don’t need search engine bots crawling admin pages, so it’s common to block the admin folder. Any other folders that you don’t need indexed can be added to the robots.txt file.
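
A typical WordPress robots.txt along these lines blocks the admin folder while still allowing the admin-ajax endpoint that some frontend features rely on:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
```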

When you move your site or make structural changes, always check the robots.txt file to make sure it’s still valid.

Review Canonical Tags

Canonical tags are used by search engines, not users. They tell search engines which version of a page to index. They are used to distinguish HTTP and HTTPS versions, and you can also use them to tell search engines to index one page when several share the same content.
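
A canonical tag is a single `<link>` element in the page's `<head>`; the URL below is a placeholder pointing at the HTTPS version you want indexed.

```html
<link rel="canonical" href="https://www.example.com/sample-page/">
```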

With WordPress, you can use a plugin to manage canonical tags. The Yoast and All-In-One SEO plugins both let you work with canonical tags and make them easy to configure with little input from the admin.

These are just a few considerations before moving your site. Don't forget to review and test the site before you finalize the move. Moving a WordPress site to HTTPS is much simpler than moving a complex custom site: most of the work can be done with plugins, and the process is smooth and intuitive.


Why You Should Be Using HTTPS in the Cloud

Years ago, moving to HTTPS meant performance degradation on your web server. Today, with HTTP/2 and CDNs, moving to HTTPS makes little to no difference in performance. Not only do you keep site performance, but you also offer better security for your users, and free shared SSL and custom SSLs are available too. You know the basics, but here are some other reasons it's time to move to HTTPS that you might not know.

Google Uses SSL as a “Tie Breaker” in Rankings

John Mueller, a Google Trends Analyst and "face" of Webmaster Central, said in one of his hangouts that Google uses SSL as a tie breaker, and Gary Illyes confirmed in a tweet that, all things being equal, SSL breaks the tie.

For instance, suppose you're battling a competitor in search. You both offer the same products or services and compete for the same ranking spot, and both sites run plain HTTP with no SSL. If the two sites frequently trade positions in the rankings, enabling SSL could give you a small edge. With equal quality signals on both sides, it's one more advantage to break the tie.

Search engines have hundreds of ranking factors, so while HTTPS won’t immediately shoot you to the first page, it will be a quality signal that helps your search engine position.

Better Analytical Data

Most site owners have some kind of tracking service on their site. For Google search and organic traffic tracking, you probably have Google Analytics. When visitors arrive from encrypted pages (people who link to you, or search engines listing you in results) and your site is still on HTTP, referral data, such as where the visitor came from, is lost. This can be a disadvantage for your marketing.

Most of the web is moving toward SSL/TLS. If any of these encrypted sites send you traffic, you lose the referral information unless your site is encrypted as well. Moving your site to an encrypted version gives you better analytical data as visitors pass between sites.

If you find it difficult to reconcile your Google Search Console data with Google Analytics, having referral data might make it a bit easier. Note that these two tools rarely report exactly the same numbers, but it will help your reporting efforts.

Better Privacy for Your Users

You already know that encrypted data can't be read without the decryption key, but this matters beyond simple privacy. It's recently been reported that ISPs have been selling web browsing history. While this isn't a new practice, it has recently received attention from several news outlets. ISPs take the information and browsing habits of their users and sell it to third parties. This information is used for marketing, but it could also be used for nefarious purposes.

Most users don’t appreciate a third party reading their browsing history, and encryption helps stop this activity. The ISP might be able to see sites accessed by the user, but they can’t see the information exchanged between two parties.

There are several other reasons to encrypt data. The wrong person could be reading data as it crosses the network wire, including administrators in an organization where your users are employed. They can see usernames and passwords, email content, and other private data that should be kept from prying eyes.

Encrypted traffic protects all of your users' data, including their browsing habits, the data they send to other sites, and the credentials they use to log in to various websites.

Shared SSL with Your Service

SSL certificates are expensive, but moving to our service gives you free shared SSL certificates. If you're currently on a slower, single-server setup, need a CDN, and don't have SSL installed, it's worth moving your site to our content delivery network and taking advantage of our SSL offer at the same time.

Because you're working with a CDN, the old-school problem of performance degradation no longer applies. You can have speed and security on your site regardless of size and traffic.


Identifying Plugins That Could Be Hurting Your WordPress Site Performance

Site performance is one of the leading user engagement and retention factors for WordPress sites. Anything over a three-second load time is considered inefficient, and you could be losing readership and customers because of it. Unless your WordPress blog is completely customized including plugins, you probably have a few third-party plugins that could be causing your site to be sluggish.

Any third-party code allowed to run on your web server can have an adverse effect on your load times. We showed you how to test your site for performance issues, and in this post we'll help you weed out WordPress plugins that could potentially harm your performance.

First, Let’s Eliminate Some Myths

There are plenty of myths around PHP, WordPress, and plugins. The first is that each plugin you add hurts performance. It's not the number of plugins that causes an issue; it's the way the plugins are written. A well-written PHP plugin will perform smoothly, while a poorly coded one will hurt both your site's performance and its security. You can have 1,000 well-written, streamlined plugins installed and your site will run fine. Just one poorly coded plugin makes a difference.

You won't know whether a plugin is coded well unless you're familiar with PHP, the WordPress API, and efficient code. Failing that, you can install plugins one at a time and test performance after each using Google's PageSpeed Insights.

Another common myth is that WordPress can’t scale and is only for small blogs. While that might have been true years ago, it’s not true at all anymore. WordPress scales well for sites that have a vast readership as long as you follow performance best practices and host the site on a CDN.

Finally, even popular plugins with thousands of downloads have errors. Always keep your plugins up-to-date with the latest patches. This will fix any major security and sluggishness issues. If you find a popular plugin is a main performance problem, it’s time to find an alternative or even have a developer customize one of your own.

Plugins with Heavy Amounts of Database Queries

Writing efficient, streamlined SQL queries is an art form. You can have the best PHP coders in the world, but that doesn't mean their SQL is the best for database performance. In most cases, you need a database expert to analyze queries for efficiency. Table design and indexing also make a huge difference in performance.

Most plugins must query WordPress tables at some point. If you download a plugin that relies on several database queries, you should take a look at the SQL code. It might not make much of a difference if your site has little traffic, but once your site gains some traction, poor SQL can have a tremendous effect on performance.
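
The classic offender is the N+1 pattern: one query fetches a list of posts, then the plugin loops and runs one more query per post for its metadata. Using the standard WordPress tables, a sketch of collapsing that into a single round trip:

```sql
-- Instead of: SELECT * FROM wp_postmeta WHERE post_id = ?  (once per post)
-- join the tables so one query returns posts and metadata together:
SELECT p.ID, p.post_title, m.meta_key, m.meta_value
FROM wp_posts AS p
JOIN wp_postmeta AS m ON m.post_id = p.ID
WHERE p.post_status = 'publish';
```

With a hundred posts, this turns 101 queries into one, echoing the advice at the top of this article about reducing calls to the server.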

If you think SQL query performance is an issue, try Query Monitor.

Remote Calls or “Bandwidth Leeching”

Some developers pull code, images or files from other sites without ever pulling those resources to the local directory. Every time the plugin must make the call to another web server, you put your performance in the hands of another server administrator. It’s common to keep some resources in the cloud such as CSS and JS files, but pulling assets such as images off of other servers is called bandwidth leeching and it’s frowned upon among the webmaster community.

If you find that your plugin is making too many remote calls, you can re-code the plugin to pull assets from a local directory after you copy them to your local server. A CDN can also help improve performance of external files when you must include them in your pages.

Including Assets That Aren’t Needed

Every JavaScript and CSS file must be loaded by the browser each time a WordPress page loads. If your pages don't need certain assets, those assets shouldn't load. It's common for developers to load all assets regardless of whether they're needed. It's not always the main cause of a slow site, but it can be a contributing factor.

An easy way to identify the assets that load on a page is to view the page's source code in the browser. You may need to customize plugin code to ensure that only the assets a page actually needs are loaded; some plugins make the mistake of loading outdated libraries or loading the same library multiple times. This kind of mistake can slow down your site and even cause bugs.

Disable Plugins One-by-One

If you're not sure which plugin is causing performance issues, disable them one by one, running a speed test after each. This will let you identify which plugins are causing slowness.

After you identify slow plugins, you can either customize the code or choose an alternative. If you customize the code, you can no longer download updates from the developer. This means that you must maintain changes yourself. Some plugin developers will work with you to customize code, but in most cases it’s easier to use an alternative.

You can also make changes to your overall site to fix performance issues unrelated to plugins. Here are some tips for better performance overall:

  • Host your WordPress site on a CDN
  • Use a WordPress caching plugin (See our W3TC Integration guide)
  • Minimize image size or use compression (plugins are also available for compression)
  • Make your site responsive for mobile
  • Ensure your theme is well coded and efficient
  • Monitor any hotlinking and disable it
  • Use penetration tools to check for any vulnerabilities
