How Search Engines Work To Rank Cleveland Businesses.

Chapter 2: Giving Search Engines What They Want


Last Updated: October 17, 2022

By: Brian McCracken

Imagine the internet is a huge, never-ending brick driveway. Now imagine there is a spider at the end of your driveway that wants to get to your garage.

It’s going to start at the apron, out by the street, and one little spidey-step at a time walk towards your garage. Pretty soon Mr. Spider is going to have to leave the brick he’s currently on in order to get to the next brick in front of him. He’s going to reach way out and step onto that next brick, and then do it again and again until he’s where he wants to be.

That’s exactly how search engines work, except they have lots of spiders.

These spiders go out onto the internet and find a webpage. They go all the way through the webpage, reading every word, looking at every image, until they find a link to another site. Then they use that link to get to the next page, and do the same thing all over again. This process is what is known as crawling.

In this analogy, each webpage is a brick, and the links are how the spiders get from one place to the next, discovering new content that they then determine whether or not to add to the search index.

Search engine spiders use links to crawl the 'World Wide Web' in order to find new websites and webpages to index and feature in their search results.

What’s a Search Index?

A search index is a big library of information about websites and webpages on the internet. It includes information like:

  • The Address To Get To The Website or Webpage
  • The Title Of The Page
  • The Content That’s On The Page
  • And Much More…

As we talked about above, these search indexes are built by search engines when they send their spiders (or crawlers) out into the internet to find new content.

It’s a never ending process, as the search engines want to find everything they can so that when someone goes to them to search for an answer or a product, they can return the best result.

So what is a Search Engine?

A search engine is a computer program that evaluates its search index (the list of webpages the spiders built) and ranks those pages as results to show your customers when they search for services or products that your company offers. These computer programs are commonly referred to as search algorithms.

The goal of every search engine is to provide their users with the best, most accurate, most helpful list of websites for a given search query. This is otherwise known as search relevancy.

The more relevant your website is to answering a given search query, the more likely it is to be shown at the top of search results. A large part of SEO is trying to improve your webpage’s search relevancy for terms that you want your customers to find you listed under.

Wait, what’s a Search Query?

Oh, did we skip that part? Sorry.

A search query is the phrase or combination of words that people type into the search bar on a search engine’s homepage. The combination of words people search with is referred to as keywords.

Things have changed over time, and now people will often search just by typing what they are looking for right into the browser’s search bar, or asking their phone to find the answer for them. The concept remains the same no matter how people get their question to Google, Bing, or whichever search engine they prefer.

The list of websites that the search engine shows the user after they’ve submitted their query are known as the search results, which are ranked by relevancy as we mentioned above.

What is Search Engine Ranking, or Relevancy?

When someone searches using a search engine, that search engine goes through their index of webpages and delivers what they think are the most relevant results, in order of descending relevancy. That order of relevancy is what is known as ranking.

Relevancy, or ranking, is determined by looking at key factors of a webpage. Those are known as ranking factors.

What are the Ranking Factors search engines use?

Unfortunately, the search engines don’t tell us all of the factors they use when deciding a webpage’s relevancy. If they did, everyone could cheat and unfairly have their website ranked higher than it deserves to be.

With that said, we do know what some of the ranking factors are.

Content Quality

Content is probably the single most important ranking factor. The quality of your content can be looked at as the usefulness of your webpage in answering a given search query. Quality can be determined by the depth of information on your webpage, how well it is written, and how well it matches what the user is really hoping to find.

Content depth can be looked at objectively by asking “How well does this answer someone’s question, or explain a given topic?”

If someone wanted to know how to get a quote from a roofer, and your roofing website explained the entire quote process step by step, that might be considered to be a webpage with very good content depth in comparison to a webpage that just had your phone number on it.

Content matching can be looked at objectively by asking “Does this webpage match the need of the user?”

Again, if you had a roofing business in Cleveland, Ohio, a search engine wouldn’t want to show your webpage as a result to someone looking for a roofer in Las Vegas, Nevada. Even though they are looking for a roofer, and you’re a roofer, the distance between them and your business is too great for you to be a useful result for the user to see. It wouldn’t be a good match.

Content Freshness

Content freshness essentially looks at how recently your content was added to the site, or when it was last updated in a meaningful way. Freshness is a search-query-dependent ranking factor, meaning that how important it is can change based on what the user searches for.

Freshness would matter if someone was searching for “Things to do in Cleveland this weekend” because they need recent information, but it wouldn’t be an important factor for a search such as “How to change my car’s oil.”

Page Speed

Page speed is a term used to measure and describe how quickly a web page loads when someone visits your website. The speed of your website is measured separately for desktop users and mobile (cell phone) users.

The faster your webpage loads, the better perceived user experience someone will have with your website, and because search engines want their users to have good experiences, the more eager they will be to feature your webpage over your competitors in the search results.

User Experience

User experience is a set of criteria and factors that can be best described as measuring the overall impression someone will be left with after visiting a page on your website.

If your website is slow, or has a bunch of spam and ads all over it, users typically won’t view it favorably. However, if your website loads quickly and has useful content right where people are expecting to find it they will likely view their experience with your website as a positive one.

Mobile Friendliness

This goes hand in hand with user experience. Mobile friendliness can be described as how well your website looks and operates for people viewing it from their cell phones.

When designing and developing websites, we put a great amount of priority on page speed, user experience, and mobile friendliness for our clients because of how important they are for your SEO and ultimate success.

Expertise, Authority, and Trustworthiness – EAT

EAT is a shorthand term to describe how accurate your website’s information is, how knowledgeable you are, and how well people should trust what they read or watch while on your site.

Search engines want to give their users results that are accurate, helpful, and meaningful.

If your website is small, that may hurt your potential for ranking in some industries because it is hard to develop authority and display expertise with very few pages of content. If the pages you have on your website have incorrect information on them, it won’t be viewed as trustworthy.

Before a Search Engine can rank a website, it has to be able to Crawl It.

Now that we have looked at ranking factors, we have to discuss your website’s crawl accessibility.

In order for your site to be ranked in search results, it has to make it into the search index first. That process is known as indexing.

The most important factors in indexing are making sure that your site is always online, something quality managed hosting can help ensure, and making sure that it does not have a robots noindex meta tag on its pages, something your web developer can help you with.

Robots Meta Tags

Robots meta tags are little lines of code in your webpage’s header that tell search engines how to treat that page. There’s noindex, so that search engines don’t index the page, noarchive, so that they don’t cache the page, nofollow so that they don’t follow the links on your page, and many more.

For instance, if you had a private client portal, you wouldn’t want just anyone on Google to find it. This is often done with the use of the noindex, noarchive, nofollow meta tag, as well as a Disallow rule in the robots.txt file.
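As a simple illustration, a page like that private client portal might include a robots meta tag like this in its header (the tag itself is standard; the page it sits on is just an example):

```html
<!-- Placed inside the <head> of a page you want kept out of search results. -->
<!-- "noindex" keeps the page out of the index, "noarchive" prevents a cached -->
<!-- copy, and "nofollow" tells crawlers not to follow the links on the page. -->
<meta name="robots" content="noindex, noarchive, nofollow">
```

Your web developer can add a tag like this to any page that shouldn’t appear in search results.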

What’s a Robots.txt File?

Your robots.txt file is a simple text file located in the root of your website (ex. yoursite.com/robots.txt). It tells robots like search engine crawlers how to treat your site, where they are allowed to go and where they aren’t, as well as how fast they should crawl your website.

Those rules are called directives, but are beyond the scope of this article and should be handled by your trusted web developer.

If your website does not have a robots.txt file, search engines will crawl the entire site, or all areas of it that they can access. If search engines find a robots.txt file, they will follow its rules. If they find a robots.txt file but there’s a technical problem with it, they may not crawl the site at all – so it’s important that this file is modified and handled by someone qualified to work on websites.

A robots.txt file lets search engines know what parts of your site they can crawl and possibly index for users to find in their search results.
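To give you a rough idea of what one looks like, here is a minimal example robots.txt file (the folder name and domain are made up for illustration – your own site’s structure will differ):

```text
# These rules apply to all crawlers
User-agent: *

# Keep crawlers out of the private client portal (example path)
Disallow: /client-portal/

# Tell crawlers where to find the sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Again, this is just a sketch – your web developer should write and test the real file, since a mistake here can block your whole site from being crawled.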

Site Navigation’s importance to Search Engines

When a search engine spider (or crawler) finds your website, one of the first things they are going to look at is your website’s navigation. It’s important to have all of your high priority pages listed in your website’s navigation, both desktop AND mobile.

If pages are missing from one device or another, the search engine may not index those missing pages assuming that they aren’t meant for all people and all devices.

It’s also important to be sure that your web developer puts the entire contents of your site’s navigation in plain HTML, the language used to build websites. Some developers will put navigation in JavaScript, and while Google has gotten much better at rendering and reading JavaScript files, they still aren’t perfect at it.

In addition to having your important pages in plain HTML, you will want to be sure that your site has a clean, concise information architecture.
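To illustrate what “plain HTML navigation” means, here is a minimal example menu (the page names and links are examples only):

```html
<!-- A navigation menu written in plain HTML, which crawlers can read
     directly without rendering any JavaScript. Links are examples. -->
<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/services/">Services</a></li>
    <li><a href="/about/">About Us</a></li>
    <li><a href="/contact/">Contact</a></li>
  </ul>
</nav>
```

Because the links are right there in the page’s code, a search engine spider can find and follow every one of them without any extra work.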

Clear and easy to understand site navigation will help both users and search engines understand where to find information on your site, and how different pages on your website are related to each other.

Information Architecture: For people who aren’t nerds

Information architecture can simply be described as how you have content laid out and interlinked on your website.

A simple way to think about it is this: if you own a sports store here in Cleveland, all of your baseball bats should be under a baseball bats category. All of your baseballs should be in their own category. And all of your baseball equipment should be in its own master baseball category, separate from football or hockey equipment, for instance.

That way, when a customer or a search engine visits your site to find all of your baseball gloves, it makes sense where to find them. It helps people use your site easily, and it helps search engines know where to find content on your site so that it can index and rank it to show users.

Sitemaps – everyone needs a map!

Imagine a map that had every city on it. That’s kind of what your website’s sitemap is.

It’s essentially a giant list of every page on your website that you want search engines to find, index, and rank so that your customers can find them.

Sitemaps have priorities on them, so you can tell search engines which pages are the most important. Better yet, you can even submit your website’s sitemap directly to search engines so that they know about any new pages you have created or posts that you’ve published.
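For the curious, a sitemap is usually an XML file. Here is a small example with two pages in it (the web addresses, dates, and priority values are made up for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page; priority ranges from 0.0 to 1.0 -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2022-10-17</lastmod>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/services/</loc>
    <priority>0.8</priority>
  </url>
</urlset>
```

Most websites don’t need this written by hand – your web developer or your site’s software can generate and update it automatically.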

Sitemaps are important for helping search engines find and index your website's content.


Wow, that was a lot! Good job if you made it this far! Hopefully now you know how search engines work and what they do.

If you’re ready to keep going, now would be a great time to go on to the next chapter.

Stumped by something you read here? Want to do it, but don’t know how? We can probably help. 🙂