SEO Fight Club – Episode 165: Stages of Search and SEO Q & A
“Searching The Four Stages And Beyond”
– Today we discussed the four stages of search: crawling, rendering, indexing and ranking.
– We recognized that there is an additional stage of search, discovery, which wasn’t included in the article.
– Crawling is a complicated process with many moving parts, including mobile and desktop crawlers, user agents (with or without the Chrome designation; see the sketch after this list), plain-text sitemaps, XML sitemaps, and RSS feeds.
– With big updates rolling out, Google Search Console can show disparate reports and signals for new pages versus already-indexed content.
– Additionally, for a time, requesting indexing through Google Search Console was found to actually hurt if you wanted a page indexed quickly.
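A minimal sketch of telling those crawler flavors apart from server-log user-agent strings; the classification rules and abbreviated UAs are simplifications for illustration, not Google’s full published list:

```ts
// Hypothetical helper: classify a Googlebot hit from an access-log
// user-agent string as mobile or desktop, and note whether it carries
// the Chrome designation of the evergreen rendering crawler.
type CrawlerHit = { flavor: "mobile" | "desktop"; evergreen: boolean };

function classifyGooglebot(userAgent: string): CrawlerHit | null {
  if (!userAgent.includes("Googlebot")) return null; // not a Googlebot hit
  return {
    // Mobile Googlebot announces an Android device with a Mobile token.
    flavor: /Android.+Mobile/.test(userAgent) ? "mobile" : "desktop",
    // The evergreen crawler includes a Chrome/<version> token.
    evergreen: /Chrome\/\d+/.test(userAgent),
  };
}

// Abbreviated examples of the two flavors:
console.log(classifyGooglebot(
  "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) " +
  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Mobile " +
  "Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)); // → { flavor: "mobile", evergreen: true }
console.log(classifyGooglebot("Googlebot/2.1 (+http://www.google.com/bot.html)"));
// → { flavor: "desktop", evergreen: false }
```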
==========
“Website Discovery: Knowing What Google Knows”
– I recently read an article about different methods for website discovery and indexing.
– One point it makes is the distinction between what Google’s desktop bot can fetch and what its mobile bot can fetch.
– The article suggests publishing a safe test page to see whether Google discovers it.
– It also states that some sites may be treated differently from others, especially those with greater authority or longer histories with Google.
– Google has mentioned making multiple passes when rendering pages; this was another point in the article that caught my attention.
– Sitemaps come in both plain-text and XML versions, and they can either be hosted on your server or uploaded directly to Search Console.
– Uploading sitemaps to Search Console does offer diagnostic value, but it tends to take longer to show results than alternatives such as referencing them from a robots.txt file on your server; the sketch below shows both formats.
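A minimal sketch of the two sitemap formats discussed above, with placeholder URLs and file names; the robots.txt line at the end is how a crawler finds the file without any Search Console upload:

```ts
// Generate both sitemap flavors for the same URL list.
import { writeFileSync } from "node:fs";

const urls = [
  "https://example.com/",
  "https://example.com/products/",
  "https://example.com/blog/latest-post/",
];

// Plain-text sitemap: one absolute URL per line.
writeFileSync("sitemap.txt", urls.join("\n") + "\n");

// XML sitemap: the same URLs in the sitemaps.org protocol format.
const xml =
  `<?xml version="1.0" encoding="UTF-8"?>\n` +
  `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
  urls.map((u) => `  <url><loc>${u}</loc></url>`).join("\n") +
  `\n</urlset>\n`;
writeFileSync("sitemap.xml", xml);

// In robots.txt, a single line points crawlers at either file:
//   Sitemap: https://example.com/sitemap.xml
```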
==========
“Breaking The Search Barriers With Google Rendering”
– Google uses a two-pass process to render webpages: the first is a fast pass that looks only at the plain HTML source, and the second takes longer, gathering external images, scripts, and style sheets.
– In testing, the window between the Document Ready event and the moment Google snapshots the content is about 20 seconds.
– Tests conducted from multiple global locations yield similar results; while content appears to need roughly 20 seconds to make it into the index, Google accelerates the JavaScript clock so its renderer gets there much faster in real time (a sketch of this test follows the list).
– On JavaScript-framework websites, indexing issues demand close attention: when the rendering phase turns off, every product and category page can fall out of search because there is no content in the raw HTML source.
– With proper tests, however, it can be proven that such issues stem from a broken state on Google’s side rather than user error.
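A minimal sketch of that kind of timing test (the marker text is made up): timers inject uniquely labeled content at increasing delays after Document Ready, and whichever markers later appear in Google’s rendered snapshot show how far past readiness the renderer waited:

```ts
// Inject searchable markers at increasing delays after the page is ready.
// Checking which markers show up in the rendered snapshot (e.g. via the
// URL Inspection tool) reveals the renderer's effective cutoff.
const delaysInSeconds = [5, 10, 15, 20, 25, 30];

document.addEventListener("DOMContentLoaded", () => {
  for (const delay of delaysInSeconds) {
    setTimeout(() => {
      const p = document.createElement("p");
      // A unique, searchable token per delay, e.g. "render-marker-20s".
      p.textContent = `render-marker-${delay}s`;
      document.body.appendChild(p);
    }, delay * 1000);
  }
});
// Note: because Google fast-forwards the JavaScript clock, these timers
// can all fire in far less than 30 real seconds inside the renderer.
```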
==========
“Getting Caught Up In Google’s Web: Navigating Q4 SEO Updates”
– I understand that SEO strategies should be set in place by mid-to-late September and not touched afterward, to avoid a lengthy wait for re-indexing.
– My experience with the Q4 updates has been pretty frustrating when it comes to SEO.
– Indexing is like the card catalog at a library: only once topics are identified does your page become properly indexed and findable.
– Carolyn’s research has shown that there can be intermittent issues with pages being crawled properly even when everything seems alright.
– Google may recognize an “opportunity” of demand from keywords on my page even when that wasn’t my intention, as happened when two mentions of Semrush caused impressions and clicks on broad-match Semrush keywords instead of the target keyword I was tuning for (the Search Console API sketch below shows one way to spot this).
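A minimal sketch, assuming the googleapis Node client and credentials with Search Console access, of pulling the queries a single page gets impressions for; that is one way to catch an unintended broad-match association like the one above (the site, page URL, and date range are placeholders):

```ts
import { google } from "googleapis";

// List which queries a given page earned impressions and clicks for.
async function queriesForPage(siteUrl: string, pageUrl: string) {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/webmasters.readonly"],
  });
  const searchconsole = google.searchconsole({ version: "v1", auth });

  const res = await searchconsole.searchanalytics.query({
    siteUrl,
    requestBody: {
      startDate: "2023-10-01", // example Q4 range (placeholder dates)
      endDate: "2023-12-31",
      dimensions: ["query"],
      dimensionFilterGroups: [
        { filters: [{ dimension: "page", operator: "equals", expression: pageUrl }] },
      ],
      rowLimit: 100,
    },
  });

  // Each row pairs a query with its impressions/clicks for this page;
  // unexpected queries here signal an unintended keyword association.
  for (const row of res.data.rows ?? []) {
    console.log(row.keys?.[0], row.impressions, row.clicks);
  }
}

queriesForPage("https://example.com/", "https://example.com/target-page/").catch(console.error);
```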
==========
“The Library Of The Web: Uncovering Rank And Relevance”
– Google might be forcing a page into an unintended keyword space because of just a couple of mentions on the page.
– The only way to diagnose this is through Search Console.
– Ranking is an interesting beast because it limits what can be found without special operators such as site:.
– For example, if you type in ‘games’ as your keyword, only a couple hundred results come up, even though far more pages are indexed.
– In the card catalog analogy, books need topics assigned to them and all of their cards placed in the catalog before anything is findable in the library.
– New books not yet classified or physically shelved in their section are still indexed, but they may not show up without relevant keywords or an author/title search.
==========
“Positioning Your Website For Improved Visibility Through Keywords”
– I’m exploring how search engine algorithms relate a keyword to a page’s address (URL).
– Once Google identifies a topic, it has to look at the topic’s associated keywords and index them.
– The title or URL can supply appropriate keywords for a topic, whatever that topic is.
– To diagnose why a page isn’t showing up in searches, you need to test with keywords the search engine considers relevant.
– Google knows the relationships between a keyword and a URL, which matters when separating indexing issues from ranking issues.
– It’s important to understand where on the page certain keywords should appear for search engines to pick them up (e.g., in h1 tags versus JavaScript variables; the sketch after this list shows the contrast).
– Google is able to differentiate topics from people and other entities based on their associated topics and keywords.
– By understanding how search engines view content, websites can position themselves better within specific categories or topics, leading to improved visibility online.
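A minimal sketch of that h1-versus-variable contrast (the keyword is a made-up example): a keyword that only lives in a JavaScript variable contributes nothing until it is rendered into the DOM, for instance into a heading:

```ts
// Invisible to search engines while it stays in a variable:
const topicKeyword = "vintage camera repair";

// Rendered into an h1, the same keyword becomes part of the DOM that the
// rendering pass can snapshot and index:
const heading = document.createElement("h1");
heading.textContent = topicKeyword;
document.body.prepend(heading);
```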