
How to Explain the Value of SEO to Executives — Whiteboard Friday

In today’s episode of Whiteboard Friday, Seer Interactive’s Larry Waddell discusses how you can translate the SEO work you do for your clients into how executives think of value — specifically, business value.

How to explain the value of SEO to executives


Video Transcription

Hello, Moz community, and welcome to another edition of Whiteboard Friday. I’m Larry Waddell, EVP of Business Strategy for Seer Interactive, and today I want to talk to you about how to translate the great work you do for your clients into how they think of value, specifically business value.

So let’s jump right in. Now to do it, I’m going to review two frameworks or two ways of thinking. One is the value pyramid. That’s where we’re going to start. But then we’re going to move from there to the four forms of business value, and I’ll walk you through that.

So starting with the value pyramid, and this is something I’ve used at Seer for years, and I’ve had the great privilege of leading the Analytics team at one point in time, the SEO team, thank you Wil, and the Paid Media team, thank you, Crystal, and through all of that, I’ve used a very simple construct. As you can see, there’s nothing terribly fancy here, but it’s a way to help our teams understand the work that they do and to understand the work the clients are asking us to do.

So to start, it’s a pyramid like any other, with four different layers, and for SEO you can think about the bottom layer as something like link building. It’s very important to do, extremely important, but it sits at the bottom of the pyramid. With link building, we get rankings. With rankings, we get traffic.

Traffic gets us engagement on the website. But we don’t stop there, of course. With engagement and traffic, we now want to focus on conversions. We get those conversions, and depending on what our conversion value is, that gets us to revenue. So really nothing earth-shattering there. But I want to introduce a couple of other concepts. So you might want to think about titles at your clients.

So if you’re down here, perhaps at the bottom of the value pyramid, maybe it’s link building, maybe it’s other low-level but very important tasks, you might be dealing with somebody who’s a specialist or a manager at your client day to day. As you move up this pyramid, where you’re talking about things like revenue and you’ve been in those meetings, you’ve had these people join your QBRs or they pop into a weekly call or a monthly call, and they have titles like director or VP or CMO.

Every once in a while, you might get a director of finance or a CFO in those meetings. You might also have noticed that those meetings tend to be more strategic. They tend to be focusing on things other than the nitty-gritty tactical that you might be grinding out day to day with your manager at the client.

These folks are worried about business problems. They’re worried about things that are impacting the trajectory of the overall business, of which SEO plays a very important part. Down here, perhaps less visibility into those things at the client, but down here we’re focused more on SEO problems, rankings, how much traffic are we getting, what are our conversions, content on the site, load speeds, and those sorts of things.

It’s not that these folks don’t care about that. They’re just perhaps removed from it, and they don’t necessarily understand how any of this stuff down here impacts the stuff that they care about unless we have a lot of explanation and we’ve all been there. So what do we do with something like this? Well, we can start to think about a hypothesis around value.

What if there’s greater perceived value on the part of these folks at the client, the higher up this pyramid you happen to be? So down here, again, important but perhaps less perceived value than strategic conversations that relate to business problems at your client. So what are the things to worry about down here?

There’s actually a lot to worry about down here. So in my experience, if you have engagements that tend to focus mostly on talking about, reporting on, and reviewing activity at the bottom of the value pyramid, you can get locked into just doing that for your client.

The way I like to think about it is this is the big kids table or the adults table perhaps. How do we get a seat at that table? Again, in my experience, if these folks don’t see a way to translate what you do into what they care about, you kind of get relegated down here.

Again, there’s nothing wrong with it. But the problem is, if we look at fee competition, it tends to be the case that there’s higher fee competition down here than there is up here. There tends to be more competitors down here. There’s less differentiation between providers down here. You’re more susceptible to being ChatGPTed, for example.

You’re more susceptible to getting big data out or automated out. You’ve seen some nibbling around the edges around things that you might characterize as tactical. More and more of it is becoming automated or good enough automation. Maybe it’s not perfect, but it’s good enough. Then I can save some money because down here there’s high fee competition.

You can see the robot there coming to get us. So the thing that we need to do is think about how we move up this pyramid, both in the work that we do and how we talk about the work that we do with clients. So, again, that’s moving up the value pyramid and tying more of our work to the business problems that these folks, VPs, CMOs, directors, CFOs care about.

Another way to think about this too is career progression. So if you start off in SEO, you might start down here. You might start doing content audits, and you might start doing link building and things like that. Again, very important things to do. But then over time, this also creates a path for you to think about where you can go, as you start to unpack how progressively more sophisticated work for clients translates into value for these people, because it turns out these people control the budget.

So you might also have discovered that when it’s time for renewals or pitching work, that oftentimes there’s a big boss and that big boss sits up here, typically not down there. So that gets us to value. Now, when we think about conversions and we’re thinking about revenue generated from our work on a client website, we tend to focus on the revenue of that, and revenue is extremely important.

We can’t get anywhere if we’re not generating revenue or more revenue for our clients. So revenue for the win. You can see it right there. Well, yes and no, and that gets us to the four types of business value. So there are at least four.

There’s a fifth that I’ll touch on briefly in a moment. But for the most part we can increase revenue, but ah, not so fast, gross profit. We can reduce cost. We can do something which is called cost avoidance. We can help our clients avoid cost. Then there’s insurance value.

In my time at Seer, I’ve found us in projects that do one or more of these things, oftentimes without our team even realizing that they’re creating this additional value. Now down here there’s a little bit of an economics lesson, and this is called a utility curve.

I’ll get to why that’s important in a second, and it’ll help me explain why some of these things over here create value, how that actually happens. But let’s go back to increasing revenue, gross profit. So questions you can think about if you’re focused here. Are you generating more profits for your client, or are you stopping at revenue?

I suggest we take a pause to really consider that. We’ve had situations where we might be generating more revenue for the client, but it turns out that the unit economics are such that at the gross profit level, the client might be losing money on everything we help the client sell. So even though everything over here has been geared towards up and to the right, better rankings, more traffic, higher engagement, more conversions, more revenue, all for naught if the thing we’re selling has crazy shipping costs and the shipping costs are turning out to be a loser for the client.

This person might not be aware of that, but these people might. So you kind of see how understanding a little bit more of what happens at this level can help you put what you’re doing here in greater context. So when I talk about gross profit, gross profit is basically revenue minus cost of goods sold or COGS generally speaking.

That’s gross profit. The thing to bear in mind is that gross profit pays the rent, not revenue. You still have to pay for the cost of the thing that your client produces, and what’s left over is what actually covers other expenses. We don’t often think that way when we’re doing SEO projects, and we’re strictly focused on revenue.
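
As a quick illustration of that point, here is a minimal sketch with invented numbers showing how a sale that looks great at the revenue level can lose money once COGS and shipping are counted:

revenue_per_unit = 40.00   # what the customer pays (hypothetical)
cogs_per_unit = 28.00      # cost to produce the item (hypothetical)
shipping_per_unit = 14.50  # the "crazy shipping costs" scenario (hypothetical)

gross_profit_per_unit = revenue_per_unit - (cogs_per_unit + shipping_per_unit)
print(f"Gross profit per unit: ${gross_profit_per_unit:.2f}")
# Gross profit per unit: $-2.50, so every conversion loses the client money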

You might want to think about gross profit, or at least about asking these kinds of questions, which is a signal. So that’s the other thing about this. We don’t necessarily have to become experts in our clients’ businesses, but asking better questions once we’re in the meetings, talking about this stuff with these people, signals, “Hey, I know your business. I understand that there’s more going on than just the SEO.”

Next is reducing cost. Reducing cost increases value, and that’s where this comes in. It’s a very simple and well-proven concept: the gain one might get from a little bit more of something doesn’t feel as good as losing that same amount feels bad.

So that’s called loss aversion. So it works in such a way that adding a little bit more profit makes the client better off. Losing profit can hurt a lot. It’s a concept that’s going to be helpful and let me sort of step you through it in more detail.

So, reducing cost. We make recommendations to our clients all the time, but we don’t always realize that those recommendations are saving the client money. It’s a useful thing, but all of our charts over here have to go up and to the right, so we don’t necessarily report on how much money I saved you.

We tend to report on how much revenue I generated. But there are sometimes conversations and recommendations that we can have with the clients that could be either new tools they can procure, or maybe there are things that we can be doing for a client, like content creation, that we can do at a lower cost than the client can do at another agency or even with an internal team.

So I gave a quick example here. Imagine you have a client that happens to be valued at, let’s say, 10 times their net income. If we save that client $1,000, that might seem like, “Oh, great,” one little thing thrown into an update at the end of the month while I’m talking about the revenue I’m generating. But the CFO will immediately recognize, “Oh, I’m valued at 10x. They just increased the value of my business by $10,000.”
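
To make that back-of-the-envelope math concrete, here is a minimal sketch using the assumed 10x multiple from the example:

valuation_multiple = 10  # client valued at 10 times net income (from the example)
annual_savings = 1_000   # cost we helped the client eliminate
value_created = valuation_multiple * annual_savings
print(f"Added business value: ${value_created:,}")  # Added business value: $10,000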

That might not be something that ever enters into the conversation if we’re just thinking about the SEO work that we do, but we just created a lot of value for somebody. The other one is cost avoidance. This is a tricky one, but it basically means: what recommendations can we make for a client that will allow them to not spend money on something in the future?

Quick example: some time ago we had a client that was faced with a choice. They were running an m-dot site and their dot-com site for desktop, and they maintained them both. But it came time to upgrade their CMS, and they had two options. One, they could upgrade both systems and continue to maintain a separate mobile and desktop experience.

But if they did that, they would have to hire additional developers and another agency to maintain both sites. The other option was to have a fully fluid site, a dynamic site that allowed them to avoid all that cost. We helped them understand which might be better for their SEO, but using this framework, also which might be better from an expense standpoint.

They chose to do option number two. They avoided all that extra expense, and that was cost avoidance. The reason why that’s tricky is because it’s hard to prove the counterfactual. We don’t know what they literally would have spent had they done both sites, but we know what they budgeted and we said, “Hey, by following this other recommendation, you avoided having to pay this.”

That’s actually budget savings. Budget savings means, back to our utility curve, that their profit went up, made them better off, possibly also allowed them to increase the value of their business. Lastly, there’s insurance, and this is one that is not terribly intuitive until you think about it.

So imagine you have a client that is contemplating a site relaunch. They have an internal SEO team, and they’re very confident in their abilities, but they might not have a lot of experience, or maybe not as much experience as you, with a migration or a site relaunch. So your client might engage you on a short-term project to help augment the capabilities of their internal team: maybe check on things, maybe be there the night of the conversion, and those sorts of things.

Now it’s an additional expense. So this person here might say, “Oh, I don’t know if I can get another agency through procurement because we already have an internal team helping us on our migration.” But the CFO might say, “Oh no, this is insurance. I will gladly pay an insurance premium to avoid a massive loss if this website is generating a lot of revenue or gross profit for me,” because businesses buy insurance all the time. If you think about all the insurance a business might spend money on, explaining that SEO project or that technical SEO support project in the context of insurance might make a ton of sense to them, if you’ve been invited to that table and you can explain what you do within that context.

So here, I won’t get into the details, but the idea is that the client will gladly pay this little bit of premium, in other words, your contract to support their internal team, to avoid the possibility of a large loss. You just have to convince the client that your presence will actually prevent that loss.
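
One hedged way to frame that trade-off is simple expected value. Here is a minimal sketch with invented numbers: if the premium (your contract) is smaller than the probability of a botched migration times the loss it would cause, the insurance is rational.

premium = 25_000           # hypothetical fee for the support engagement
p_botched = 0.10           # assumed chance the migration goes badly without help
loss_if_botched = 500_000  # assumed gross profit at risk from lost rankings

expected_loss = p_botched * loss_if_botched
print(f"Expected loss avoided: ${expected_loss:,.0f} vs. premium: ${premium:,}")
# Expected loss avoided: $50,000 vs. premium: $25,000, so the premium is worth paying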

That’s insurance, and that’s value for your client. So that’s about it. To recap, think about the work that you do and how to translate it into the business challenges that these folks are grappling with. They might just pop into your QBR or your monthly meeting. They might not understand Penguin and BERT and ChatGPT or any of that stuff, but they do understand what’s going to make me more money (gross profit), what’s going to save me money, what’s going to help me avoid some nasty costs I would really rather not pay, and what’s going to insure me.

How are you going to watch my back and prevent some larger loss later? So basically back to where I started, how do you translate your SEO expertise into the language of your client executives? Thank you very much.

Learn more about creating value for clients here

Video transcription by Speechpad.com

source https://moz.com/blog/explain-seo-value-whiteboard-friday


B2B SEO in 2023: What’s New and How to Adapt Your Strategy for Success

In the fast-paced digital landscape of 2023, having a strong online presence is crucial for B2B companies to drive traffic, generate leads, and stay competitive. SEO is pivotal in achieving these goals. This blog post (and its accompanying comprehensive guide) aims to provide B2B marketers, SEO specialists, and business owners with the knowledge and tools necessary to create a successful B2B SEO strategy in 2023. From understanding the latest trends and challenges to implementing effective keyword research, on-page optimization, backlink building, result analysis, and staying up-to-date with SEO trends, let’s discuss what actually “moves the needle” in B2B SEO.

Understand the B2B SEO landscape in 2023

The SEO landscape is constantly evolving, driven by updates to search engine algorithms, changes in user behavior, and the increasing influence of voice search and AI. To create an effective B2B SEO strategy, staying informed about the latest trends is essential. Some key trends in 2023 include:

Mobile-first indexing

With the majority of internet users accessing websites through mobile devices, search engines like Google prioritize mobile-friendly websites in their rankings. Mobile-first indexing rolled out years ago, and it applies across all industries. The B2B industry usually has a slightly larger share of its audience viewing content and websites on desktop (since the target audience is usually at work when researching companies or vendors). However, many still check their email, conduct research, and view websites on their phones and tablets just as often.

Voice search optimization

As voice search is still widely used with smart devices and now some vehicles (such as Toyota’s new operating system for their lineup, which allows drivers and passengers to look up questions, businesses, and other information from their vehicle’s infotainment system), B2B companies need to optimize their content for voice queries. This involves incorporating natural language, long-tail keywords, and structured data markup to increase visibility in voice search results.
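
For example, one common form of structured data markup is JSON-LD. Below is a minimal, hypothetical sketch (the question and answer text are invented) that builds an FAQPage snippet you could embed in a page inside a script tag of type application/ld+json:

import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is an integration platform as a service (iPaaS)?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "An iPaaS connects applications and data sources in the cloud.",
        },
    }],
}
print(json.dumps(faq, indent=2))  # paste the output into the page's HTML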

AI in search and marketing

ChatGPT has blossomed in popularity over the last year, setting a record for the fastest-growing user base as of February 2023, according to Reuters. It now has over 1.16 billion users, according to DemandSage. OpenAI, the owner of ChatGPT, is said to be rolling out a business/enterprise tier for organizations that want to make ChatGPT’s offerings available to employees via an encrypted platform (so they can share proprietary information that remains secure), and Microsoft plans to use its technology to let enterprise organizations “create their own” ChatGPT so information stays secure.

Additionally, Google announced at Google I/O in May 2023 that it plans to add more AI experiences to users’ search journeys on Google. This is likely the biggest development in search engine results page (SERP) changes we’ve seen in a while.

User experience and core web vitals

Search engines increasingly focus on user experience metrics, such as page load speed, mobile responsiveness, and interactivity. Optimizing these factors improves both search rankings and user satisfaction. In 2023 and beyond, a user is much more likely to exit out of a slow page load experience within seconds, figuring they will just find the information they need elsewhere.

Continuous Google algorithm updates

Luckily for those in the SEO industry, Google has started announcing some of their bigger algorithm changes and updates, including when they are going to be taking place. To stay updated with Google changes, be sure to bookmark our Google Algorithm Update History page.

SEO, no matter the industry, is always evolving, so it’s important to regularly read SEO publications (like the Moz Blog), learn from subject matter experts in the space, and stay on top of updates so your strategy can pivot accordingly.

Conduct keyword research

Keyword research forms the foundation of a successful B2B SEO strategy. It involves identifying the keywords and phrases potential customers use to find products or services in your industry. To conduct effective B2B keyword research in 2023, consider the following steps:

Understand your target audience

Develop buyer personas and identify their pain points, needs, and search intent. This insight helps you choose keywords that align with your audience’s interests. It’s important to watch out for the “curse of knowledge” and not assume your audience has the same level of knowledge about your product that you do. Just because you know how your products work (or that they even exist) doesn’t mean that your audience does. This is a unique opportunity for SEOs to identify the operating knowledge of their target audience so they can produce content that best answers their search queries.

Utilize keyword research tools

Tools like Moz Keyword Explorer provide valuable data on search volume, keyword difficulty, and related keywords. Leverage these tools to identify high-potential keywords. It’s also important to look at your own data in Google Search Console or Google Analytics 4 (GA4). Today’s keyword research tools are becoming more and more accurate relative to actual search engine data, and these are all invaluable tools for SEO and keyword-related research.

Focus on long-tail keywords

Long-tail keywords are longer and more specific search queries that tend to have lower competition. Targeting these keywords can help you reach niche audiences and generate high-quality leads. Most B2B product offerings serve a niche purpose, so try to go after keywords that explain the problem or solution of your product or service instead of its name.

For instance, if your company was an “iPaaS” (integration platform as a service), going after keywords around integration, data architecture, and application integration would likely get more traction than repeatedly building content around the term “iPaaS”.

In order to complete effective keyword research, you have to know where to start. Better target audience identification, high-quality tools, and a focus on keywords that users are actually searching for (which are usually problem- or solution-oriented) can help B2B SEOs get the right phrases they need to bring in more users and potential leads.

Optimize on-page content

On-page optimization involves making your website and its pages search engine-friendly. Here are some best practices to optimize your on-page content:

Meta title tags

Craft compelling, concise, and keyword-rich title tags that briefly describe your page’s content and entice users to click, all within 70 characters. The advice on whether or not to include your business name in a meta title tag still isn’t concrete, but if you have the character space, include it at the end after a pipe: |.

Meta descriptions

It’s best practice to write compelling meta descriptions, because they not only tell the reader what your content is about, search engines also pull them into the search snippet on a SERP. Google is known to frequently rewrite meta descriptions, but it’s still worthwhile to spend about 180 characters describing the page so search engines, and search engine users, have a good idea of what it’s about.
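
As a rough sketch of those character guidelines (70 for titles and about 180 for descriptions, both conventions rather than hard limits), a simple length check might look like this; the example title and description are hypothetical:

def check_meta(title: str, description: str) -> list[str]:
    """Flag meta elements that exceed the rough length guidelines."""
    warnings = []
    if len(title) > 70:
        warnings.append(f"Title is {len(title)} chars; aim for 70 or fewer.")
    if len(description) > 180:
        warnings.append(f"Description is {len(description)} chars; aim for about 180.")
    return warnings

# Hypothetical example; an empty list means both lengths pass
print(check_meta("B2B SEO in 2023: How to Adapt | Moz", "A guide to B2B SEO in 2023."))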

Header tags

Use header tags (H1, H2, H3, etc.) to structure your content logically and improve readability. Include relevant keywords in your headers to signal the topic of each section. This can also serve as a table of contents if your blog article formatting allows it, improving readability for longer pieces of content (usually over 2000 words). Header tags also get pulled into the SERP and can be used in SERP features such as the ‘People Also Ask’ feature, if they are used in a question-answer format.

Image optimization

Optimize images by compressing their file sizes (for a better page load experience), using descriptive file names, and adding alt text that includes relevant keywords. This helps search engines understand and index your visual content. It also helps make images more accessible to users with visual impairments.
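
As a small illustration (not from the article), compressing an image and pairing it with a descriptive file name and alt text might look like this using the Pillow library; the file names and alt text are hypothetical:

from PIL import Image

# Re-encode a hypothetical diagram as an optimized JPEG for faster page loads
img = Image.open("b2b-keyword-research-workflow.png").convert("RGB")
img.save("b2b-keyword-research-workflow.jpg", "JPEG", quality=80, optimize=True)

# Descriptive alt text with a relevant keyword, for the page's img tag
alt_text = "Flowchart of a B2B keyword research workflow"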

Site architecture

Good site architecture is essential for SEO success because it helps search engines and users find your website pages more easily. In doing so, effective site architecture improves user experience, facilitates efficient crawling and indexing by search engines, distributes page authority effectively, and contributes to website speed and performance.

Meta titles and descriptions, headers, and site architecture may seem like SEO 101, but they are still valuable cornerstones of properly optimized content that gets indexed faster by search engines and earns a longer time on-site from users. Google has preached time and time again about doing what’s best for users, and making sure content is fast, findable, and easy to read checks all of those boxes.

Build quality backlinks

Backlinks remain a critical factor in B2B SEO, as they signal the credibility and authority of your website. However, it is essential to focus on quality rather than quantity. Consider the following strategies for building quality backlinks:

Create link-worthy content

Produce high-quality, informative content that provides value to your target audience. This increases the likelihood of other websites linking to your content as a valuable resource. Consider running your own research studies for new industry data that others will want to share, or create infographics, white papers, and other guides.

Split content into separate areas (when it makes sense)

This strategy won’t work for everyone, but if you are at a large organization, it might make sense from a site architecture standpoint to separate different types of content.

For example, Moz has the SEO Learning Center and Blog, and the strategy (and the types of content we produce for each) varies. Many large corporations also have a press mentions section, as well as a media/PR blog, where they release company announcements or press releases.

This helps news outlets and other organizations parse and subscribe to whatever type of content section they’d like. You can see Moz’s “News & Press” page for an example of this type of content area.

When it’s easier for news outlets and others to find your company announcements, they are much more likely to link to them, and to do so quickly. It’s all about getting users the information they need fast.

Partnerships

If your executive leadership team agrees to it, working with other organizations that cater to your same target audience but aren’t competitors can be a great way to get more exposure (and traffic) to your brand. Partnerships can entail sending a dedicated email about the other brand to your email list (and they do the same), or collaborating on a promotion through other marketing channels (such as blog posts, white papers, or videos) to get more leads and engagement.

Many organizations still buy backlinks, but in my experience, this is a risky and low ROI strategy. Companies that offer this can’t promise backlinks from high-quality places, and the ones that do may be using nefarious tactics (such as not fully disclosing links in the content they are sharing with the other website to get a link). It’s usually best to think of link building as an inbound strategy, rather than outbound.

Partnerships can be fruitful, but it takes a lot of planning to make them reputable and to pay off for both sides of the deal.

The end game: Optimization to drive results

From on-page optimization to working on your backlink strategy, SEO is truly a sum of its parts: it’s only as good as each component. To see where you’re making the most headway, all of the above efforts need to be tracked properly with accurate revenue attribution so you can see where SEO is moving the needle for your B2B organization. To learn more about measuring and analyzing results, visit the measuring success chapter in Moz’s ‘Beginner’s Guide to SEO’ and learn more about measuring organic search traffic quality from Adriana Stein.

Once you have a good understanding of where SEO is making the most impact, you can choose what to prioritize in upcoming quarters and long-term future planning. This can help your B2B SEO efforts compound over time, as most parts of SEO utilize one another to work more effectively. For example, a better site architecture and experience will likely lead to more users linking to your content. Make sure you have a well-rounded program to ensure better results over time.

source https://moz.com/blog/adapt-b2b-seo-strategy-for-success


The Moz Links API: Touch Every Endpoint in Python

The purpose of this Jupyter Notebook is to introduce the Moz Links API using Python. This should work on any notebook hosting environment, such as Google Colab.

If you’re looking at this on GitHub, the code snippets can be copy/pasted into your own notebook environment. By the time you’ve run this script to the bottom, you will have used every Moz Links API endpoint, and you can pick the parts you want for your own project. The official documentation can be found here.

Confused? Be sure to check out my intro to the Moz Links API.

Do global imports

The import statements at the top of a Python program are used to load external resources that are not loaded by default in the Python interpreter. These resources may include libraries or modules that provide additional functionality to the program.

Import statements are usually placed at the top of a program, before any other code is executed. This allows the program to load any necessary resources before they are needed in the program.

Once the resources have been loaded using import statements, they can be used anywhere in the program, not just in the cell where the import statement was written. This allows the program to access the functionality provided by the imported resources throughout its execution.

The libraries used here that are not part of the standard Python library are requests and sqlitedict. You can install them with pip install requests and pip install sqlitedict in your terminal or a Jupyter cell. If you’re using Anaconda, requests is pre-installed. The headlines import is a small local helper module providing the h1(), h2(), and h4() functions used later in this notebook to print formatted headings.

import json
import requests
from headlines import *
from pprint import pprint
from sqlitedict import SqliteDict as sqldict

Load login values from external file

The code below reads a file named “linksapi.txt” from the “assets” directory, which contains the login credentials, including the access ID and secret key needed to access the Moz API. These credentials are extracted from the file and assigned to two variables named ACCESSID and SECRETKEY. The with statement is used to ensure that the file is properly closed after it’s been read. Create a file whose contents look like this with your credentials manually retrieved from moz.com:

ACCESSID: mozscape-1234567890
SECRETKEY: 1234567890abcdef1234567890abcdef

Once the credentials are extracted from the file, they are stored in a tuple named AUTH_TUPLE. This tuple is passed as the auth argument in requests calls to authenticate and authorize access to the data.

The purpose of this approach is to avoid hard-coding sensitive login credentials directly in the program, which could pose a security risk if the code was shared or published publicly. Instead, the credentials are kept in a separate file that is not included in the repository, and can be easily created and updated as needed. This way, the code can be shared without exposing the credentials to the public.

with open("../assets/linksapi.txt") as fh:
    ACCESSID, SECRETKEY = [x.strip().split(" ")[1] for x in fh.readlines()]

AUTH_TUPLE = (ACCESSID, SECRETKEY)  # Don't show contents

Configure variables

In this code, there are several configuration variables that are used to set up the API call to the Moz Links API.

The first variable, COMMON_ENDPOINT, is a constant that stores the base URL for the Moz Links API. The second variable, sub_endpoint, is a string naming the endpoint subpath for the anchor text data, and the third, endpoint, appends it to COMMON_ENDPOINT to form the complete API endpoint URL.

The fourth variable, data_dict, is a dictionary that contains the parameters for the API request. In this case, data_dict specifies the target URL for which we want to retrieve anchor text data, the scope of the data (in this case, page-level), and a limit of 1 result.

Finally, the json_string variable is created by converting the data_dict dictionary into a JSON-formatted string using the json.dumps() function. This string will be used as the request body when making the API call.

These variables are used to configure and parameterize the API request, and can be modified to perform any data_dict request against any Moz Links API sub_endpoint.

COMMON_ENDPOINT = "https://lsapi.seomoz.com/v2/"
sub_endpoint = "anchor_text"
endpoint = COMMON_ENDPOINT + sub_endpoint
data_dict = {"target": "moz.com/blog", "scope": "page", "limit": 1}
json_string = json.dumps(data_dict)

Actually hit the API (ensure success)

In JupyterLab, the last line of a code cell is automatically displayed in the output area without requiring an explicit print() statement. The code below uses the requests module to send a POST request to the endpoint URL, with the JSON string json_string as the request body. The authentication details are passed using the AUTH_TUPLE variable.

After sending the request, the response object named response holds the result, including the HTTP status code, such as 200 for success or 404 for not found, along with the response headers.

Finally, the .json() method is called on response to parse the response body as JSON and return it as a Python dictionary. That dictionary can be assigned to a variable, used for further processing, or, as here, pretty-printed with pprint().

response = requests.post(endpoint, data=json_string, auth=AUTH_TUPLE)
pprint(response.json())

Outputs:

{'next_token': 'JYkQVg4s9ak8iRBWDiz1qTyguYswnj035nqjRF0IbW96IGJsb2e58hGzcmSomw==',
 'results': [{'anchor_text': 'moz',
              'external_pages': 7183,
              'external_root_domains': 2038}]}

List Sub-endpoints

This code defines a list of different sub-endpoints that can be appended to a common URL prefix to make different API endpoints. An API endpoint is a URL where an API can be accessed by clients. It is a point of entry to the application that acts as a gatekeeper between the client and the server. Each endpoint is identified by a unique URL, which can be used to interact with the API.

In this code, the list of sub-endpoints is defined in the sub_endpoints variable, and each endpoint is represented as a string. The for loop iterates over the list and prints a 1-based index number and the name of each sub-endpoint using the print function. The enumerate function is used to generate a sequence of (index, value) pairs from the list.

This code is useful for exploring the available endpoints for a particular API and for selecting the endpoint that corresponds to the desired functionality. By changing the sub-endpoint in the URL, clients can access different resources or perform different operations on the server.

sub_endpoints = [
    "anchor_text",
    "final_redirect",
    "global_top_pages",
    "global_top_root_domains",
    "index_metadata",
    "link_intersect",
    "link_status",
    "linking_root_domains",
    "links",
    "top_pages",
    "url_metrics",
    "usage_data",
]
for i, sub_endpoint in enumerate(sub_endpoints):
    print(i + 1, sub_endpoint)

Outputs:

1 anchor_text
2 final_redirect
3 global_top_pages
4 global_top_root_domains
5 index_metadata
6 link_intersect
7 link_status
8 linking_root_domains
9 links
10 top_pages
11 url_metrics
12 usage_data

Human-friendly labels

This code defines two lists: names and descriptions. The names list contains human-friendly labels for the set of sub-endpoints, while the descriptions list provides a brief description of each endpoint. The two lists are kept in the same order as the sub_endpoints list defined earlier in the code.

By keeping the three lists in the same order, they can be “zipped” together into a single list of tuples using the zip function. This produces a new list where each tuple contains the name, endpoint, and description for a particular API endpoint. This makes it easy to display a user-friendly summary of each API endpoint with its name and description.

The zip function combines the elements of the three lists element-wise, creating a tuple of the first elements from each list, then a tuple of the second elements, and so on. The resulting list of tuples can be iterated over, and each tuple unpacked to access the individual name, endpoint, and description elements for each API endpoint.

names = [
    "Anchor Text",
    "Final Redirect",
    "Global Top Pages",
    "Global Top Root Domains",
    "Index Metadata",
    "Link Intersect",
    "Link Status",
    "Linking Root Domains",
    "Links",
    "Top Pages",
    "URL Metrics",
    "Usage Data",
]

descriptions = [
    "Use this endpoint to get data about anchor text used by followed external links to a target. Results are ordered by external_root_domains descending.",
    "Use this endpoint to get data about anchor text used by followed external links to a target. Results are ordered by external_root_domains descending.",
    "This endpoint returns the top 500 pages in the entire index with the highest Page Authority values, sorted by Page Authority. (Visit the Top 500 Sites list to explore the top root domains on the web, sorted by Domain Authority.)",
    "This endpoint returns the top 500 pages in the entire index with the highest Page Authority values, sorted by Page Authority. (Visit the Top 500 Sites list to explore the top root domains on the web, sorted by Domain Authority.)",
    "This endpoint returns the top 500 pages in the entire index with the highest Page Authority values, sorted by Page Authority. (Visit the Top 500 Sites list to explore the top root domains on the web, sorted by Domain Authority.)",
    "Use this endpoint to get sources that link to at least one of a list of positive targets and don't link to any of a list of negative targets.",
    "Use this endpoint to get information about links from many sources to a single target.",
    "Use this endpoint to get linking root domains to a target.",
    "Use this endpoint to get links to a target.",
    "This endpoint returns top pages on a target domain.",
    "Use this endpoint to get metrics about one or more urls.",
    "This endpoint Returns the number of rows consumed so far in the current billing period. The count returned might not reflect rows consumed in the last hour. The count returned reflects rows consumed by requests to both the v1 (Moz Links API) and v2 Links APIs.",
]

# Simple zipping example
list(zip(names, sub_endpoints, descriptions))

Outputs:

[('Anchor Text',
  'anchor_text',
  'Use this endpoint to get data about anchor text used by followed external links to a target. Results are ordered by external_root_domains descending.'),
 ('Final Redirect',
  'final_redirect',
  'Use this endpoint to get data about anchor text used by followed external links to a target. Results are ordered by external_root_domains descending.'),
 ('Global Top Pages',
  'global_top_pages',
  'This endpoint returns the top 500 pages in the entire index with the highest Page Authority values, sorted by Page Authority. (Visit the Top 500 Sites list to explore the top root domains on the web, sorted by Domain Authority.)'),
 ('Global Top Root Domains',
  'global_top_root_domains',
  'This endpoint returns the top 500 pages in the entire index with the highest Page Authority values, sorted by Page Authority. (Visit the Top 500 Sites list to explore the top root domains on the web, sorted by Domain Authority.)'),
 ('Index Metadata',
  'index_metadata',
  'This endpoint returns the top 500 pages in the entire index with the highest Page Authority values, sorted by Page Authority. (Visit the Top 500 Sites list to explore the top root domains on the web, sorted by Domain Authority.)'),
 ('Link Intersect',
  'link_intersect',
  "Use this endpoint to get sources that link to at least one of a list of positive targets and don't link to any of a list of negative targets."),
 ('Link Status',
  'link_status',
  'Use this endpoint to get information about links from many sources to a single target.'),
 ('Linking Root Domains',
  'linking_root_domains',
  'Use this endpoint to get linking root domains to a target.'),
 ('Links', 'links', 'Use this endpoint to get links to a target.'),
 ('Top Pages',
  'top_pages',
  'This endpoint returns top pages on a target domain.'),
 ('URL Metrics',
  'url_metrics',
  'Use this endpoint to get metrics about one or more urls.'),
 ('Usage Data',
  'usage_data',
  'This endpoint Returns the number of rows consumed so far in the current billing period. The count returned might not reflect rows consumed in the last hour. The count returned reflects rows consumed by requests to both the v1 (Moz Links API) and v2 Links APIs.')]

Show an example request for each endpoint

This is a set of example API requests in Python dict format, where each dictionary represents a request body for a specific endpoint. Don’t hurt your brain too much trying to read it. Just know that I lifted each example from the original Moz documentation and listed them all here in order as nested Python dicts.

You could call the format a dict of dicts, where each sub-dictionary corresponds to a specific endpoint, keyed in the same order as the sub_endpoints, names, and descriptions lists for easy combining. The output of running the cell below does that list-combining to document every sub_endpoint.

dict_of_dicts = {
    "anchor_text": {"target": "moz.com/blog", "scope": "page", "limit": 5},
    "links": {
        "target": "moz.com/blog",
        "target_scope": "page",
        "filter": "external+nofollow",
        "limit": 1,
    },
    "final_redirect": {"page": "seomoz.org/blog"},
    "global_top_pages": {"limit": 5},
    "global_top_root_domains": {"limit": 5},
    "index_metadata": {},
    "link_intersect": {
        "positive_targets": [
            {"target": "latimes.com", "scope": "root_domain"},
            {"target": "blog.nytimes.com", "scope": "subdomain"},
        ],
        "negative_targets": [{"target": "moz.com", "scope": "root_domain"}],
        "source_scope": "page",
        "sort": "source_domain_authority",
        "limit": 1,
    },
    "link_status": {
        "target": "moz.com/blog",
        "sources": ["twitter.com", "linkedin.com"],
        "source_scope": "root_domain",
        "target_scope": "page",
    },
    "linking_root_domains": {
        "target": "moz.com/blog",
        "target_scope": "page",
        "filter": "external",
        "sort": "source_domain_authority",
        "limit": 5,
    },
    "top_pages": {"target": "moz.com", "scope": "root_domain", "limit": 5},
    "url_metrics": {"targets": ["moz.com", "nytimes.com"]},
    "usage_data": {},
}

for i, sub_endpoint in enumerate(sub_endpoints):
    h1(f"{i + 1}. {names[i]} ({sub_endpoint})")
    print(descriptions[i])
    h4("Example request:")
    pprint(dict_of_dicts[sub_endpoint])
    print()

Outputs:

# 2. Final Redirect (final_redirect)

Use this endpoint to get data about anchor text used by followed external links to a target. Results are ordered by external_root_domains descending.
Example request:

{'page': 'seomoz.org/blog'}

[...]

Write a function that hits the API

If we’re going to hit an API over and over in mostly the same way, we want to spare ourselves re-typing everything all the time. That’s why we define functions. That’s the def in the below cell. Once that cell is run, the moz() function can be used anywhere in this Notebook. You need only feed it the sub_endpoint you want to use and a Python dict of your request. It will return the API’s response.

def moz(sub_endpoint, data_dict):
    """Hits Moz Links API with specified endpoint and request and returns results."""
    json_string = json.dumps(data_dict)
    endpoint = COMMON_ENDPOINT + sub_endpoint
    # Below, data is a string (flattened JSON) but auth is a 2-position tuple.
    response = requests.post(endpoint, data=json_string, auth=AUTH_TUPLE)
    return response

This does not output anything to the screen. It just defines the function.
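
For instance, a quick call reusing the earlier anchor text request might look like this (a usage sketch built from the document's own request dict, not new functionality):

# Same request as before, now routed through the helper function
response = moz("anchor_text", {"target": "moz.com/blog", "scope": "page", "limit": 1})
pprint(response.json())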

Conditionally hit the API

The code below uses a Python package called sqlitedict, which provides a persistent dictionary-like object that can be stored on disk using the SQLite database engine. The with statement in the code sets up a context manager for the SqliteDict object, which automatically handles opening and closing the database connection. The database file is stored at ../dbs/linksapi.db.

The code iterates through each sub-endpoint in the sub_endpoints list, and checks if that data has already been retrieved. If it hasn’t, the API is called using the moz() function and the result is saved in the SqliteDict. The db.commit() statement ensures that any changes made to the dictionary during the iteration are saved to the database.

The SqliteDict serves as a local cache to prevent the API from being hit every time the code block is run if the data has already been collected. By using this cache, the code reduces the number of API requests required, which is useful when working with APIs that have quota limits. Congratulations, you’re using a database!

with sqldict("../dbs/linksapi.db") as db:
    for sub_endpoint in sub_endpoints:
        if sub_endpoint not in db:
            print(sub_endpoint)
            result = moz(sub_endpoint, dict_of_dicts[sub_endpoint])
            db[sub_endpoint] = result
            db.commit()
            print("API hit and response saved!")
            print()
h2("Done")

On the first run, this prints the name of each endpoint as it is hit and saved; on later runs, the loop produces no output because the results are already cached. Either way, the responses of the API calls end up in a local database.

Show the locally-stored API responses

This code uses the sqldict context manager to open the SQLite database containing the previously retrieved API data. It then iterates over the keys in the database, which correspond to the endpoints that were previously retrieved.

For each key, the code prints the endpoint name, description, and the data retrieved from the API. The pprint function is used to print the JSON data in a more human-readable format, with indentation and line breaks that make it easier to read.

with sqldict("../dbs/linksapi.db") as db:
    for i, key in enumerate(db):
        h1(f"{i + 1}. {names[i]} ({key})")
        print(descriptions[i])
        print()
        pprint(db[key].json())
        print()

Outputs:

1. Anchor Text (anchor_text)
Use this endpoint to get data about anchor text used by followed external links to a target. Results are ordered by external_root_domains descending.

{'next_token': 'KIkQVg4s9ak8iRBWDiz1qTyguYswnj035n7bYI0Lc2VvbW96IGJsb2dKBcCodcl47Q==',
 'results': [{'anchor_text': 'moz',
              'external_pages': 7162,
              'external_root_domains': 2026},
             {'anchor_text': 'moz blog',
              'external_pages': 15525,
              'external_root_domains': 1364},
             {'anchor_text': 'the moz blog',
              'external_pages': 7879,
              'external_root_domains': 728},
             {'anchor_text': 'seomoz',
              'external_pages': 17741,
              'external_root_domains': 654},
             {'anchor_text': 'https://moz.com/blog',
              'external_pages': 978,
              'external_root_domains': 491}]}

2. Final Redirect (final_redirect)
Use this endpoint to get data about anchor text used by followed external links to a target. Results are ordered by external_root_domains descending.

{'page': 'moz.com/blog'}

3. Global Top Pages (global_top_pages)
This endpoint returns the top 500 pages in the entire index with the highest Page Authority values, sorted by Page Authority. (Visit the Top 500 Sites list to explore the top root domains on the web, sorted by Domain Authority.)

{'next_token': 'BcLbRwBmrXHK',
 'results': [{'deleted_pages_to_page': 11932076,
              'deleted_pages_to_root_domain': 23942663640,
              'deleted_pages_to_subdomain': 21555752652,
              'deleted_root_domains_to_page': 64700,
              'deleted_root_domains_to_root_domain': 3688228,
              'deleted_root_domains_to_subdomain': 3516235,
              'domain_authority': 96,
              'external_indirect_pages_to_root_domain': 5042652519,
              'external_nofollow_pages_to_page': 31163,
              'external_nofollow_pages_to_root_domain': 12375460748,
              'external_nofollow_pages_to_subdomain': 11393036086,
              'external_pages_to_page': 118102549,
              'external_pages_to_root_domain': 91362310623,
              'external_pages_to_subdomain': 83283626903,
              'external_redirect_pages_to_page': 0,
              'external_redirect_pages_to_root_domain': 445730476,
              'external_redirect_pages_to_subdomain': 432323198,
              'http_code': 5,
              'indirect_root_domains_to_page': 0,
              'indirect_root_domains_to_root_domain': 701121,
              'last_crawled': '2023-01-15',
              'link_propensity': 1.76710455e-05,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 2,
              'nofollow_pages_to_page': 31163,
              'nofollow_pages_to_root_domain': 12375623717,
              'nofollow_pages_to_subdomain': 11393036179,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 0,
              'nofollow_root_domains_to_page': 980,
              'nofollow_root_domains_to_root_domain': 3696150,
              'nofollow_root_domains_to_subdomain': 3622349,
              'page': 'www.facebook.com/Plesk',
              'page_authority': 100,
              'pages_crawled_from_root_domain': 1810872,
              'pages_from_page': 0,
              'pages_from_root_domain': 5289,
              'pages_to_page': 118102549,
              'pages_to_root_domain': 91368257043,
              'pages_to_subdomain': 83288001442,
              'redirect_pages_to_page': 0,
              'redirect_pages_to_root_domain': 447189164,
              'redirect_pages_to_subdomain': 433411292,
              'root_domain': 'facebook.com',
              'root_domains_from_page': 0,
              'root_domains_from_root_domain': 32,
              'root_domains_to_page': 491956,
              'root_domains_to_root_domain': 59416650,
              'root_domains_to_subdomain': 50993087,
              'spam_score': 1,
              'subdomain': 'www.facebook.com',
              'title': ''},
             {'deleted_pages_to_page': 5828966,
              'deleted_pages_to_root_domain': 79909678,
              'deleted_pages_to_subdomain': 79909678,
              'deleted_root_domains_to_page': 16552,
              'deleted_root_domains_to_root_domain': 98416,
              'deleted_root_domains_to_subdomain': 98416,
              'domain_authority': 94,
              'external_indirect_pages_to_root_domain': 1177381629,
              'external_nofollow_pages_to_page': 453328699,
              'external_nofollow_pages_to_root_domain': 1643990147,
              'external_nofollow_pages_to_subdomain': 1643990147,
              'external_pages_to_page': 456279611,
              'external_pages_to_root_domain': 2808523112,
              'external_pages_to_subdomain': 2808523112,
              'external_redirect_pages_to_page': 125,
              'external_redirect_pages_to_root_domain': 24941546,
              'external_redirect_pages_to_subdomain': 24941546,
              'http_code': 3,
              'indirect_root_domains_to_page': 723,
              'indirect_root_domains_to_root_domain': 252606,
              'last_crawled': '2023-01-14',
              'link_propensity': 0.118001014,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 121166,
              'nofollow_pages_to_page': 453328699,
              'nofollow_pages_to_root_domain': 1644293277,
              'nofollow_pages_to_subdomain': 1644293277,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 67627,
              'nofollow_root_domains_to_page': 9800973,
              'nofollow_root_domains_to_root_domain': 4959747,
              'nofollow_root_domains_to_subdomain': 4959747,
              'page': 'wordpress.com/?ref=footer_blog',
              'page_authority': 100,
              'pages_crawled_from_root_domain': 1731019,
              'pages_from_page': 0,
              'pages_from_root_domain': 1080338,
              'pages_to_page': 456293004,
              'pages_to_root_domain': 2817137385,
              'pages_to_subdomain': 2817137385,
              'redirect_pages_to_page': 125,
              'redirect_pages_to_root_domain': 25449067,
              'redirect_pages_to_subdomain': 25449067,
              'root_domain': 'wordpress.com',
              'root_domains_from_page': 0,
              'root_domains_from_root_domain': 204262,
              'root_domains_to_page': 9878742,
              'root_domains_to_root_domain': 12653294,
              'root_domains_to_subdomain': 12653294,
              'spam_score': 1,
              'subdomain': 'wordpress.com',
              'title': ''},
             {'deleted_pages_to_page': 3904778,
              'deleted_pages_to_root_domain': 23942663640,
              'deleted_pages_to_subdomain': 21555752652,
              'deleted_root_domains_to_page': 11671,
              'deleted_root_domains_to_root_domain': 3688228,
              'deleted_root_domains_to_subdomain': 3516235,
              'domain_authority': 96,
              'external_indirect_pages_to_root_domain': 5042652519,
              'external_nofollow_pages_to_page': 4449343,
              'external_nofollow_pages_to_root_domain': 12375460748,
              'external_nofollow_pages_to_subdomain': 11393036086,
              'external_pages_to_page': 59602588,
              'external_pages_to_root_domain': 91362310623,
              'external_pages_to_subdomain': 83283626903,
              'external_redirect_pages_to_page': 12625,
              'external_redirect_pages_to_root_domain': 445730476,
              'external_redirect_pages_to_subdomain': 432323198,
              'http_code': 5,
              'indirect_root_domains_to_page': 1632,
              'indirect_root_domains_to_root_domain': 701121,
              'last_crawled': '2023-01-16',
              'link_propensity': 1.76710455e-05,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 2,
              'nofollow_pages_to_page': 4449343,
              'nofollow_pages_to_root_domain': 12375623717,
              'nofollow_pages_to_subdomain': 11393036179,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 0,
              'nofollow_root_domains_to_page': 28624,
              'nofollow_root_domains_to_root_domain': 3696150,
              'nofollow_root_domains_to_subdomain': 3622349,
              'page': 'www.facebook.com/home.php',
              'page_authority': 100,
              'pages_crawled_from_root_domain': 1810872,
              'pages_from_page': 0,
              'pages_from_root_domain': 5289,
              'pages_to_page': 59602589,
              'pages_to_root_domain': 91368257043,
              'pages_to_subdomain': 83288001442,
              'redirect_pages_to_page': 12626,
              'redirect_pages_to_root_domain': 447189164,
              'redirect_pages_to_subdomain': 433411292,
              'root_domain': 'facebook.com',
              'root_domains_from_page': 0,
              'root_domains_from_root_domain': 32,
              'root_domains_to_page': 239697,
              'root_domains_to_root_domain': 59416650,
              'root_domains_to_subdomain': 50993087,
              'spam_score': 1,
              'subdomain': 'www.facebook.com',
              'title': ''},
             {'deleted_pages_to_page': 3440567,
              'deleted_pages_to_root_domain': 3440700,
              'deleted_pages_to_subdomain': 3440700,
              'deleted_root_domains_to_page': 60839,
              'deleted_root_domains_to_root_domain': 60840,
              'deleted_root_domains_to_subdomain': 60840,
              'domain_authority': 1,
              'external_indirect_pages_to_root_domain': 7,
              'external_nofollow_pages_to_page': 288,
              'external_nofollow_pages_to_root_domain': 1499,
              'external_nofollow_pages_to_subdomain': 1499,
              'external_pages_to_page': 140954613,
              'external_pages_to_root_domain': 140959216,
              'external_pages_to_subdomain': 140959213,
              'external_redirect_pages_to_page': 70,
              'external_redirect_pages_to_root_domain': 70,
              'external_redirect_pages_to_subdomain': 70,
              'http_code': 200,
              'indirect_root_domains_to_page': 0,
              'indirect_root_domains_to_root_domain': 0,
              'last_crawled': '2018-02-05',
              'link_propensity': 0.3998428881,
              'nofollow_pages_from_page': 12,
              'nofollow_pages_from_root_domain': 805,
              'nofollow_pages_to_page': 288,
              'nofollow_pages_to_root_domain': 10799,
              'nofollow_pages_to_subdomain': 10799,
              'nofollow_root_domains_from_page': 2,
              'nofollow_root_domains_from_root_domain': 7,
              'nofollow_root_domains_to_page': 30,
              'nofollow_root_domains_to_root_domain': 30,
              'nofollow_root_domains_to_subdomain': 30,
              'page': 'music.skyrock.com/',
              'page_authority': 100,
              'pages_crawled_from_root_domain': 2546,
              'pages_from_page': 61,
              'pages_from_root_domain': 3382,
              'pages_to_page': 140956009,
              'pages_to_root_domain': 141008586,
              'pages_to_subdomain': 141008583,
              'redirect_pages_to_page': 70,
              'redirect_pages_to_root_domain': 70,
              'redirect_pages_to_subdomain': 70,
              'root_domain': 'music.skyrock.com',
              'root_domains_from_page': 19,
              'root_domains_from_root_domain': 1018,
              'root_domains_to_page': 10609865,
              'root_domains_to_root_domain': 10609868,
              'root_domains_to_subdomain': 10609868,
              'spam_score': 9,
              'subdomain': 'music.skyrock.com',
              'title': 'Blog de Music - DES NEWS, DES CLIPS, DES INTERVIEWS - '
                       'Skyrock.com'},
             {'deleted_pages_to_page': 64159924,
              'deleted_pages_to_root_domain': 17641375891,
              'deleted_pages_to_subdomain': 336246205,
              'deleted_root_domains_to_page': 63574,
              'deleted_root_domains_to_root_domain': 1728606,
              'deleted_root_domains_to_subdomain': 234073,
              'domain_authority': 100,
              'external_indirect_pages_to_root_domain': 19281720347,
              'external_nofollow_pages_to_page': 34635431,
              'external_nofollow_pages_to_root_domain': 7885369442,
              'external_nofollow_pages_to_subdomain': 184067821,
              'external_pages_to_page': 285612569,
              'external_pages_to_root_domain': 55013651418,
              'external_pages_to_subdomain': 1492976347,
              'external_redirect_pages_to_page': 593282,
              'external_redirect_pages_to_root_domain': 250423075,
              'external_redirect_pages_to_subdomain': 5678006,
              'http_code': 302,
              'indirect_root_domains_to_page': 1072,
              'indirect_root_domains_to_root_domain': 231256,
              'last_crawled': '2023-04-01',
              'link_propensity': 0.006248265505,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 991472,
              'nofollow_pages_to_page': 34635436,
              'nofollow_pages_to_root_domain': 7948674425,
              'nofollow_pages_to_subdomain': 184068512,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 182393,
              'nofollow_root_domains_to_page': 126656,
              'nofollow_root_domains_to_root_domain': 2322389,
              'nofollow_root_domains_to_subdomain': 304381,
              'page': 'youtube.com/',
              'page_authority': 100,
              'pages_crawled_from_root_domain': 41258009,
              'pages_from_page': 0,
              'pages_from_root_domain': 11109186,
              'pages_to_page': 285612606,
              'pages_to_root_domain': 55255620288,
              'pages_to_subdomain': 1493073570,
              'redirect_pages_to_page': 593282,
              'redirect_pages_to_root_domain': 263224806,
              'redirect_pages_to_subdomain': 5678383,
              'root_domain': 'youtube.com',
              'root_domains_from_page': 0,
              'root_domains_from_root_domain': 257791,
              'root_domains_to_page': 598403,
              'root_domains_to_root_domain': 23134271,
              'root_domains_to_subdomain': 1927717,
              'spam_score': 4,
              'subdomain': 'youtube.com',
              'title': ''}]}

4. Global Top Root Domains (global_top_root_domains)
This endpoint returns the top 500 root domains in the entire index with the highest Domain Authority values, sorted by Domain Authority. (Visit the Top 500 Sites list to explore the top root domains on the web, sorted by Domain Authority.)

{'next_token': 'BcLbRwBmrXHK',
 'results': [{'domain_authority': 100,
              'link_propensity': 0.006248265505,
              'root_domain': 'youtube.com',
              'root_domains_to_root_domain': 23134271,
              'spam_score': 4,
              'to_target': {'deleted_pages': 0,
                            'nofollow_pages': 0,
                            'pages': 0,
                            'redirect_pages': 0}},
             {'domain_authority': 100,
              'link_propensity': 0.008422264829,
              'root_domain': 'www.google.com',
              'root_domains_to_root_domain': 14723695,
              'spam_score': 14,
              'to_target': {'deleted_pages': 0,
                            'nofollow_pages': 0,
                            'pages': 0,
                            'redirect_pages': 0}},
             {'domain_authority': 100,
              'link_propensity': 0.0001607139566,
              'root_domain': 'www.blogger.com',
              'root_domains_to_root_domain': 30580427,
              'spam_score': -1,
              'to_target': {'deleted_pages': 0,
                            'nofollow_pages': 0,
                            'pages': 0,
                            'redirect_pages': 0}},
             {'domain_authority': 99,
              'link_propensity': 0.04834850505,
              'root_domain': 'linkedin.com',
              'root_domains_to_root_domain': 12339087,
              'spam_score': 1,
              'to_target': {'deleted_pages': 0,
                            'nofollow_pages': 0,
                            'pages': 0,
                            'redirect_pages': 0}},
             {'domain_authority': 99,
              'link_propensity': 0.006264935713,
              'root_domain': 'microsoft.com',
              'root_domains_to_root_domain': 5344181,
              'spam_score': 11,
              'to_target': {'deleted_pages': 0,
                            'nofollow_pages': 0,
                            'pages': 0,
                            'redirect_pages': 0}}]}
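
For reference, a response like the one above can be pulled with just a few lines of Python. This is a minimal sketch rather than code from the original post: the credential placeholders and the "limit" field are assumptions to swap for your own values after checking the endpoint docs.

import requests

# Placeholder Moz API credentials: substitute your own token pair.
ACCESS_ID = "mozscape-xxxxxxxxxx"
SECRET_KEY = "your-secret-key"

# The v2 Links API authenticates with HTTP Basic auth and takes a JSON body.
response = requests.post(
    "https://lsapi.seomoz.com/v2/global_top_root_domains",
    auth=(ACCESS_ID, SECRET_KEY),
    json={"limit": 5},  # assumed parameter: how many of the top rows to return
)
response.raise_for_status()
for row in response.json()["results"]:
    print(row["root_domain"], row["domain_authority"])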

5. Index Metadata (index_metadata)
This endpoint returns metadata about the Moz index itself, such as the index ID and the dates on which Spam Score was updated.

{'index_id': 'NE+lX5bFh06baS9ojUwVbw==',
 'spam_score_update_days': ['2019-02-08',
                            '2020-03-28',
                            '2020-08-03',
                            '2020-11-13',
                            '2021-02-24',
                            '2021-05-19',
                            '2021-08-16',
                            '2021-11-02',
                            '2022-02-01',
                            '2022-05-10',
                            '2022-11-16']}

6. Link Intersect (link_intersect)
Use this endpoint to get sources that link to at least one of a list of positive targets and don't link to any of a list of negative targets.

{'next_token': 'AcmY2oCXQbbg',
 'results': [{'domain_authority': 100,
              'matching_target_indexes': [0],
              'page': 'www.google.com/amp/www.latimes.com/local/lanow/la-me-ln-aliso-viejo-shooting-20171012-story,amp.html',
              'spam_score': 14,
              'title': ''}]}
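
The request body for this endpoint has to spell out both lists of targets. Here is a hedged Python sketch; the "positive_targets" and "negative_targets" shapes are assumptions inferred from the description above, so verify the exact field names against the endpoint documentation.

import requests

payload = {
    # Sources must link to at least one of these targets...
    "positive_targets": [{"query": "latimes.com", "scope": "root_domain"}],
    # ...and to none of these.
    "negative_targets": [{"query": "moz.com", "scope": "root_domain"}],
}
response = requests.post(
    "https://lsapi.seomoz.com/v2/link_intersect",
    auth=("mozscape-xxxxxxxxxx", "your-secret-key"),  # placeholder credentials
    json=payload,
)
print(response.json()["results"])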

7. Link Status (link_status)
Use this endpoint to get information about links from many sources to a single target.

{'exists': [False, False]}
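
The two booleans in the sample line up with two sources submitted in the request. As a minimal sketch, assuming the payload takes one target plus a list of sources (field names worth double-checking in the docs):

import requests

response = requests.post(
    "https://lsapi.seomoz.com/v2/link_status",
    auth=("mozscape-xxxxxxxxxx", "your-secret-key"),  # placeholder credentials
    json={
        "target": "moz.com/blog",                   # the single target
        "sources": ["example.com", "example.org"],  # one boolean per source
        "source_scope": "root_domain",
        "target_scope": "page",
    },
)
print(response.json()["exists"])  # e.g. [False, False], in source order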

8. Linking Root Domains (linking_root_domains)
Use this endpoint to get linking root domains to a target.

{'next_token': 'IokQVg4s9ak8iRBWDiz1qTyguYswnj035qBkmE3DU+JTtwAVhsjH7R6XUA==',
 'results': [{'domain_authority': 99,
              'link_propensity': 0.006264935713,
              'root_domain': 'microsoft.com',
              'root_domains_to_root_domain': 5344181,
              'spam_score': 11,
              'to_target': {'deleted_pages': 0,
                            'nofollow_pages': 0,
                            'pages': 2,
                            'redirect_pages': 0}},
             {'domain_authority': 98,
              'link_propensity': 0.02977741137,
              'root_domain': 'wordpress.org',
              'root_domains_to_root_domain': 12250296,
              'spam_score': 2,
              'to_target': {'deleted_pages': 0,
                            'nofollow_pages': 2,
                            'pages': 2,
                            'redirect_pages': 0}},
             {'domain_authority': 96,
              'link_propensity': 0.09679271281,
              'root_domain': 'github.com',
              'root_domains_to_root_domain': 2948013,
              'spam_score': 2,
              'to_target': {'deleted_pages': 0,
                            'nofollow_pages': 12,
                            'pages': 12,
                            'redirect_pages': 0}},
             {'domain_authority': 96,
              'link_propensity': 0.004641198553,
              'root_domain': 'amazon.com',
              'root_domains_to_root_domain': 5023132,
              'spam_score': 28,
              'to_target': {'deleted_pages': 0,
                            'nofollow_pages': 0,
                            'pages': 2,
                            'redirect_pages': 0}},
             {'domain_authority': 95,
              'link_propensity': 0.005770479795,
              'root_domain': 'shopify.com',
              'root_domains_to_root_domain': 2948087,
              'spam_score': 1,
              'to_target': {'deleted_pages': 3,
                            'nofollow_pages': 0,
                            'pages': 0,
                            'redirect_pages': 0}}]}

9. Links (links)
Use this endpoint to get links to a target.

{'next_token': 'AVvpJ4gPPvOY',
 'results': [{'anchor_text': 'moz blog',
              'date_disappeared': '',
              'date_first_seen': '2020-06-29',
              'date_last_seen': '2023-01-14',
              'nofollow': True,
              'redirect': False,
              'rel_canonical': False,
              'source': {'deleted_pages_to_page': 570,
                         'deleted_pages_to_root_domain': 1251501128,
                         'deleted_pages_to_subdomain': 1182759912,
                         'deleted_root_domains_to_page': 34,
                         'deleted_root_domains_to_root_domain': 322790,
                         'deleted_root_domains_to_subdomain': 314554,
                         'domain_authority': 96,
                         'external_indirect_pages_to_root_domain': 863103308,
                         'external_nofollow_pages_to_page': 1407,
                         'external_nofollow_pages_to_root_domain': 667480081,
                         'external_nofollow_pages_to_subdomain': 650421076,
                         'external_pages_to_page': 3710,
                         'external_pages_to_root_domain': 5309615021,
                         'external_pages_to_subdomain': 5086141938,
                         'external_redirect_pages_to_page': 14,
                         'external_redirect_pages_to_root_domain': 143685025,
                         'external_redirect_pages_to_subdomain': 142061138,
                         'http_code': 200,
                         'indirect_root_domains_to_page': 2,
                         'indirect_root_domains_to_root_domain': 180014,
                         'last_crawled': '2023-01-14',
                         'link_propensity': 0.09679271281,
                         'nofollow_pages_from_page': 199,
                         'nofollow_pages_from_root_domain': 7541042,
                         'nofollow_pages_to_page': 1407,
                         'nofollow_pages_to_root_domain': 678014273,
                         'nofollow_pages_to_subdomain': 660443683,
                         'nofollow_root_domains_from_page': 93,
                         'nofollow_root_domains_from_root_domain': 564314,
                         'nofollow_root_domains_to_page': 58,
                         'nofollow_root_domains_to_root_domain': 186407,
                         'nofollow_root_domains_to_subdomain': 171632,
                         'page': 'github.com/mezod/awesome-indie',
                         'page_authority': 68,
                         'pages_crawled_from_root_domain': 7254823,
                         'pages_from_page': 202,
                         'pages_from_root_domain': 8613796,
                         'pages_to_page': 3746,
                         'pages_to_root_domain': 5628821927,
                         'pages_to_subdomain': 5352019489,
                         'redirect_pages_to_page': 14,
                         'redirect_pages_to_root_domain': 145613441,
                         'redirect_pages_to_subdomain': 142856036,
                         'root_domain': 'github.com',
                         'root_domains_from_page': 96,
                         'root_domains_from_root_domain': 702214,
                         'root_domains_to_page': 231,
                         'root_domains_to_root_domain': 2948013,
                         'root_domains_to_subdomain': 2857538,
                         'spam_score': 2,
                         'subdomain': 'github.com',
                         'title': 'GitHub - mezod/awesome-indie: Resources for '
                                  'independent developers to make money'},
              'target': {'deleted_pages_to_page': 169073,
                         'deleted_pages_to_root_domain': 19022927,
                         'deleted_pages_to_subdomain': 18554702,
                         'deleted_root_domains_to_page': 1457,
                         'deleted_root_domains_to_root_domain': 27522,
                         'deleted_root_domains_to_subdomain': 27273,
                         'domain_authority': 91,
                         'external_indirect_pages_to_root_domain': 45290099,
                         'external_nofollow_pages_to_page': 7388,
                         'external_nofollow_pages_to_root_domain': 17425478,
                         'external_nofollow_pages_to_subdomain': 17269297,
                         'external_pages_to_page': 553261,
                         'external_pages_to_root_domain': 69376449,
                         'external_pages_to_subdomain': 68746190,
                         'external_redirect_pages_to_page': 265,
                         'external_redirect_pages_to_root_domain': 41112725,
                         'external_redirect_pages_to_subdomain': 41109338,
                         'http_code': 200,
                         'indirect_root_domains_to_page': 2219,
                         'indirect_root_domains_to_root_domain': 28779,
                         'last_crawled': '2023-04-02',
                         'link_propensity': 0.008849279955,
                         'nofollow_pages_from_page': 0,
                         'nofollow_pages_from_root_domain': 209067,
                         'nofollow_pages_to_page': 7388,
                         'nofollow_pages_to_root_domain': 17442464,
                         'nofollow_pages_to_subdomain': 17285191,
                         'nofollow_root_domains_from_page': 0,
                         'nofollow_root_domains_from_root_domain': 55943,
                         'nofollow_root_domains_to_page': 1727,
                         'nofollow_root_domains_to_root_domain': 37789,
                         'nofollow_root_domains_to_subdomain': 37690,
                         'page': 'moz.com/blog',
                         'page_authority': 69,
                         'pages_crawled_from_root_domain': 7872618,
                         'pages_from_page': 7,
                         'pages_from_root_domain': 343751,
                         'pages_to_page': 906052,
                         'pages_to_root_domain': 98442581,
                         'pages_to_subdomain': 97352802,
                         'redirect_pages_to_page': 746,
                         'redirect_pages_to_root_domain': 47575576,
                         'redirect_pages_to_subdomain': 47570092,
                         'root_domain': 'moz.com',
                         'root_domains_from_page': 5,
                         'root_domains_from_root_domain': 69667,
                         'root_domains_to_page': 9712,
                         'root_domains_to_root_domain': 179884,
                         'root_domains_to_subdomain': 178649,
                         'spam_score': 1,
                         'subdomain': 'moz.com',
                         'title': 'The Moz Blog [SEO] - Moz'},
              'via_redirect': False,
              'via_rel_canonical': False}]}
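
Notice the "next_token" at the top of the response. Results that span more than one page hand back this token, and sending it with the next request fetches the next batch. A minimal pagination loop might look like the sketch below; the request fields shown ("target", "target_scope", "limit") are assumptions to verify against the docs.

import requests

AUTH = ("mozscape-xxxxxxxxxx", "your-secret-key")  # placeholder credentials
URL = "https://lsapi.seomoz.com/v2/links"

payload = {"target": "moz.com/blog", "target_scope": "page", "limit": 50}
all_links = []

while True:
    data = requests.post(URL, auth=AUTH, json=payload).json()
    all_links.extend(data.get("results", []))
    token = data.get("next_token")
    if not token:
        break                      # no more pages
    payload["next_token"] = token  # ask for the next page

print(f"Fetched {len(all_links)} links")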

10. Top Pages (top_pages)
This endpoint returns top pages on a target domain.

{'next_token': 'BXULGXd3IggK',
 'results': [{'deleted_pages_to_page': 1963527,
              'deleted_pages_to_root_domain': 19022927,
              'deleted_pages_to_subdomain': 18554702,
              'deleted_root_domains_to_page': 6527,
              'deleted_root_domains_to_root_domain': 27522,
              'deleted_root_domains_to_subdomain': 27273,
              'domain_authority': 91,
              'external_indirect_pages_to_root_domain': 45290099,
              'external_nofollow_pages_to_page': 9684724,
              'external_nofollow_pages_to_root_domain': 17425478,
              'external_nofollow_pages_to_subdomain': 17269297,
              'external_pages_to_page': 14981546,
              'external_pages_to_root_domain': 69376449,
              'external_pages_to_subdomain': 68746190,
              'external_redirect_pages_to_page': 3632556,
              'external_redirect_pages_to_root_domain': 41112725,
              'external_redirect_pages_to_subdomain': 41109338,
              'http_code': 200,
              'indirect_root_domains_to_page': 10580,
              'indirect_root_domains_to_root_domain': 28779,
              'last_crawled': '2023-04-01',
              'link_propensity': 0.008849279955,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 209067,
              'nofollow_pages_to_page': 9684724,
              'nofollow_pages_to_root_domain': 17442464,
              'nofollow_pages_to_subdomain': 17285191,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 55943,
              'nofollow_root_domains_to_page': 8749,
              'nofollow_root_domains_to_root_domain': 37789,
              'nofollow_root_domains_to_subdomain': 37690,
              'page': 'moz.com/',
              'page_authority': 74,
              'pages_crawled_from_root_domain': 7872618,
              'pages_from_page': 7,
              'pages_from_root_domain': 343751,
              'pages_to_page': 15343034,
              'pages_to_root_domain': 98442581,
              'pages_to_subdomain': 97352802,
              'redirect_pages_to_page': 3633007,
              'redirect_pages_to_root_domain': 47575576,
              'redirect_pages_to_subdomain': 47570092,
              'root_domain': 'moz.com',
              'root_domains_from_page': 5,
              'root_domains_from_root_domain': 69667,
              'root_domains_to_page': 41190,
              'root_domains_to_root_domain': 179884,
              'root_domains_to_subdomain': 178649,
              'spam_score': 1,
              'subdomain': 'moz.com',
              'title': 'Moz - SEO Software for Smarter Marketing'},
             {'deleted_pages_to_page': 185579,
              'deleted_pages_to_root_domain': 19022927,
              'deleted_pages_to_subdomain': 18554702,
              'deleted_root_domains_to_page': 2440,
              'deleted_root_domains_to_root_domain': 27522,
              'deleted_root_domains_to_subdomain': 27273,
              'domain_authority': 91,
              'external_indirect_pages_to_root_domain': 45290099,
              'external_nofollow_pages_to_page': 11211,
              'external_nofollow_pages_to_root_domain': 17425478,
              'external_nofollow_pages_to_subdomain': 17269297,
              'external_pages_to_page': 424268,
              'external_pages_to_root_domain': 69376449,
              'external_pages_to_subdomain': 68746190,
              'external_redirect_pages_to_page': 348,
              'external_redirect_pages_to_root_domain': 41112725,
              'external_redirect_pages_to_subdomain': 41109338,
              'http_code': 200,
              'indirect_root_domains_to_page': 1389,
              'indirect_root_domains_to_root_domain': 28779,
              'last_crawled': '2023-04-03',
              'link_propensity': 0.008849279955,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 209067,
              'nofollow_pages_to_page': 11211,
              'nofollow_pages_to_root_domain': 17442464,
              'nofollow_pages_to_subdomain': 17285191,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 55943,
              'nofollow_root_domains_to_page': 2487,
              'nofollow_root_domains_to_root_domain': 37789,
              'nofollow_root_domains_to_subdomain': 37690,
              'page': 'moz.com/beginners-guide-to-seo',
              'page_authority': 72,
              'pages_crawled_from_root_domain': 7872618,
              'pages_from_page': 7,
              'pages_from_root_domain': 343751,
              'pages_to_page': 786960,
              'pages_to_root_domain': 98442581,
              'pages_to_subdomain': 97352802,
              'redirect_pages_to_page': 365,
              'redirect_pages_to_root_domain': 47575576,
              'redirect_pages_to_subdomain': 47570092,
              'root_domain': 'moz.com',
              'root_domains_from_page': 5,
              'root_domains_from_root_domain': 69667,
              'root_domains_to_page': 15276,
              'root_domains_to_root_domain': 179884,
              'root_domains_to_subdomain': 178649,
              'spam_score': 1,
              'subdomain': 'moz.com',
              'title': "Beginner\'s Guide to SEO [plus FREE quick start "
                       'checklist] - Moz'},
             {'deleted_pages_to_page': 7159,
              'deleted_pages_to_root_domain': 19022927,
              'deleted_pages_to_subdomain': 18554702,
              'deleted_root_domains_to_page': 1382,
              'deleted_root_domains_to_root_domain': 27522,
              'deleted_root_domains_to_subdomain': 27273,
              'domain_authority': 91,
              'external_indirect_pages_to_root_domain': 45290099,
              'external_nofollow_pages_to_page': 8605,
              'external_nofollow_pages_to_root_domain': 17425478,
              'external_nofollow_pages_to_subdomain': 17269297,
              'external_pages_to_page': 34152,
              'external_pages_to_root_domain': 69376449,
              'external_pages_to_subdomain': 68746190,
              'external_redirect_pages_to_page': 70,
              'external_redirect_pages_to_root_domain': 41112725,
              'external_redirect_pages_to_subdomain': 41109338,
              'http_code': 200,
              'indirect_root_domains_to_page': 782,
              'indirect_root_domains_to_root_domain': 28779,
              'last_crawled': '2023-04-03',
              'link_propensity': 0.008849279955,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 209067,
              'nofollow_pages_to_page': 8754,
              'nofollow_pages_to_root_domain': 17442464,
              'nofollow_pages_to_subdomain': 17285191,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 55943,
              'nofollow_root_domains_to_page': 1380,
              'nofollow_root_domains_to_root_domain': 37789,
              'nofollow_root_domains_to_subdomain': 37690,
              'page': 'moz.com/google-algorithm-change',
              'page_authority': 70,
              'pages_crawled_from_root_domain': 7872618,
              'pages_from_page': 420,
              'pages_from_root_domain': 343751,
              'pages_to_page': 35181,
              'pages_to_root_domain': 98442581,
              'pages_to_subdomain': 97352802,
              'redirect_pages_to_page': 73,
              'redirect_pages_to_root_domain': 47575576,
              'redirect_pages_to_subdomain': 47570092,
              'root_domain': 'moz.com',
              'root_domains_from_page': 60,
              'root_domains_from_root_domain': 69667,
              'root_domains_to_page': 8881,
              'root_domains_to_root_domain': 179884,
              'root_domains_to_subdomain': 178649,
              'spam_score': 1,
              'subdomain': 'moz.com',
              'title': 'Moz - Google Algorithm Update History'},
             {'deleted_pages_to_page': 33133,
              'deleted_pages_to_root_domain': 19022927,
              'deleted_pages_to_subdomain': 18554702,
              'deleted_root_domains_to_page': 1192,
              'deleted_root_domains_to_root_domain': 27522,
              'deleted_root_domains_to_subdomain': 27273,
              'domain_authority': 91,
              'external_indirect_pages_to_root_domain': 45290099,
              'external_nofollow_pages_to_page': 31500,
              'external_nofollow_pages_to_root_domain': 17425478,
              'external_nofollow_pages_to_subdomain': 17269297,
              'external_pages_to_page': 70673,
              'external_pages_to_root_domain': 69376449,
              'external_pages_to_subdomain': 68746190,
              'external_redirect_pages_to_page': 77,
              'external_redirect_pages_to_root_domain': 41112725,
              'external_redirect_pages_to_subdomain': 41109338,
              'http_code': 301,
              'indirect_root_domains_to_page': 315,
              'indirect_root_domains_to_root_domain': 28779,
              'last_crawled': '2023-04-02',
              'link_propensity': 0.008849279955,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 209067,
              'nofollow_pages_to_page': 31628,
              'nofollow_pages_to_root_domain': 17442464,
              'nofollow_pages_to_subdomain': 17285191,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 55943,
              'nofollow_root_domains_to_page': 1689,
              'nofollow_root_domains_to_root_domain': 37789,
              'nofollow_root_domains_to_subdomain': 37690,
              'page': 'moz.com/researchtools/ose/',
              'page_authority': 70,
              'pages_crawled_from_root_domain': 7872618,
              'pages_from_page': 0,
              'pages_from_root_domain': 343751,
              'pages_to_page': 344305,
              'pages_to_root_domain': 98442581,
              'pages_to_subdomain': 97352802,
              'redirect_pages_to_page': 78,
              'redirect_pages_to_root_domain': 47575576,
              'redirect_pages_to_subdomain': 47570092,
              'root_domain': 'moz.com',
              'root_domains_from_page': 0,
              'root_domains_from_root_domain': 69667,
              'root_domains_to_page': 8086,
              'root_domains_to_root_domain': 179884,
              'root_domains_to_subdomain': 178649,
              'spam_score': 1,
              'subdomain': 'moz.com',
              'title': ''},
             {'deleted_pages_to_page': 169073,
              'deleted_pages_to_root_domain': 19022927,
              'deleted_pages_to_subdomain': 18554702,
              'deleted_root_domains_to_page': 1457,
              'deleted_root_domains_to_root_domain': 27522,
              'deleted_root_domains_to_subdomain': 27273,
              'domain_authority': 91,
              'external_indirect_pages_to_root_domain': 45290099,
              'external_nofollow_pages_to_page': 7388,
              'external_nofollow_pages_to_root_domain': 17425478,
              'external_nofollow_pages_to_subdomain': 17269297,
              'external_pages_to_page': 553261,
              'external_pages_to_root_domain': 69376449,
              'external_pages_to_subdomain': 68746190,
              'external_redirect_pages_to_page': 265,
              'external_redirect_pages_to_root_domain': 41112725,
              'external_redirect_pages_to_subdomain': 41109338,
              'http_code': 200,
              'indirect_root_domains_to_page': 2219,
              'indirect_root_domains_to_root_domain': 28779,
              'last_crawled': '2023-04-02',
              'link_propensity': 0.008849279955,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 209067,
              'nofollow_pages_to_page': 7388,
              'nofollow_pages_to_root_domain': 17442464,
              'nofollow_pages_to_subdomain': 17285191,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 55943,
              'nofollow_root_domains_to_page': 1727,
              'nofollow_root_domains_to_root_domain': 37789,
              'nofollow_root_domains_to_subdomain': 37690,
              'page': 'moz.com/blog',
              'page_authority': 69,
              'pages_crawled_from_root_domain': 7872618,
              'pages_from_page': 7,
              'pages_from_root_domain': 343751,
              'pages_to_page': 906052,
              'pages_to_root_domain': 98442581,
              'pages_to_subdomain': 97352802,
              'redirect_pages_to_page': 746,
              'redirect_pages_to_root_domain': 47575576,
              'redirect_pages_to_subdomain': 47570092,
              'root_domain': 'moz.com',
              'root_domains_from_page': 5,
              'root_domains_from_root_domain': 69667,
              'root_domains_to_page': 9712,
              'root_domains_to_root_domain': 179884,
              'root_domains_to_subdomain': 178649,
              'spam_score': 1,
              'subdomain': 'moz.com',
              'title': 'The Moz Blog [SEO] - Moz'}]}

11. URL Metrics (url_metrics)
Use this endpoint to get metrics about one or more urls.

{'results': [{'deleted_pages_to_page': 1963527,
              'deleted_pages_to_root_domain': 19022927,
              'deleted_pages_to_subdomain': 18554702,
              'deleted_root_domains_to_page': 6527,
              'deleted_root_domains_to_root_domain': 27522,
              'deleted_root_domains_to_subdomain': 27273,
              'domain_authority': 91,
              'external_indirect_pages_to_root_domain': 45290099,
              'external_nofollow_pages_to_page': 9684724,
              'external_nofollow_pages_to_root_domain': 17425478,
              'external_nofollow_pages_to_subdomain': 17269297,
              'external_pages_to_page': 14981546,
              'external_pages_to_root_domain': 69376449,
              'external_pages_to_subdomain': 68746190,
              'external_redirect_pages_to_page': 3632556,
              'external_redirect_pages_to_root_domain': 41112725,
              'external_redirect_pages_to_subdomain': 41109338,
              'http_code': 200,
              'indirect_root_domains_to_page': 10580,
              'indirect_root_domains_to_root_domain': 28779,
              'last_crawled': '2023-04-01',
              'link_propensity': 0.008849279955,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 209067,
              'nofollow_pages_to_page': 9684724,
              'nofollow_pages_to_root_domain': 17442464,
              'nofollow_pages_to_subdomain': 17285191,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 55943,
              'nofollow_root_domains_to_page': 8749,
              'nofollow_root_domains_to_root_domain': 37789,
              'nofollow_root_domains_to_subdomain': 37690,
              'page': 'moz.com/',
              'page_authority': 74,
              'pages_crawled_from_root_domain': 7872618,
              'pages_from_page': 7,
              'pages_from_root_domain': 343751,
              'pages_to_page': 15343034,
              'pages_to_root_domain': 98442581,
              'pages_to_subdomain': 97352802,
              'redirect_pages_to_page': 3633007,
              'redirect_pages_to_root_domain': 47575576,
              'redirect_pages_to_subdomain': 47570092,
              'root_domain': 'moz.com',
              'root_domains_from_page': 5,
              'root_domains_from_root_domain': 69667,
              'root_domains_to_page': 41190,
              'root_domains_to_root_domain': 179884,
              'root_domains_to_subdomain': 178649,
              'spam_score': 1,
              'subdomain': 'moz.com',
              'title': 'Moz - SEO Software for Smarter Marketing'},
             {'deleted_pages_to_page': 249094,
              'deleted_pages_to_root_domain': 224212706,
              'deleted_pages_to_subdomain': 898844,
              'deleted_root_domains_to_page': 3696,
              'deleted_root_domains_to_root_domain': 177001,
              'deleted_root_domains_to_subdomain': 9251,
              'domain_authority': 95,
              'external_indirect_pages_to_root_domain': 156562794,
              'external_nofollow_pages_to_page': 163849,
              'external_nofollow_pages_to_root_domain': 72093550,
              'external_nofollow_pages_to_subdomain': 294697,
              'external_pages_to_page': 1165187,
              'external_pages_to_root_domain': 514661963,
              'external_pages_to_subdomain': 2310818,
              'external_redirect_pages_to_page': 3049,
              'external_redirect_pages_to_root_domain': 4827448,
              'external_redirect_pages_to_subdomain': 8140,
              'http_code': 301,
              'indirect_root_domains_to_page': 1439,
              'indirect_root_domains_to_root_domain': 30315,
              'last_crawled': '2023-03-31',
              'link_propensity': 0.02704063244,
              'nofollow_pages_from_page': 0,
              'nofollow_pages_from_root_domain': 97163,
              'nofollow_pages_to_page': 163881,
              'nofollow_pages_to_root_domain': 72644206,
              'nofollow_pages_to_subdomain': 294765,
              'nofollow_root_domains_from_page': 0,
              'nofollow_root_domains_from_root_domain': 22711,
              'nofollow_root_domains_to_page': 5647,
              'nofollow_root_domains_to_root_domain': 178651,
              'nofollow_root_domains_to_subdomain': 11590,
              'page': 'nytimes.com/',
              'page_authority': 82,
              'pages_crawled_from_root_domain': 13567138,
              'pages_from_page': 0,
              'pages_from_root_domain': 3152122,
              'pages_to_page': 1170498,
              'pages_to_root_domain': 763781494,
              'pages_to_subdomain': 2489707,
              'redirect_pages_to_page': 3053,
              'redirect_pages_to_root_domain': 9268395,
              'redirect_pages_to_subdomain': 14273,
              'root_domain': 'nytimes.com',
              'root_domains_from_page': 0,
              'root_domains_from_root_domain': 366864,
              'root_domains_to_page': 25307,
              'root_domains_to_root_domain': 2200598,
              'root_domains_to_subdomain': 62699,
              'spam_score': 1,
              'subdomain': 'nytimes.com',
              'title': ''}]}
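
The response above covers two URLs in one call because the endpoint accepts a batch of targets. A small sketch with placeholder credentials, assuming the request body takes a "targets" list:

import requests

response = requests.post(
    "https://lsapi.seomoz.com/v2/url_metrics",
    auth=("mozscape-xxxxxxxxxx", "your-secret-key"),  # placeholder credentials
    json={"targets": ["moz.com", "nytimes.com"]},     # one result row per URL
)
for row in response.json()["results"]:
    print(row["page"], "DA:", row["domain_authority"], "PA:", row["page_authority"])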

12. Usage Data (usage_data)
This endpoint returns the number of rows consumed so far in the current billing period. The count might not reflect rows consumed in the last hour, and it covers requests to both the v1 (Moz Links API) and v2 Links APIs.

{'rows_consumed': 254}

source https://moz.com/blog/moz-links-api-touch-every-endpoint

Categories
Digital Marketing

Easy to Implement Tactics for Local Link Building — Whiteboard Friday

Google’s local algorithm demands different tactics for link building. And even if you don’t need local SEO, local link building strategies can still give you a different perspective and improve your work. In today’s episode, Greg walks you through some of these strategies.

Easy to Implement Tactics for Local Link Building

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

♪ [music] ♪ Howdy, Moz fans. I’m Greg Gifford, the COO of SearchLab Digital, and I’m back to do another Whiteboard Friday. Today we’re talking tactics for local links. Get it, huh? The important thing to remember is that Google’s local algorithm is different, so you need to approach local link building differently.

Even if you don’t need local SEO, local link-building tactics can still give you a different perspective, and we all know that with a different perspective, you can do better work. Whoa. So the most important thing to understand is that the best links are based on relationships that you have in the real world. They’re easier to get and they’re more powerful.

The other thing to remember is that the easiest way to get local links is to turn the clock back and do the things businesses used to do to get exposure in the local community, the things everybody kind of stopped doing once we had the internet. So if you just get involved in the local community, you’ll naturally acquire a lot of amazing local links. Remember, checklist link building rarely works.

That’s not what I wrote, but I wasn’t watching. But yeah, checklist link building doesn’t really work. You have to be original. You need unique links to win so that you’ve got links that your competitors don’t have. So don’t follow a checklist. Think outside the box. Be creative.

So here are a couple of ideas and tactics that I want to run you through. Sponsorships are great. Now, I should also remind you that Google is basically pattern detection, so most importantly, remember that when you look at your link profile, you can’t just do one or two things. You have to have a natural mix. A lot of people avoid sponsorships, though, because they think you’re really buying links, and obviously buying links is bad.

But Google is totally okay with you buying a sponsorship that results in a link. So Little League, peewee hockey, peewee football, 5Ks, golf tournaments, these are all great things that a lot of businesses are already doing. So find those easy, cheap, awesome local sponsorship opportunities. Knock those out.

Charities are great too. So you’re not sponsoring an event, you’re giving money or donating time to a local charity. That’s an awesome link opportunity as well. Volunteer opportunities are great as well. You know, you’re taking your team down to feed the homeless at the soup kitchen or anything like that, or highway clean-ups. Things like that are awesome local link opportunities. Local meetups are one of my favorite things to do.

So you want to go to a site like meetup.com, and there are a couple of different tactics you can use here. First of all, if you’ve got some sort of meeting room, board room, or conference room situation and you’re not using it all the time, go on meetup.com and look for local groups that meet on a regular basis. Let’s say a group meets on the second Monday of every month at 7:00 p.m., but your office closes at 6:00.

So you could offer them your conference room or your meeting area. They’ve got free Wi-Fi, they’ve got an awesome big TV they can connect to, and cool, boom, you get a local link out of it. Now, if you don’t have a space you can offer to those local groups, again, get on meetup.com or Facebook groups or whatever and look for local groups that are looking for meeting sponsors. Forty, fifty, sixty bucks a month buys their soft drinks and snacks, and killer local links come your way.

Blogs are great too. Find the local bloggers and get them to write about you. Now, obviously, the blog is going to say, “Hey, I was given a free widget to write about Greg’s widget company,” but who cares? You still get an awesome link from a local blogger, and even if it’s something that seems like it may not be related to what you do, it’s still okay because that blogger is in the local area.

So even if it’s a food blogger or a travel blogger and you give them a free widget to talk about your widget company, doesn’t matter. You still get an awesome local link. Local business associations, it’s a no-brainer. Better Business Bureau is an amazing one. In fact, Google has just added the Better Business Bureau link into your verification process if you’re trying to get reinstated in your Google Business profile.

But that’s a whole nother Whiteboard Friday that we’ll cover later. But join all of those local business associations. They’re a no-brainer. Local business directories, couldn’t fit that in there, but local business directories are great as well. Join all of them that you can. Even if it’s a little bit of a fee to join that business directory, it’s still a really killer list of really relevant local links. Local calendar pages are another thing that a lot of businesses don’t really think about.

There are always different websites or organizations in the local area, and a lot of times it’s the city government site, the local newspaper, or a TV station that has a calendar page of community events. You can’t get on there unless you have something event-worthy, but if you’ve got a sales event or a barbecue or some sort of cookout like that, you get a local link back to your site.

You also want to think outside the box. Don’t do the same thing that everybody else does. You don’t do the same SEO for every client, so you shouldn’t do the same link building for every client. So a couple of outside-the-box things. If you work with car dealerships or pediatricians or even personal injury attorneys, child seat installation is a killer tactic.

You can have the staff of that location go to a two-day safety course. You can sign up at cert.safekids.org. I didn’t have room to get it all on there. But you sit through the safety class and then you are officially certified to be a place that anyone in the community can come for free to get their child seat installed correctly.

Because guess what, 99% of us are installing our child seats incorrectly. So if there was an accident, our kids would get injured. So this really makes sense if it’s something related to kids or if it’s something like personal injury or doctors, really killer. If the business owner or leadership of the business is a member of a certain ethnic group, there are ethnic business directories in every city in the country.

So let’s say you have an Iranian dry cleaner. I’m pulling that out of the air. There will be a list of Iranian-owned businesses in that area. You can get on that list, and the competitors can’t. It’s killer. Or if someone is LGBTQIA+, that’s a link you can get that again competitors can’t get, and there are LGBTQIA+ directories in every city that you can get on.

So go look for those if it’s applicable to your business. This last one is a really kind of wacky one that people don’t really understand the first time I mention it. This isn’t about the clubs and organizations that your business is a member of. You want to talk to the staff at your client’s business or at your business if you’re in-house, especially leadership of the organization, and find out what they’re passionate about.

What do they do in their free time that they really enjoy? What are their hobbies? Because if they’re a member of a local club or organization, especially if they’re so involved that they’re on its leadership board, guess what, it’s going to be super easy to get your business linked from that organization’s website.

This one here, a lot of people are going to laugh at this because it’s an old-school technique, but it really works. If you periodically, I’m not saying every week, but periodically you write a local informational blog post of like, “Hey, our staff loves to go out and grab barbecue every Friday and here are the five best barbecue places in Dallas, according to our staff,” once you’ve got that blog post up, you can then do outreach to each of those five locations and say, “Hey, we listed you as one of the five best places in Dallas.”

Even if it’s not related to the business, this is the kind of stuff that shows up in search results. So you get surfaced and you get eyeballs on your business. So it’s great for branding. It’s kind of that billboard effect, and it gets killer links. The really awesome part that people don’t really think about is most of the time, when you’re getting these links, you’re dealing with people that aren’t that technically savvy.

So yeah, sure, the people that are technically savvy are going to link to that specific local blog post. But the people that aren’t, they’re going to link to your homepage. So it’s really killer. So a cool story because everybody that knows me knows I’m a story guy. Several years ago, I was speaking at a conference in Vegas and a lawyer came up and he said, “Hey, man, I know a lot about SEO and I need your help.”

I said, “Okay, let me help you.” He said, “I’ve got three times as many links as this other attorney in town, yet he outranks me on every single query possible.” I said, “Okay.” The guy again said, “I know SEO, so I should be winning.” Well, obviously there’s a lot more than just links at play. So what I did, I built a little spreadsheet and I graphed out the link profile of this guy versus his competitor based on Domain Authority.

Stick with me, though. It doesn’t matter how many links they had, it’s just based on the authority. So we see here the guy in red is the guy that I was talking to. So the guy in red has three times as many links as the guy in blue. So when I graph it out, this one right here is Domain Authority of above 50. This one right here is Domain Authority of 11 to 50, and this one right here is Domain Authority below 10.

You can see 67% of the guy in blue’s links are above a 50, and 53%, or almost 60%, of the guy in red’s links (he was the guy asking me for help) were below 50. I said, “All right, there’s a story here. Let’s break it down a little bit more.” So this is 91 to 100. This is 71 to 90, 51 to 70, 26 to 50, and 0 to 25.

You can see right here that a massive section, like 65%, of the guy in blue’s links are above a Domain Authority of 70, versus basically 60% of the guy in red’s links sitting really low, below a DA of 50. Now, the guy in red wasn’t doing local SEO.

Even though the guy in red had three times as many links, they all skewed towards the bottom of the authority range because he was getting really horrible links from like seodirectory.com and linksxyz.com, and stupid things like that that wouldn’t matter. The important thing to understand is if the guy in red had been doing local SEO and getting local links, this graph would look the same way because local links skew towards lower authority.

But the guy in red would have been destroying the guy in blue if he’d been getting local links. So that’s the way you need to change your perspective and think about it differently. If you’d like to run the same analysis yourself, you can with my Badass Link Worksheet. You can download it right here at bit.ly/gregs-bad-ass-link-sheet. That is all lowercase when you type it in.

So make sure you do that, or I’m sure they’ll drop it down in the blog post or in the comments so that you can click on it there. You can download that sheet. It’s an Excel spreadsheet. It does have macros enabled to basically make it easy to clear information out. It’s set to work with your Moz link export. You need last month’s link export, this month’s link export, and your competitor’s link export for this month.

You drop those in, automatically it’s going to graph out all of this stuff. On another tab, it’s going to give you a list of all of the link gap opportunities where your competitors have links and you don’t. Then on another tab, it’s going to give you a list of all the links that you’ve lost since the last time you did it. Now, yeah, there’s some link tools out there that do it too, but this is a really easy tool.

It’s super killer, and I’m sharing it with you guys today for free. Thank you so much for watching my newest episode of Whiteboard Friday.
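
For readers who would rather script a quick version of the DA breakdown Greg describes than use the spreadsheet, here is a rough Python sketch of the same bucketing idea. The "Domain Authority" column name is an assumption about the Moz link export format, so adjust it to match your file.

import pandas as pd

# Load a Moz link export; the "Domain Authority" column name is assumed.
links = pd.read_csv("link_export.csv")

# The DA buckets from the whiteboard: 0-25, 26-50, 51-70, 71-90, 91-100.
bins = [0, 25, 50, 70, 90, 100]
labels = ["0-25", "26-50", "51-70", "71-90", "91-100"]
links["da_bucket"] = pd.cut(links["Domain Authority"], bins=bins,
                            labels=labels, include_lowest=True)

# Percentage of links per bucket; run it on a competitor's export to compare.
print(links["da_bucket"].value_counts(normalize=True).sort_index() * 100)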

Video transcription by Speechpad.com

source https://moz.com/blog/easy-local-link-building-tactics-whiteboard-friday

Categories
Digital Marketing

How Social Media Can Supercharge Your SEO

When working in social media, it can feel like you exist worlds away from SEO. And as an SEO, social media may feel like something that isn’t quite relevant in your day to day. But as with all things marketing, both of these digital marketing tactics have the potential to boost collective success. As a Social Media Manager, I’m here to tell you how you as an SEO can collaborate with your social media team in order to help supercharge your SEO efforts.

What is a social media strategy?

A social media strategy is a document that outlines your organization’s social media goals, along with how you will achieve them, both through top-level strategy and on-the-ground tactics (i.e., what you actually do). A strategy is the foundation of how your organization approaches being on social media.

Social media vs. search engine optimization

Social media involves owning accounts and having an active presence on social media channels like Twitter, Instagram, Facebook, LinkedIn, TikTok, and YouTube, with the goal of driving brand awareness and engagement, or increasing traffic and conversions. On the other hand, search engine optimization (SEO) is a set of practices designed to improve the appearance and positioning of web pages in organic search results, resulting in increased website traffic and exposure to your brand.

Do links from social media improve your SEO?

Links from popular social media platforms such as Facebook are “no-follow” links, meaning they do not pass link authority, or PageRank (Google’s algorithm that ranks web pages based on the quantity and quality of external backlinks), directly to your site. However, gaining no-follow links from high-quality domains is still extremely important.

In the past, marketers ignored “no-follow” links, as they did not have any impact on organic ranking, but the “no-follow” attribute isn’t completely useless. A well-balanced backlink profile consisting of both followed and no-followed links will appear more natural to Google and other search engines.

Another benefit of “no-follow” links is the referral traffic they can provide. Although search engines will not follow links with the HTML “no-follow” attribute attached, users can still click them to reach your site, giving you more traffic!

While no-follow links do not provide the same boost to your site’s backlink profile as followed links, Google still likes to see them as a part of your site’s backlink profile, and they offer a valuable source of referral traffic.

The SEO benefits of increased brand awareness

The primary SEO benefit of brand awareness that your social media strategy can drive is the boost you can see in “branded” organic search volume and clicks.

Not every user encountering your brand on their Instagram or TikTok feed will click through to your site — in fact, most won’t. Most people will mentally file away your brand name and products only to perform a Google search for your company name or products after the fact, i.e. a branded search. This is especially true if your social messaging is solid and memorable.

For many sites, especially newer ones, a branded search can represent a large portion of your organic traffic.

5 ways social media can improve your SEO

There are five ways that a robust social media presence can help improve your SEO:

Amplify website content through social channels to reach new audiences

Your website content may be great, but you need to drive eyes to it somehow! Sharing your content, like blogs or guides, on social media is a win-win-win:

  • You’re building positive brand sentiment by providing content that answers people’s questions.

  • You’re driving more users to your website.

  • A positive response to your content on social media sends signals to the social algorithms, which then often surface it to new people.

One way we do this at Moz is with this very blog! Anything the Moz Blog publishes is promoted on our social media channels, which not only drives traffic but puts valuable content right in front of our audience for them to get immediate insights from.

Create and share infographics in social posts and blog articles

In my experience, people love nothing more on social media than a classic infographic. Sharing information in bite-sized, colorful, and visually appealing ways will result in shares, engagement, and traffic to your website. Plus, they’re versatile — include them in your blogs, and you can use them on your social media posts! Every Whiteboard Friday episode that we publish here at Moz gets its own accompanying infographic. This is a great way to resurface a well-loved episode, and give people more value up front.

Build relationships with customers

One of the core tenets of social media is that it’s a two-way street. As you get started, you as a brand need to provide valuable content to your audience without asking them for anything in return. Once you’ve cultivated goodwill with your audience, you now have a relationship in which you provide value, build that favorable currency, and then you’re able to cash in on it in exchange for traffic or follow-throughs on your CTAs.

While our social media philosophy is that everything we put on social media has some form of value to our audience, we also make it a point to create content that doesn’t explicitly ask for anything, like clicking links or purchasing our product. Sometimes that’s providing them with information, and sometimes that can look like making them laugh.

Optimize your profiles on social channels and lead audiences toward your website

A simple but effective way to lead audiences to your website is to make it easy to get to! Ensure you optimize your social channels and keep a link to your website in each profile. If you need to house multiple links, use a “link in bio” service, but always make sure a quick shortcut to your website stays front and center.

This strategy is something we use on our Instagram. Instead of constantly changing the link based on what we’re promoting that day or just wasting the opportunity the link in bio provides, we have a link in bio tool through Sprout Social that lets us showcase all the links that are tied to each of our posts.

Target users who are more likely to convert to your site. Conversion and engagement metrics are great for SEO!

With social media, you should always know who you’re trying to reach and how you’re going to do so. One audience you should target on social media is people you know are ready to convert. Have different posts for different audiences as a part of your content mix, and include more mature leads further down the funnel. These become easy wins because they convert and engage once they hit the website, which is helpful for SEO metrics.

We know that the majority of people are coming to Moz for beginner SEO education, so we make it a point to really highlight those resources, such as our Beginner’s Guide to SEO or our How to Rank Checklist, knowing they will always see a lot of traffic and engagement.

Build relationships between your social media and SEO teams

A strong relationship between your social media and SEO teams is crucial. You can trade information about high-performing topics that can inform strategy on both sides or allow you to make reactive changes to your tactics based on opportunities. Schedule a monthly one-on-one with your respective counterpart in your organization to connect and fill each other in on pertinent information.

With this information, you’re now armed to go out and make this happen for yourself! Take this as an opportunity to connect with your social media team and find new and innovative ways to collaborate and drive results for both social media and SEO.

source https://moz.com/blog/how-social-media-can-supercharge-your-seo

Categories
Digital Marketing

The Moz Links API: An Introduction

What exactly IS an API? They’re those things you copy and paste long strange codes into Screaming Frog for, to get links data on a Site Crawl, right?

I’m here to tell you there’s so much more to them than that – if you’re willing to take just a few little steps. But first, some basics.

What’s an API?

API stands for “application programming interface”, and it’s just the way of… using a thing. Everything has an API. The web is a giant API that takes URLs as input and returns pages.

But special data services like the Moz Links API have their own set of rules. These rules vary from service to service and can be a major stumbling block for people taking the next step.

When Screaming Frog gives you the extra links columns in a crawl, it’s using the Moz Links API, but you can have this capability anywhere. For example, all that tedious manual stuff you do in spreadsheet environments can be automated from data-pull to formatting and emailing a report.

If you take this next step, you can be more efficient than your competitors, designing and delivering your own SEO services instead of relying upon, paying for, and being limited by the next proprietary product integration.

GET vs. POST

Most APIs you’ll encounter use the same data transport mechanism as the web. That means there’s a URL involved just like a website. Don’t get scared! It’s easier than you think. In many ways, using an API is just like using a website.

As with loading web pages, the data of the request may be in one of two places: the URL itself, or in the body of the request. The URL is called the “endpoint” and the often invisibly submitted extra part of the request is called the “payload” or “data”. When the data is in the URL, it’s called a “query string” and indicates the “GET” method is used. You see this all the time when you search:

https://www.google.com/search?q=moz+links+api <-- GET method 

When the data of the request is hidden, it’s called a “POST” request. You see this when you submit a form on the web and the submitted data does not show on the URL. When you hit the back button after such a POST, browsers usually warn you against double-submits. The reason the POST method is often used is that you can fit a lot more in the request using the POST method than the GET method. URLs would get very long otherwise. The Moz Links API uses the POST method.
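To make the difference concrete, here’s a minimal sketch using Python’s requests library (introduced below); the second URL is just a placeholder, not a real API:

import requests

# GET: the data rides along in the URL as a query string
response = requests.get("https://www.google.com/search", params={"q": "moz links api"})

# POST: the data travels in the body of the request, hidden from the URL
response = requests.post("https://example.com/api", data={"q": "moz links api"})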

Making requests

A web browser is what traditionally makes requests of websites for web pages. The browser is a type of software known as a client. Clients are what make requests of services. More than just browsers can make requests. The ability to make client web requests is often built into programming languages like Python, or can be broken out as a standalone tool. The most popular tools for making requests outside a browser are curl and wget.

We are discussing Python here. Python has a built-in library called urllib, but it’s designed to handle so many different types of requests that it’s a bit of a pain to use. There are other libraries that are more specialized for making requests of APIs. The most popular for Python is called requests. It’s so popular that it’s used for almost every Python API tutorial you’ll find on the web. So I will use it too. This is what “hitting” the Moz Links API looks like:

response = requests.post(endpoint, data=json_string, auth=auth_tuple)

Given that everything was set up correctly (more on that soon), this will produce the following output:

{'next_token': 'JYkQVg4s9ak8iRBWDiz1qTyguYswnj035nqrQ1oIbW96IGJsb2dZgGzDeAM7Rw==',
 'results': [{'anchor_text': 'moz',
              'external_pages': 7162,
              'external_root_domains': 2026}]}

This is JSON data. It’s contained within the response object that was returned from the API. It’s not on the drive or in a file. It’s in memory. So long as it’s in memory, you can do stuff with it (often just saving it to a file).

If you wanted to grab a piece of data within such a response, you could refer to it like this:

response.json()['results'][0]['external_pages']

This says: “Parse the response’s JSON body, give me the first item in the results list, and then give me the external_pages value from that item.” The result would be 7162.

NOTE: If you’re actually following along executing code, the above line won’t work alone. There’s a certain amount of setup we’ll do shortly, including installing the requests library and setting up a few variables. But this is the basic idea.
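Spelled out step by step, and assuming the request above succeeded, the same lookup looks like this:

data = response.json()  # parse the JSON body of the response into a Python dict
external_pages = data['results'][0]['external_pages']
print(external_pages)  # 7162, per the example output above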

JSON

JSON stands for JavaScript Object Notation. It’s a way of representing data that’s easy for humans to read and write, and just as easy for computers to parse and generate. It has become the dominant data format for APIs, largely because the older ways were too difficult for most people to use. Some people might call this part of the “RESTful” API movement, but the much more difficult XML format is also considered “RESTful”, and everyone seems to have their own interpretation. Consequently, I find it best to just focus on JSON and how it gets in and out of Python.

Python dictionaries

I lied to you. I said that the data structure you were looking at above was JSON. Technically, it’s really a Python dictionary, or dict datatype object. It’s a special kind of object in Python that’s designed to hold key/value pairs. The keys are strings and the values can be any type of object. The keys are like the column names in a spreadsheet, and the values are like the cells. In this way, you can think of a Python dict as a JSON object. For example, here’s how you create a dict in Python:

my_dict = {
    "name": "Mike",
    "age": 52,
    "city": "New York"
}

And here is the equivalent in JavaScript:

var my_json = {
    "name": "Mike",
    "age": 52,
    "city": "New York"
}

Pretty much the same thing, right? Look closely. Key names and string values get double quotes. Numbers don’t. These rules apply consistently between JSON and Python dicts. So, as you might imagine, it’s easy for JSON data to flow in and out of Python. This is a great gift that has made modern API work highly accessible to beginners through Jupyter Notebooks, a tool that has revolutionized the field of data science and is making inroads into marketing.

Flattening data

But beware! As data flows between systems, it’s not uncommon for the data to subtly change. For example, the JSON data above might be converted to a string. Strings might look exactly like JSON, but they’re not. They’re just a bunch of characters. Sometimes you’ll hear it called “serializing”, or “flattening”. It’s a subtle point, but worth understanding as it will help with one of the largest stumbling blocks with the Moz Links (and most JSON) APIs.

Objects have APIs

Actual JSON or dict objects have their own little APIs for accessing the data inside of them. The ability to use these JSON and dict APIs goes away when the data is flattened into a string, but it will travel between systems more easily, and when it arrives at the other end, it will be “deserialized” and the API will come back on the other system.

Data flowing between systems

This is the concept of portable, interoperable data. Back when it was called Electronic Data Interchange (or EDI), it was a very big deal. Then along came the web and then XML and then JSON and now it’s just a normal part of doing business.

If you’re in Python and you want to convert a dict to a flattened JSON string, you do the following:

import json

my_dict = {
    "name": "Mike",
    "age": 52,
    "city": "New York"
}

json_string = json.dumps(my_dict)

…which would produce the following output:

'{"name": "Mike", "age": 52, "city": "New York"}'

This looks almost the same as the original dict, but if you look closely you can see that single-quotes are used around the entire thing. Another obvious difference is that you can line-wrap real structured data for readability without any ill effect. You can’t do it so easily with strings. That’s why it’s presented all on one line in the above snippet.

Such stringifying processes are done when passing data between different systems because they are not always compatible. Normal text strings, on the other hand, are compatible with almost everything and can be passed on web requests with ease. Such flattened strings of JSON data are frequently referred to as the request payload.

Anatomy of a request

Again, here’s the example request we made above:

response = requests.post(endpoint, data=json_string, auth=auth_tuple)

Now that you understand what the variable name json_string is telling you about its contents, you shouldn’t be surprised to see this is how we populate that variable:

data_dict = {
    "target": "moz.com/blog",
    "scope": "page",
    "limit": 1
}

json_string = json.dumps(data_dict)

…and the contents of json_string looks like this:

'{"target": "moz.com/blog", "scope": "page", "limit": 1}'

This was one of my key discoveries in learning the Moz Links API. It has this in common with countless other APIs out there, but it trips me up every time because it’s so much more convenient to work with structured dicts than flattened strings. However, most APIs expect the data to be a string for portability between systems, so we have to convert it at the last moment before the actual API call occurs.

Pythonic loads and dumps

Now you may be wondering what a “dump” is doing in the middle of that code. The json.dumps() function is called a “dumper” because it takes a Python object and dumps it into a string. The json.loads() function is called a “loader” because it takes a string and loads it into a Python object.

What appear to be singular and plural options are actually file and string options. If your data is in a file (or file-like object), you use json.load() and json.dump(). If your data is a string, you use json.loads() and json.dumps(). The s stands for string; leaving the s off means the function reads from or writes to a file.
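Here’s a quick sketch of all four, using an in-memory file object so it runs anywhere:

import io
import json

my_dict = {"name": "Mike"}

json_string = json.dumps(my_dict)   # dict -> JSON string
my_dict2 = json.loads(json_string)  # JSON string -> dict

file_obj = io.StringIO()            # an in-memory stand-in for a real file
json.dump(my_dict, file_obj)        # dict -> file
file_obj.seek(0)
my_dict3 = json.load(file_obj)      # file -> dict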

Don’t let anybody tell you Python is perfect. It’s just that its rough edges are not excessively objectionable.

Assignment vs. equality

For those of you completely new to Python or programming in general, what we’re doing when we hit the API is called an assignment. The result of requests.post() is being assigned to the variable named response.

response = requests.post(endpoint, data=json_string, auth=auth_tuple)

We are using the = sign to assign the value of the right side of the equation to the variable on the left side of the equation. The variable response is now a reference to the object that was returned from the API. Assignment is different from equality. The == sign is used for equality.

# This is assignment:
a = 1  # a is now equal to 1

# This is equality:
a == 1  # True, but only if the above line has been executed

The POST method

response = requests.post(endpoint, data=json_string, auth=auth_tuple)

The requests library has a function called post(). Here we’re calling it with three arguments: the URL of the endpoint, the data to send to the endpoint, and the authentication information to send along with the request.

Keyword parameters and their arguments

You may notice that some of the arguments to the post() function have names, and that names are set equal to values using the = sign. Here’s roughly how Python functions get defined. The first parameter is positional, both because it comes first and because there’s no keyword. Keyword arguments come after position-dependent arguments. Trust me, it all makes sense after a while. We all start to think like Guido van Rossum.

def arbitrary_function(argument1, name="default"):
    # do stuff with argument1 and name
    pass

The name in the above example is called a “keyword” and the values that come in at those locations are called “arguments”. Arguments are assigned to parameter names right in the function definition, so you can refer to either argument1 or name anywhere inside this function, as the small example below shows. If you’d like to learn more about the rules of Python functions, you can read about them here.
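Here’s a tiny usage example (the function and its names are made up purely for illustration):

def greet(name, greeting="Hello"):
    # name is positional; greeting is a keyword parameter with a default value
    return greeting + ", " + name + "!"

greet("Moz")                    # returns "Hello, Moz!"
greet("Moz", greeting="Howdy")  # returns "Howdy, Moz!"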

Setting up the request

Okay, so let’s set you up with everything necessary for that success-assured moment. We’ve been showing the basic request:

response = requests.post(endpoint, data=json_string, auth=auth_tuple)

…but we haven’t shown everything that goes into it. Let’s do that now. If you’re following along and don’t have the requests library installed, you can install it with the following command from the same terminal environment from which you run Python:

pip install requests

Oftentimes Jupyter will have the requests library installed already, but in case it doesn’t, you can install it with the following command from inside a Notebook cell:

!pip install requests

And now we can put it all together. There are only a few things here that are new. The most important is how we’re taking two different variables and combining them into a single variable called AUTH_TUPLE. You will have to get your own ACCESSID and SECRETKEY from the Moz.com website.

The API expects these two values to be passed as a Python data structure called a tuple, which is an immutable sequence of values (a list that can’t change). I find it interesting that requests.post() expects a flattened string for the data parameter, but expects a tuple for the auth parameter. I suppose it makes sense, but these are the subtle things to understand when working with APIs.
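A quick sketch of tuple behavior, just to make the point (the values are placeholders):

auth_tuple = ("some-access-id", "some-secret-key")  # created with parentheses
auth_tuple[0]          # "some-access-id" -- read by position, just like a list
# auth_tuple[0] = "x"  # uncommenting this raises TypeError: tuples can't be changed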

Here’s the full code:

import json
import requests
from pprint import pprint  # import the function itself so pprint(...) below works

# Set Constants
ACCESSID = "mozscape-1234567890"  # Replace with your access ID
SECRETKEY = "1234567890abcdef1234567890abcdef"  # Replace with your secret key
AUTH_TUPLE = (ACCESSID, SECRETKEY)

# Set Variables
endpoint = "https://lsapi.seomoz.com/v2/anchor_text"
data_dict = {"target": "moz.com/blog", "scope": "page", "limit": 1}
json_string = json.dumps(data_dict)

# Make the Request
response = requests.post(endpoint, data=json_string, auth=AUTH_TUPLE)

# Print the Response
pprint(response.json())

…which outputs:

{'next_token': 'JYkQVg4s9ak8iRBWDiz1qTyguYswnj035nqrQ1oIbW96IGJsb2dZgGzDeAM7Rw==',
 'results': [{'anchor_text': 'moz',
              'external_pages': 7162,
              'external_root_domains': 2026}]}

Using all upper case for the AUTH_TUPLE variable is a convention many use in Python to indicate that the variable is a constant. It’s not a requirement, but it’s a good idea to follow conventions when you can.

You may notice that I didn’t use all uppercase for the endpoint variable. That’s because anchor_text isn’t a constant: there are a number of different endpoints that can take its place, depending on what sort of lookup we want to do. The choices are:

  1. anchor_text

  2. final_redirect

  3. global_top_pages

  4. global_top_root_domains

  5. index_metadata

  6. link_intersect

  7. link_status

  8. linking_root_domains

  9. links

  10. top_pages

  11. url_metrics

  12. usage_data
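For example, swapping in the url_metrics endpoint is just a matter of changing the endpoint and the payload. Here’s a sketch reusing the variables from the full code above (the payload shape is an assumption; check the Moz API docs for the exact fields each endpoint expects):

endpoint = "https://lsapi.seomoz.com/v2/url_metrics"
data_dict = {"targets": ["moz.com/blog"]}  # assumed payload shape; verify against the docs
json_string = json.dumps(data_dict)

response = requests.post(endpoint, data=json_string, auth=AUTH_TUPLE)
pprint(response.json())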

And that leads into the Jupyter Notebook that I prepared on this topic, located here on GitHub. With this Notebook you can extend the example I gave here to any of the 12 available endpoints to create a variety of useful deliverables, which will be the subject of articles to follow.

source https://moz.com/blog/moz-links-api-introduction

Categories
Digital Marketing

The Ultimate Low-Hanging Fruit SEO Strategy — Whiteboard Friday

We all know that we want to maximize our chances for success in SEO, and for that, what we want to do is prioritize tasks that will have a higher impact and lower effort, but sometimes those get lost in the SEO audit process. In today’s Whiteboard Friday, Aleyda helps develop this low-hanging fruit analysis in parallel with the usual SEO process.

low-hanging fruit SEO strategy

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Welcome to a new edition of Whiteboard Friday. My name is Aleyda Solis. I am an SEO consultant and founder of Orainti, and today I am here to share with you low-hanging fruit SEO. We all know that we want to maximize the opportunities, the chances for success in SEO, and for that, what we tend to do is to prioritize those tasks, those activities that will tend to have a higher output, a higher impact, and lower effort.

Although it’s true that this usually depends on the context of our SEO process or project, the restrictions, the opportunities, the resources, the flexibility, etc., the reality is that it tends to be always this let’s say strategic, agnostic types of activities that tend to be always there for us to leverage, right?

However, what we tend to do in our SEO processes is this, right? We start the SEO process with an audit and research, from keyword and competition research to technical SEO, content audit, competition analysis, backlink analysis, etc. This tends to take a little bit of time, like four weeks or so, for example, let’s say.

Then we need to analyze all of the data, etc. in order to generate actionable, prioritized SEO recommendations that, at the end of the day, are the ones that we share with our SEO clients or SEO stakeholders in general for execution, right? So all of this process tends to take a little bit of time. Unfortunately, the issue here is that after this time, we tend to face challenges about like, yes, impatience of the stakeholders or the owners of the project, right, and it’s natural.

However, as I mentioned before, we can, and what I propose here is to develop this low-hanging fruit analysis in parallel with the usual SEO audit process in order to detect these low-hanging fruits that we tend to have, and I will share later on which ones, in order to start implementation right away, right?

This might seem counterintuitive because you may say, “Oh my God, Aleyda, extra work, besides the one of the audits.” But the reality is that ideally here we should set already some frameworks, some reports with data that we tend already to have in the SEO process in order to implement this, right? The benefits of this low-hanging fruit analysis and the implementation that we can start right when we are already doing the usual audit is that it will mitigate impatience from clients or stakeholders.

We will start with those actions that will be much like easier or simpler to coordinate, right? So what I’m talking about here about low-hanging fruit, realistically, I am going to go through three scenarios here of these low-hanging fruit opportunities that very likely will also be applicable for any of your projects, right?

Improving the click-through rate of top ranked pages. If we go and take a look at our current rankings using whatever ranking tools that you use, Google Search Console even, you can take a look at which are those top ranked pages that are already ranking for relevant queries, that are really important and meaningful for you, that have opportunities to improve their click-through rates, that the click-through rates are too low for the rankings of these pages.

You can try to identify if something is off with the snippets, with the titles, with the meta descriptions, for example, or if these pages are not maximizing their visibility because of a lack of structured data implementation, which is the reason why they are not generating rich snippets or being included in a very important, meaningful search feature, for example.

That may be the reason why the click-through rate is too low. You can go and straightforwardly improve those, right? With the snippets too, I have to say I have found many more scenarios in which Google was rewriting the title, which is now more common than before. Even if Google tries to rewrite it in a way that is still meaningful and relevant, the core key aspect of that particular page has, a few times, been eliminated.

Or maybe the core part is still there, but when you compare it with your top competitors, with all the pages ranking in those same SERPs, you identify that they are actually showing additional data, additional insights that you are not, because of being cut off, and, well, that is certainly a missed opportunity for you.

So go and take a look and prioritize the analysis, a very straightforward analysis with the data that you already have, for those snippets and those search features that you could be leveraging but are not. Then you also want to take a look at those relevant queries that you are ranking for with non-relevant pages. Maybe in the past, you created pages that better match the intent for those queries.

Not anymore. Or maybe you created at some point many different pages targeting similar queries that made sense in the past. But not anymore either, right? You may find scenarios of content cannibalization issues or lack of content issues, right? For that, what I would highly, highly, highly recommend is to analyze for which of your relevant queries you’re ranking with more than one page to identify, to assess if this is detrimental in that scenario.

If fewer people are clicking, or don’t know where to click, because of that, see if you could be consolidating these pages in order to rank better, to pass the value to a single page, and to consolidate all the metrics in a single page instead. For that, I highly, highly recommend to check those relevant queries for which you have more than a single page, right?

Then which is the right page to rank? Is it better to just 301 redirect to a single URL, or to differentiate this additional page that you have there, because you can identify that it might also be valuable to just tweak it or optimize it a little bit to target and rank for another query that is equally relevant for you too? The second scenario here for low-hanging fruit opportunities is to optimize internal links of almost ranking pages, right?

You probably have these pages that are not yet in that top three or top five positions as these others, but are in the top ten already, top six, top seven, etc., etc., almost ranking for very, very important, meaningful, highly searchable, highly relevant search queries. But when you analyze these pages, you identify very quickly that they are relevant.

The content is okay, but it’s the lack of backlinks that is holding you back, right? So how do we do this? Whenever you’re analyzing these pages, you want to take a look at all the backlinks per page and all the internal links per page. When you crawl your website, you will see how many internal links each of these pages has from all of the different pages of your website.

You want to pretty much consolidate this data in a single sheet to identify those pages for which you’re in position four, five, or six that might have a lot of backlinks but very few internal links, or vice versa: you’re linking to them from each of your internal pages, but they have very, very few backlinks.

So there might be opportunities here too, and for that, you should better link to almost ranking pages for popular queries that you’re not internally linking well from the footer, from the top navigation, from secondary navigation, for example. For those popular pages that have a lot of backlinks but are not necessarily passing the value well to those meant-to-rank pages, you can leverage this to better cross-link to those, right?

For those that are lacking not internal links but backlinks, you already have great candidates to start your link building campaigns with. So this can also accelerate a little bit the analysis that you’re doing in parallel. Last but not least, detect search shifts or content decay. There might be content that you created some time ago, some years ago, that was perfect at that time to target and to rank for certain queries, but potentially Google later on updated or shifted the ranked pages for this query because they identified that the intent was different, that it changed.

I have seen many scenarios in which very broad queries that used to list a lot of PLPs, product listing pages, are nowadays ranking more guides and far fewer product listing pages, right? So you want to identify these shifts. Also, potentially some articles that you wrote a few years ago, like the top or the best tools for this or that, or the top or the best product for this or that, need a little bit of an update, right?

You forgot that they needed to be updated every year, for example. So these are the scenarios that I am talking about here. For this, it’s critical to go and take a look at, again, your rank tracking data or even your Google Search Console and identify like the number of clicks, the position, and the click-through rate that your top content, your meaningful content through the customer journey has been getting in the last few months to see if it is going down, if it is dropping, right?

If that’s the case, you go and take a look at it and see if there’s opportunity to refresh or diversify a little bit, depending on the scenario for which queries the content is dropping and update the existing content to keep its relevance based on the other top ranked pages, right? If you see that you’re dropping a lot and which are those other pages that are like now outranking you to identify the gap versus yours.

Also, create new content to better fulfill the need in case you identify that no, no, no, no, the page that I was targeting to rank for this query, it doesn’t make sense anymore because now Google is ranking much more informational content and this was much more commercially driven or transactional driven, right? So you can again prioritize much faster the development of these other types of content.

So as you can see, with this low-hanging fruit analysis, using data that you tend to already have within the SEO process, you can run the analysis in parallel to identify low-hanging fruit opportunities that you can start executing right away, see results faster, mitigate the impatience of your clients, and achieve the gains much more easily within your SEO process.

So hopefully this will serve you well to apply across the different projects that you work on and achieve results faster. Thank you very much.

Video transcription by Speechpad.com

source https://moz.com/blog/low-hanging-fruit-seo-strategy-whiteboard-friday

Categories
Digital Marketing

I’ve Optimized My Site, But I’m Still Not Ranking — Help! — Next Level

In her last piece, Jo took you on an adventure diving for treasure in the long tail of search. This time around, we’re answering the call for help when you feel like you’ve done all you can, but you’re still not ranking. Originally published in June 2016, this blog has been rewritten to include new tool screenshots and refreshed workflows. Read on and level up!

You’ve optimized your pages, written delightful title tags, concocted a gorgeous description to entice clicks, used your target keyword and similar words in your copy, and your content is good, like really good. As far as you’re concerned, you’re doing everything you can on that page to say to Google “This is relevant content!” But, lo and behold, you’re not ranking.

Frustrating, right? Well, no more. I’m going to show you how you can discover what’s holding you back, and how to make sure your site is a lovely big target for visitors, just like this happy fellow:

You’ll learn some tricks you can do in your browser and then we’ll speed things up with some cat magic and pixie dust to sprinkle all over your site.

To start, open your tools in another tab so you’re ready to go; each step below includes a toolkit listing what you’ll need.

Step 1: Check for technical issues

Uncovering indexation issues with Google search operators is the first step to ensuring your site is set up to be the success you (and I) know it can be.

Hello, operator?

You can discover whether your site or page is indexed by running a Google search operator like this:

site:yourfabsite.com
site:yourfabsite.com/blog
site:yourfabsite.com/blog/my-site-rocks

It’s like saying, “Hey, Google, show me all the results you have in your index for yourfabsite.com.” This is what you don’t want to see:

If you’re seeing the above, you won’t be able to rank because your site isn’t indexed. It’s got to be indexed before it can rank, and it’s got to be crawled before it can be indexed. Trying to rank without being indexed is like trying to bake a cake without turning on the oven.

Search Console is here to console you

In the results page above, Google is directing you straight to the Google Search Console. Go — right now, right, right now, don’t read any more, you should have already gone — go and set up your Search Console. Once you’re all set up and your site is verified, you can go to this page and ask (nicely) for Google to recrawl your URLs:

https://www.google.com/webmasters/tools/submit-url

In Google Search Console, head to “Indexing” and then “Pages” to see data similar to what we looked at above, but in graph form. It’s definitely handy for tracking how your pages have been indexed over time.

If your site is not being indexed, check your Search Console messages to see if there’s a reason Google couldn’t index your site. If your entire site returns a 503, including the robots.txt file, crawling of your site will be temporarily suspended.

It’s a good time to check over your robots.txt file and your meta robots tags. But remember that bots that obey the robots.txt file won’t see noindex tags on pages that are disallowed through robots.txt.
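For example, with a minimal robots.txt like this (the path is hypothetical):

User-agent: *
Disallow: /private/

A compliant bot will never fetch the pages under /private/, so a noindex meta tag on those pages goes unseen; and if other sites link to them, they can still end up indexed.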

Learning how to manage your site’s indexation at a page level is a key learning curve for newbie SEOs entering the technical SEO sphere, giving you the knowledge to advance from troubleshooting indexation issues to managing mega-large sites with millions of pages.

Toolkit:

Google Search Console – Find out why your pages are not being crawled and indexed

Further reading:

Quick Start Guide to SEO

What is Robots.txt?

How to check which URLs have been indexed by Google using Python

Fundamentals of Crawling

Step 2: Find out where you’re currently ranking

Now that you know your pages can be crawled and indexed, you want to get them to the top of the results where they gosh-darn-well should be, right?

Turn up the volume

It’s quick and dirty, but we’ve all done it, your boss does it, your client does it, so why not succumb and start by entering keywords you want to rank for into your browser. Type in branded terms and vanity keywords, and click return. Not finding your site on the first page? Instead of clicking through to the many ooooos of Google, we’re going to change the settings in your browser to show 50 or 100 results per search so we can view more results with every search. I’m going to want to see A LOT more pet costume results, so I’ll click on the Quick Settings gear icon in Chrome, hit “See All Search Settings,” then toggle to adjust the “Results per page”:

Now we’ve got a whole page of 50 or 100 results to search through. Use CMD + F on Mac (or CTRL + F on Windows) to search for your domain.

This process is great for doing a quick check to see if you’re in the top 50 or top 100. Remember that your browser returns personalized results when you’re logged into Google; using incognito mode will manage this somewhat, but a purpose-built keyword rankings tool will enable you to run ranking checks more efficiently and accurately.

Start cooking with gas

Manual searches aren’t for everyone. I mean come on, we work in technology — we don’t want to be lugging keywords around the hot, dry Google search page, plugging them in one after another. Grab your list of keywords, and plug them straight into Keyword Explorer.

Hit “Create or upload a new list” and choose “Enter Keywords” to pop those straight in there, bish-bash-bosh.

Remember, you’ll need a Moz Pro subscription to create a keyword list.

Check if you’re on page 1 of Google

Open up your keyword list and click +Add URL to check your rankings, provided you’re in positions 1–10 of the search results.

Want to see if you’re in the top 50?

Heck yeah! Take that same list and paste them into a new campaign in Moz Pro.

If you already have a campaign running you can also transfer these straight over from Keyword Explorer. Just check the box next to the keywords you want to track, then choose a campaign from the drop down.

You know before (about 30 seconds ago), when we talked about manual searches returning personalized results? Checking rankings in Moz Pro avoids all that nonsense by anonymizing the data and, in my experience, provides the most accurate results, showing what most users see. Pretty snazzy, right?

A new campaign will build in about 30 minutes, which is just enough time to catch up on “Stranger Things” and reminisce about Winona Ryder circa 1990…

On the other hand, adding to an existing campaign will take a bit longer; you’ll see data as soon as your campaign next updates. So you can binge-watch all four seasons, because why not, right?

…and we’re back! Check out where you’re ranking for your target keywords, which URL is ranking, and over time, whether you’ve moved up or down.

We also pull in search volume from Moz’s Keyword Explorer to give you an idea of demand. When looking at search volume, don’t forget that the higher the demand, the more competition you’ll likely face. Don’t be disheartened by ranking well for keywords with lower search volume, especially if they convert better.

Tracking your rankings is crucial to understanding why you’re not performing as well as you expected. If you’re seeing a lot of down arrows, you need to investigate who is jumping ahead of you and why.

Dig into keywords with falling rankings

Let’s find some keywords that have that sad little down arrow, meaning we’ve dropped down in rankings since our last update.

Here’s a little bundle of keywords that I can investigate. I’ll click on the keyword to open up the Analysis report and scroll down to “Your Performance.” Now we can see a historical graph of your rankings and track those other sites who want to push us to one side. And what do we have here?

They’ve gone and nipped in front of us! This will not stand! It’s likely that for some reason your competitor’s result has been sending stronger quality, authority and relevance signals to Google. When it comes to SEO you can’t stand still.

Toolkit:

Keyword Explorer Lists – Check your rankings on the fly

Moz Pro – Track your rankings (and your competitors’ rankings) over time

Step 3: Optimize your content

There are two parts to this step:

  1. Get your basic on-page optimization in order.

  2. Check that your content is tip-top quality.

Don’t go changing (too often)

I don’t want to recommend jumping in and making changes to content too often. Even Google needs time to register your updates. However, if your content is a bit dusty and you’re losing out to competitors, then it’s time to check that everything you think is in place is actually in place.

View your page like a bot

I like to think of this as a “bot’s-eye-view.” When a little bot comes along, it doesn’t go, “Oooh, look at that lovely header image! Oooh, I love that font, the white space is really working for me! Oh, how the Internet has changed since my days as a junior bot trawling through gifs of dancing babies!” It reads the code and moves on. We can do this too, with a little bit of knowhow.

Using Firefox or Chrome, you can right-click and view the page source.

If you’re unfamiliar with reading code, it’ll look pretty intimidating.

We’re going to use CMD + F (or CTRL + F for Windows) to hunt for the bits and pieces we’re after.

Pro tip: If you’re seeing og:title, this is a Facebook tag.

Likewise, if you’re seeing meta property=”og:description”, this is also a Facebook tag. These help format posts when the URL is shared on Facebook. You’ll want to make sure you also have title and description tags like these:

<title>The best title for this page</title>

<meta name="description" content="The best description for this page" />

Basic page optimization

This is relatively straightforward, because you control your pages. However, maybe for that very same reason, it’s still a bit of a stumbling block for beginners. If you’re confused and locked in a mind-melt of madness because you can’t figure out if you should use the primary keyword and/or the secondary keyword in the title tag, chill your boots.

Here is a brisk and fairly brief run-through on how to get into a productive page optimization mindset.

Title tag basics

This is the bit you click on in the SERPs. Title tags should be 50-60 characters of punchy goodness that is relevant to your content. Because it’s relevant to your content, it includes the words you want to rank for and accurately describes what you’re talking about. You better believe Google is paying attention to click signals, so draw that click with your awesome headline. Think about the titles you click on when you’re searching for lovely things. Do your own searches to see what title tags are out there; it’s not like they’re hard to find, they’re literally a click away.

Meta Description basics

This is the bit of text under the title tag in the SERPs. Meta descriptions should be about 155-160 characters of tender lovin’ poetry that talks to the user like they’re a real human being, because they are, and so are you (unless you’re part of the cat colony I suspect controls large portions of the web). This is not a direct ranking factor, but it can heavily influence clicks. Clicks from humans. And what do clicks do? They signal to Google that you’re hot stuff!

On-page copy

Yep, you’re going to want to pop your keywords here, too. But really, let’s not get too hung up on this. If you’re writing something super-duper about your topic, this will flow naturally. Make it as long as it needs to be to make your point. Don’t rattle off the same words over and over; use language to the best of your ability to describe your topic. Remember all those clicks you worked so hard to get with your title and description tags? Well, if they all bounce back to search, you just know Google is paying attention to this. Your content has to be worth the click.

Go and look at what type of content is already ranking. This is not an exercise in scraping content, but a way to make sure that your content isn’t just as good as what’s out there, but much better.

This task can be done manually for a small site or for a few pages you’ve cherry-picked, no problem.

Check your whole site regularly

Maybe you’ve been creating content like a content-creating super machine and you might have skipped a few description tags. Or maybe you copy and pasted a title tag or two. In this case, you’ll want to check that it’s all hunky-dory on a larger scale and on a regular basis.

We’re going back to our Moz Pro campaign to take the heavy lifting out of this job.

Head to the Rankings tab and hit that little “Optimize” button on the right.

Once you hit that little button, you’ve set off a chain of events where our bot looks at the keyword you’re targeting, then has a good old dig-around on your page and gives you a score out of 100.

We’re hoping for that wheel of destiny to roll around to 100.

If we make it part-way around, it’s time to look at the suggestions to see how you can improve your on-page optimization.

Focus on top-level pages, pages that convert, and high-authority pages first.

Toolkit:

Moz Pro Page Optimization – Check that your whole site is optimized correctly

Further reading:

8 Old School SEO Practices That Are No Longer Effective – Whiteboard Friday

Step 4: Become a keyword connoisseur

It’s easy to become fixated on a keyword beyond what is reasonable or healthy. Are you carrying a torch for a golden keyword? Stalking it in the SERPs even though it’s completely entranced with the likes of Wikipedia, eBay, AdWords, and Image Packs?

Ranking in the high-click zone for your keywords is all about beating other sites. This special, golden ticket to traffic wonderland might be a good long-term goal, but you’re not going to get to the top of the results in the near future.

On the other hand, maybe you’re afraid of competition, so you only target keywords with very low difficulty.

This can be a winning strategy if the keywords have strong intent and you’re targeting the long tail of search, but you don’t want to put in all that work creating content and find that no one is searching for it. No searches means no traffic, and no traffic means no humans to click a thing that makes a person somewhere in the world look at their analytics data and smile.

A little bit of competition is a good thing — it indicates a healthy, profitable industry.

So we’re looking for a sweet spot: keywords with some demand and less competition. I’m going to break down what organic competition is, and how you know what level of keyword difficulty you can target.

What’s the meaning of this so-called ‘competition?’

If you want to rank organically, your competition is the other sites that are currently on the first page for the keywords. It’s not the total number of sites that are using your keywords in their content, and it’s not the AdWords competition.

If someone on your team, or an agency or a client sends you competition data that’s defined as low, medium, or high, this is very likely to be AdWords competition, and it relates to the cost-per-click.

Moz’s Keyword Difficulty score uses the top 10 organic results to calculate the Difficulty metric. It’s a score out of 100, where a higher number means that the competition is strong, and it may take you longer to see results from your efforts. Every search you bash into Keyword Explorer shows you the Difficulty score from the Keyword Overview, and you can build these into lists so you can compare related keywords.

Benchmark your site’s Difficulty rating

We know that Difficulty is out of 100, but a question we get all the time is: How do I know what level of Difficulty is too high?

Well, first off, testing is a sure way to find out. But if you want a little pointer before you head down that road, here’s how you can quickly benchmark your site’s Difficulty rating.

Time for another consoling hug from Google Search Console. Grab the keywords that are already sending you traffic from Performance > Search Results and export the results to the format of your choosing.

Save these to a list in Keyword Explorer.

Hit “Save,” and now you have a benchmark to use when looking at other keywords you could potentially rank for.

When you’re looking at keywords to target in the future you’ll have a good idea whether it’s a short-term or long-term goal.

You can also capitalize on keywords you’re already getting traffic for by looking for opportunities in the SERP Features. Can you steal a Featured Snippet?

I also want to track these keywords over time to see if I’m losing or gaining ground, so I’ll add them from my list straight to my Moz Pro campaign.

Next time my campaign updates, and forevermore into the future, I’ll be keeping the sharpest of eyes on these keywords.

Toolkit:

Google Search Console – Grab keywords already sending you traffic

Keyword Explorer – Find the real organic competition and benchmark Difficulty

Step 5: Build your site’s authority

Now step 5 is a real doozy, and it’s a common stumbling block for new sites. Just like networking in the real world, online authority is built up over time by your connection to sites that search engines already trust.

I like to think of authority as the pixie dust from the J.M. Barrie novel Peter Pan. It’s almost mentioned as an afterthought, but without it Wendy and the gang were just kids jumping up and down on their beds. They’re thinking happy thoughts. They might even get a bit of temporary lift, you know, just like when you might get a bit of traffic here and there — enough to keep you jumping. But there’s a very big difference between jumping up and down on a spring-loaded mattress and flying off to a world of perpetual youth.

Track your authority

To figure out how much dust you have in your tank, you’ll need to take a look at the Moz metric Domain Authority. This is our best prediction of how well a site will rank for any given search. It’s on a scale of 1–100, and higher DA means more authority.

You can get your paws on DA free through Moz’s Link Explorer, the MozBar (Moz’s free SEO toolbar), or in the SERP Analysis section of Keyword Explorer. I like to keep MozBar on DA mode so I can check this metric out as I scoot about the web.

To make this a whole lot easier, head to the Moz Pro “Links” tab. Here you’ll find your historical link metrics, alongside those of your direct competitors.

Pixie dust, or website authority, gives you insight into what is powering not only your rankings, but everyone else’s as well. These metrics are relative with respect to other sites similar to your own, including your competitors.

Gather a pocket full of pixie dust

The first thing we always recommend when people reach out to us to find out how they can improve their Domain Authority is to improve the overall SEO of their site. The good news for you is we’ve already done that in steps 1-4 — highest of high fives to you!

The second thing you have to do is get backlinks. This is commonly known as link building. When I started doing SEO for an ecommerce site back about what feels like a thousand years ago now, I had no idea what I was doing; this term irked me, and still kind of does. It sounds like you need to build links yourself, right? Nope! It’s like you’re playing Minecraft, but instead of building the structures, you’re actually trying to encourage other people to build them for you. In fact, you’re not allowed to build anything yourself, because that’s cheating. Game changer!

Don’t forget you don’t want just anyone building these structures. You need good people who themselves have authority; otherwise, your lovely gothic mansion might turn into a pile of rubble.

A lot of link building today is PR, content creation and outreach. I’m not going to go into that in this post, but I’ll include some links in the toolkit below to help you in that department.

We’re going to look at what actions you can take to track and build your authority.

Check for any leaks

There’s no point grabbing up pixie dust if you have a whopping great hole in your pocket.

Find and plug any holes quick-smart. Link Explorer has a handy tab just for this job. From “Links” click on “Top Pages” and filter by Status Code.

Now here’s a list of broken pages that have inbound links, ordered by Page Authority. Pages on your site that are down aren’t passing value — not to mention it’s a less than ideal user experience. You can prioritize the pages with the highest PA and number of linking domains.

Internal links

I said before that you can’t build any of your links yourself. However, as with everything in SEO there’s a caveat: in this case, links from within your own site are not only key to your site’s usability, but they also pass equity. Internal linking is primarily for user experience, but it also helps bots navigate your site for the purposes of lovely indexing.

Don’t stuff too many links on your page

Your homepage and other top pages will probably have the strongest authority, as other sites will link to your homepage in many cases.

You want that high-equity page to link out to other pages in a natural way that resembles a pyramid structure. Don’t forget the user in your rush to dish out equity; do visitors want to go from your homepage straight to some random deep page on your site? Does this help them on their journey?

On-Demand Crawl in Moz Pro helps you find out if you have too many on-page links.

You also shouldn’t go overboard with keyword-rich anchor text. Once again, think about the user, not about gaming search engines. It’s unlikely to get your site penalized, but it’s typically ineffective and not a good use of your time.

If you’re scooping up big swaths of copy to get keyword-rich anchor text but it doesn’t really help the person reading the article, then maybe you’ve got yourself an awkward link at your dinner party.

To follow or nofollow?

Links come in two flavors: follow and nofollow. Generally speaking, you do want your internal links to be follow, although there are good reasons to nofollow internal links depending on your site setup. Bots will literally follow links unless told not to, taking the journey of your choosing, and equity will be passed on, which is just what you want.

You can use the MozBar to check your pages for follow and nofollow links.

On your own site nofollow links can be marked on a link-by-link basis, or a whole page on your site can be allocated as nofollow. Let’s find the “Meta-robots Nofollow” column in your crawl CSV and filter by TRUE to check if you intended to mark these pages as nofollow.

Toolkit:

MozBar – In-browser link analysis

Moz Pro Crawl Test – Find those nofollow pages and pages with too many links

Link Explorer – Explore backlink analysis

Further reading:

Moz’s guide to link building

Wrapping up

I hope this helps you begin to uncover why your content isn’t ranking for your target keywords, and sets the wheels in motion for climbing up the SERPs.

Try Moz Pro + KWE, free for 30 days

Don’t forget that Moz Pro is available free for the first 30 days and it includes Keyword Explorer, so you can start to understand your site’s authority, check your on-page optimization, track your rankings over time, and figure out how to improve them.

source https://moz.com/blog/optimized-my-site-but-still-not-ranking-next-level

Categories
Digital Marketing

Top Tips for Non-Spammy Link Building — Whiteboard Friday

In December of 2022, Google announced that they did a link spam update, and they told us that they tried to nullify “spammy” links at scale. So, how exactly do we build non-spammy links? In today’s Whiteboard Friday, Debbie goes through five tips to do so.

5 tips to avoid spammy link building

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans. This is Debbie. I am the Global SEO Manager at Dialpad. Today we’re talking about link building, one of my favorite areas in SEO. So last year, in December of 2022, Google announced that they did a link spam update.

So in this update, they told us that they tried to nullify spammy links at scale. So how do we exactly build non-spammy links then? So today I’m going to cover five tips for non-spammy links. Before we jump into that, I wanted to kind of share that I’ve kind of split the tips into a few that are related to the mindset that you should have.

So how you approach link building is really important to whether you can build non-spammy links. So how you think about link building is very important here. Then the second part is the tactics. What are specific tactics that you can try that will help you build non-spammy links? So now, the first tip is who: who do you want to link to your content, to your site?

So are they like your journalists or content writers? You want to think about who you want links from and then look at what they are linking to. Are they linking to certain types of guides? We’ll talk about more of like content that you can create that they might be interested in linking to, but make sure you do that research beforehand to understand your target audience.

So when it comes to link building, your target audience are those journalists and people that you want links from. So you really need to understand them. So who do you want links from? What do they link to? Then also putting yourself in their shoes. So it’s important to think about what do journalists want. One thing is data.

They really like data. They like to quote data. If you read a news article, they might start off with like X percent of Americans, blah, blah, blah. So this is where you need to really understand what your target audience as a link builder, what do they want to see. Whenever you do outreach to them, which we’ll talk about later, you need to make sure you put yourself in their shoes and really understand what do they want, what type of content would be interesting to them.

So make sure to keep that in mind. Now, second, which is related to the first, is you have to make sure that your content, whatever linkable asset that you’re trying to create has some sort of value. No value, no links. So the way link building works fundamentally is your content needs to have some sort of value.

It needs to provide some sort of value in order to get a link. If there’s no value, no one is going to care about your content and no one is going to link to it. So when we think about content and providing value, I think we often think about we might want to make sure our content provides value to our customers or our potential clients, but then we don’t always think about the kind of broader audience out there.

So what content that might be valuable to the industry as a whole? So that’s one area to think about. Another thing is to think about what content would be informational and helpful for the content writers that you want to link to you as well as journalists. So these are things to make sure to keep in mind, and keep in mind the reason why there’s a scale here is that each group that I mentioned earlier, they kind of value different things.

So you might do a piece of content that is very helpful to your customers or anyone who comes across your blog, but then that content might not be super valuable to like journalists or other content writers or people you want links from. So you need to make sure to keep that in mind that your content might only serve this group of audience.

But if you want links from another group of audience, you need to understand what content they value and make sure that that value shows in your content. Now, next tip is the actual tactics. So what can you do, what content can you create to actually get links to your site? So here are a few examples that I have found that are pretty popular within different industries.

You want to make sure that you understand what works for your specific industry. So my industry, it might be something like a research report, but then for other industries it might be something like tools. So to quickly go over those types of content, for tools, an example would be like some sort of free tool that you can provide people.

That’s something that tends to get a lot of links. Another example of a linkable asset would be guides. Guides work really well in certain industries, and a guide is just essentially a very long-form piece of content that really nails down how you do something. Then the last one is glossaries. So when you target a keyword like how to something, like how to do content marketing or what is content marketing, that type of content, it can be really helpful for someone who is just starting out.

They hear certain terminologies, they’re not sure what they mean. So these types of content are very helpful to readers in general, but they can also get a lot of good links if you can make sure that your content is helpful to that audience. Next is stats and research.

So as I mentioned earlier, people like journalists, content writers, they really like to cite numbers. So being able to, for example, compile a bunch of different statistics on a certain topic, so like content marketing stats, for example, I would scour the internet for a bunch of different studies and pull all the interesting numbers into this one blog post.

Then that itself can be very helpful to anyone who is writing content on my topic, right? I’m essentially kind of helping them do a bit of their homework. So that also can drive links to your page. Another example is doing your own research. So earlier I was talking about you can compile other people’s research, but you can also do your own research and find really interesting things through doing a survey.

You can scrape existing data or even look at your own product data to see what are some of the interesting trends that we can turn into a report, into a story that we can pitch to like journalists. When journalists are able to cover something like that, this can also drive more links to your page. Lastly is don’t spray and pray.

So I think as a link builder you might get tempted to try to find a bunch of prospects, like hundreds or thousands of emails, and try to like email everyone with the same template, with not really providing them with value. So instead of trying to email everybody, really figure out who is most likely to link to your content, who will actually find your content valuable again, and try to do more targeted outreach and really think about, again, what value can my content provide this person receiving my email and making sure you highlight that in your email.

All right, so there you have it. Those are my five tips for non-spammy link building. Make sure to follow me on Twitter @justdebbb and we can talk more about link building there.

Video transcription by Speechpad.com

source https://moz.com/blog/link-building-tips-whiteboard-friday

Categories
Digital Marketing

A Timeline of Bing and Bard Features

Microsoft is constantly tweaking The New Bing, grooming it to be the heir apparent to the traditional top-10-blue-links model that first appeared in its modern form roughly 25 years ago with AltaVista. GoTo.com taught Google how to monetize search results back in 1998, and for about a quarter century, that's been the dominant model. Sure, there have been Universal Search, the Knowledge Graph, and Featured Snippets, but the core of search has stayed the same for a long time.

Now, Microsoft is trying to change that with Bing, with a rapid-fire roll-out of potentially game-changing new features that leverage the latest GPT AI tech to make "conversational search" the next model. Let's look at the timeline.

But first, a word on history and why Microsoft is so ready to take up this battle.

Microsoft’s AI history

Clippy, Tay, and shameless risk-taking

AI-like features in Microsoft products are nothing new. Some may remember Clippy, the paper-clip Office assistant from the last century, retired after a few years for being annoying. Years later, in 2016, Microsoft launched the experimental teen-girl chatbot Tay on Twitter; it lasted only a day before it had to be taken down after users taught it to be racist.

We won't start our timeline with Clippy or Tay, but suffice it to say Microsoft has been practicing AI product integration and developing resiliency to criticism for a while now. What blows up in one company's face as a PR disaster is par for the course for Microsoft. And since they're the underdog in search and most of their revenue comes from elsewhere, they're willing to take risks.

AI history, Google RankBrain, and today

It's been a long journey leading up to the creative-writing AI of today, starting in the halls of MIT with industry giants like Marvin Minsky and John McCarthy, whose legendary AI Lab laid much of the foundation in the 1960s and 70s, but with disappointing results, eroding credibility and leading to what we now call the "AI Winter." It turned out to be largely a matter of the hardware not being ready.

The concept of the personal digital assistant popped up over the years, in devices such as the much-maligned but forward-thinking handwriting-recognizing Apple Newton in 1993 and the first popularly successful PDA, the US Robotics PalmPilot, in 1997, paving the way for today's AI-hardware-equipped smartphones.

Google's foundational PageRank from 1998 can be seen as an early form of machine intelligence, an algorithm that extracts meaning from the link structure of the web. Google pushed a series of aggressive "invisible" product advancements, such as better Google Maps, quietly improving quality behind a deliberately plain interface. There were sexier, promising starts along the way, such as 2011, when Voice Search first rolled out in the Chrome browser, and 2014, when voice search hit mobile Android phones. The AI reality was underwhelming, but anticipation was building.

In 2015, Google announced new AI-powered search infrastructure called RankBrain, followed by advancements labeled neural matching, BERT, and MUM, all language-processing precursors to what took the world by storm in November 2022, when OpenAI released a product built on a seminal 2017 Google paper describing a novel neural network architecture for understanding language.

Transformational transformers

Only a year after Tay, Google released a paper on the Transformer, a new type of neural network that could do machine translation better than anything that had come before. It's the "T" in GPT, and it has made machine learning output considerably more compelling than Clippy or Tay with the simple trick of predicting what's statistically most likely to come next, a profoundly deeper thing than it seems at first glance.
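To make the "predict the next word" idea concrete, here's a deliberately tiny toy sketch. Real transformers learn these statistics over billions of documents using attention, not a bigram table, so treat this as an illustration of the principle only.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of documents.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (it follows 'the' twice in the corpus)
```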

This caught Microsoft's attention, and it invested $1 billion in OpenAI in July 2019, followed by a reported $10 billion in early 2023. Several earlier GPT versions were released through an API only and had many developers experimenting in a playground, but they failed to capture the public's fancy, sitting behind a login and not yet following the chat paradigm.

Nov. 30, 2022: ChatGPT and the fastest new service adoption rate in history

The first version of OpenAI's GPT for the general public, ChatGPT, launched November 30, 2022. The original release was based on GPT-3.5. A version based on GPT-4 was released on March 14, 2023 (fast-forwarding in the timeline), right as Microsoft confirmed that The New Bing had already been running on GPT-4.

While the period between November 30, 2022 and March 14, 2023 was only three and a half months, it was a time of intense experimentation and learning for Microsoft and the public, with ChatGPT famously posting the fastest adoption rate of any new online service in history. Things are moving so fast now, it's time to look at the timeline.

Timeline of Bing and Bard features

Feb. 7, 2023: The New Bing

Microsoft announced the new version of Bing on February 7, 2023, at a news event at its Washington headquarters. The new Bing launched on desktop in limited preview the same day, with the mobile version announced as coming soon. ChatGPT was so big by this time that many early adopters jumped onto the waiting list.

To get this early access, members of the general public had to use the Edge browser, run an .exe to change their defaults to Microsoft's requirements, scan a QR code, and download the mobile Bing app. And even then, you had to wait. And wait, we did. Clearly, Microsoft was in a powerful position to dictate terms, and it took advantage of that to start changing some habits.

Feb. 8, 2023: Google Bard is announced (faux pas)

The very next day, February 8, 2023, Google announced its own AI-powered chatbot, Google Bard, at a news event at Google's California headquarters. The event was marked by the inauspicious faux pas of Bard wrongly stating that the James Webb Space Telescope was the first telescope to photograph a planet outside our solar system.

The speed with which this announcement was rushed out, and how easily Google lowered its guard against AI misinformation, sent a resounding message around the world: Google is not infallible, even on its own turf. Fortunes can change quickly in tech.

Feb. 8, 2023: Early access to the new Bing, citation links, coding

For those lucky enough to get early access to The New Bing, Tay-like weirdness kicked in. In an article published on February 16, 10 days after the launch, a NYTimes reporter described how Bing tried to get him to leave his wife and veered into bizarre philosophical conversations that left him deeply unsettled, putting Microsoft right at the epicenter of a potential product-killing PR debacle.

Chatbot scares NYTimes journalist.

Feb. 8, 2023: Surprisingly good features out of the gate

The beautiful Citations feature was there right from the early-access release: expandable "Learn more" footnote links, links embedded directly in the context of the chat, and, impressively, support for copy/pasting the "Markdown" of the chat, citation links included.

This was well thought out and executed from the start, and it instantly won me over. It generously sent out web traffic, countering perhaps the greatest concern, and struck me as an implementation worthy of Google. It alleviated the fear gnawing at SEOs that a chat interface to search could mean the end of referral traffic.

Another big "out of the gate" feature that surprised a lot of people and has become a cornerstone of The New Bing is the ability to "code" in the chat window. A seldom-mentioned aspect of this is how well the copy/paste feature handles even the code, producing the triple-backtick code blocks that allow other systems to show correct color-coded syntax highlighting. A mocked-up example of what the copied Markdown can look like appears below the screenshot.

Citations appear in AI responses.
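As an illustration, here is a mocked-up example (not actual Bing output) of the kind of copied chat Markdown described above, with an inline footnote-style citation and a fenced code block that downstream tools can syntax-highlight:

````markdown
Python's built-in `zip` pairs up items from two sequences [^1]:

```python
names = ["Ada", "Grace"]
print(list(zip(names, [1, 2])))  # [('Ada', 1), ('Grace', 2)]
```

[^1]: https://docs.python.org/3/library/functions.html#zip
````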

Feb. 17, 2023: Microsoft Implements 5-Question Limit

Springing into action the very next day, February 17, 2023, Microsoft announced that it would limit chat sessions to five questions per session and 50 questions per day. This was a temporary measure, they said, to give their engineers time to “tune the AI” to be more human-like.

Feb. 20, 2023: The New Bing Rolls Out Despite 5-Question Limit Delay

Three days later, on February 20, 2023, I got my first access to The New Bing and could see that the 5-question limit had not yet taken effect, though the AI was extremely shy about any "meta" questions about itself, even just whether you were using it.

Intro to New Bing.

It was not announced, but some sort of new rules were in place: whenever the AI was asked a question that made it uncomfortable, it unceremoniously ended the chat session. You then had to hit a little dustbin icon to blank its memory and start over. Eternal Sunshine of the Spotless Mind, much?

Feb. 25, 2023: Tone Control and Special Superscripts

On February 25, 2023, Microsoft announced a new feature called Tone Control that lets users set how the AI responds, choosing between Creative, Balanced, and Precise conversation styles.

Tone control implemented.

Feb. 28, 2023: Ads in Bing

On February 28, 2023, before the 5-question limit had even appeared for me, the next big advancement hit. I remember it clearly because it was my first day working for Moz, I had thrown out my back, and I distinctly remember deciding I'd rather chat with Bing than wade through top-10 links inevitably dominated by ads.

Imagine my surprise when this turned out to be the first moment I noticed ads in The New Bing. AdWords-like ads in AI chat! Isn't this what Google should be doing?

What struck me in this experience even more than the ads was that when I needed to search fast, I didn't want to be hit with the traditional search interface. I just wanted to talk to an expert. This went beyond the "1 right answer" of a rich snippet. I was already used to the back-and-forth discussion aspect of chat-enhanced search, and I was impressed by how the AI seemed to empathize with my situation. The idea of a "relationship" with your search engine should not be underestimated.

What's more, with every new website you visit hitting you first with the GDPR cookie prompt and then with ads, and with Google's Rich Snippets and Quick Answers already teaching us alternatives to clicking through, the practical appeal of this new back-and-forth conversational style of search feels like a no-brainer.

Ads First Appearing In The New Bing

March 1, 2023: Upping the Question Limit to 6

By March 1, 2023, the 5-question limit was in place and was in fact already upped to 6 questions:

Bing Ups Question Limit To 6

March 3, 2023: Question Limit += 1

By March 3rd, 2023, the question limit was upped to 8:

Bing Questions Upped To 8

Somebody was playing a game of Jenga with the question limit, and I was starting to get the feeling that the AI was getting more mature and accepting of its job at Microsoft. The tower would not topple.

March 7, 2023: Aggressive Monetization by Microsoft

By March 7, Microsoft was experimenting with aggressively monetizing on commerce keywords:

Aggressive Monetization On Commerce Keywords

To this day, Bing chat prompts that include the word "iPhone" will trigger similar ads. But is the traffic sent? Well, the entire text of the chat leading up to the ad label is a link to the advertiser's site. This is analogous to when GoTo.com, the first search engine to mix paid search with organic results, showed Google the way to AdWords, but fast-forwarded 20 years and coming from an existing mega-competitor rather than a small startup.

Bing Chat Referral Traffic

March 8, 2023: Question Limit Increased to 10

By March 8, 2023, the question limit was upped to 10:

Bing Question Limit Upped to 10

March 14, 2023: Question Limit Extended to 15 and Introduction of “site:” Search Modifiers

By March 14, the question limit was upped to 15, and I started noticing Bing's ability to modify its second-stage searching to include "site:" modifiers, presumably running very precise searches of the Bing index to find the best answer to my question. This is a very impressive feature, and I'm surprised it isn't encountered and discussed more. A sketch of what such a scoped query looks like follows below the screenshot.

Bing Site Modifier Search
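For illustration, here's a minimal sketch of composing that kind of "site:"-scoped query as a Bing search URL. This is a hypothetical reconstruction of the query pattern, not Bing's internal code; the example question and domain are made up.

```python
from urllib.parse import quote_plus

def scoped_query(question: str, site: str) -> str:
    """Build a Bing search URL restricted to one domain via a 'site:' modifier."""
    return "https://www.bing.com/search?q=" + quote_plus(f"{question} site:{site}")

print(scoped_query("transformer attention explained", "arxiv.org"))
# https://www.bing.com/search?q=transformer+attention+explained+site%3Aarxiv.org
```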

March 16, 2023: General Public Access to The New Bing and Integration with Edge Browser

Since March 16, 2023, most people have been able to sign up and immediately get access to The New Bing. This was accompanied by a new version of the Microsoft Edge desktop browser that planted the Bing logo in the upper right and added a sidebar that opens to the right of the page, serving chat sessions in the context of what you're looking at, enabling features such as asking about the YouTube video you're watching. By this point, all roads led to Bing chat for Edge users: anything but an exact web address typed into the address bar would initiate a chat session.

Mistyped web addresses in the address bar initiate chat sessions because that counts as a search. This is now the default experience on Windows with the included browser. You have to actively work to turn it off or download an alternative browser like Chrome to avoid this behavior.

This is significant because as Windows operating systems and laptops get upgraded, all defaults reset back to Bing, setting the stage for a battle Microsoft could win through attrition alone. At some point, users tired of the rigmarole of customization give the default search engine a chance and discover that Bing is actually pretty good.

While this heavy-handed approach would appear inevitably effective, Google's success in motivating Chrome installs buys it some time. According to Bing itself:

There are several websites that track browser market share. According to W3Counter, Chrome accounts for 63% of the total market share for all browsers worldwide. According to Global Stats StatCounter, as of November 2020 Chrome holds a whopping 70.33% of the desktop browser market share worldwide. More precisely, Chrome dominates the global web browser market with a whopping 65.68% share. The only other browser on the market that has a somewhat considerable share is Safari, with 18.68%.

Consequently, all of Microsoft’s effort to make Bing the default search engine on desktop is blocked by Google’s success to date. But we know Edge is based on Chrome, so does Edge show up as Chrome in these statistics? Again, according to Bing:

No, Edge users are not reported as Chrome users in these statistics. According to Kinsta, Microsoft Edge has a desktop browser market share of 5.83%. According to WPOven Blog, Microsoft’s Edge is at the second position with 7.75% browser market share. According to WebTribunal, Microsoft Edge has a desktop browser market share of 10.07%.

…leaving us to conclude, after factoring Apple out of the equation, that only about 1 in 8, or 12.5%, of desktop users don't go through the trouble of replacing the default Windows browser with Chrome, which I speculate is still a residual effect of the non-standard and now-retired Internet Explorer. Aggressive pushing of Windows 11 upgrades and new hardware will likely increase Edge's market share and drive up exposure to the Bing search-plus-chat experience. Microsoft now requires a Microsoft account to install Windows, which also happens to be the requirement for The New Bing.
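Here's a quick back-of-the-envelope check of that 1-in-8 figure, using the numbers quoted above (the WebTribunal Edge figure, and the Safari share as a rough stand-in for Apple users):

```python
safari = 18.68  # Safari desktop share (%), standing in for Apple users
edge = 10.07    # Edge desktop share (%), per the WebTribunal figure above

non_apple = 100 - safari              # desktop users who could default to Edge
edge_among_non_apple = edge / non_apple
print(f"{edge_among_non_apple:.1%}")  # ~12.4%, i.e. roughly 1 in 8
```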

March 21, 2023: Google Announces Bard Availability with Limited Features

On March 21, 2023, Google announced that it was granting access to Bard to people on the waiting list. Feature-wise, Bard came out very sparse. No citations. No links. No images. No ads. I received access to Bard two days later and ran some rudimentary experiments on the feature I felt most relevant at the time: awareness of current events.

Most notably, Google Bard stands on its own domain, bard.google.com, and is not integrated into the main Google search experience. This is a stark difference from Bing, which has integrated chat into the main search experience. This is significant because it means that Google Bard is not positioning itself as an alternative search engine experience, nor even an enhanced one, but rather as just a chatbot, and thus readily dismissed by the serious searcher.

As far as other features go, Bard simultaneously offers three alternative drafts of each response, but this can hardly be counted as an advantage over Bing, as it closely resembles Bing's Tone Control feature.

March 27, 2023: Bing Raises Question Limit to 20

By March 27, the Bing question limit was upped to 20, bringing us to where it stands today. The visibly aggressive roll-out of new features has slowed, and over the last month there has been very little that's new. Microsoft has been fine-tuning the conditions under which chat sessions are initiated.

Bing Question Limit Upped to 20

March 28, 2023: Bard Now Includes Citations, though Limited in Integration and Accessibility

As of March 28, 2023, Bard can give citations. The capability was not announced and may have been there longer. It is not well integrated: Bard only appends a few links to the end of a response, under circumstances it deems appropriate, and it is not clear how it decides when to cite and when not to.

When citations are given, they are only ever appended at the end of the chat response, and never embedded and hyperlinked inline with accompanying footnote-style numbering as with Bing. Additionally, the citations are not in the copyable text. If you actually want to “lift” the citation links and use them in other places such as articles like this one, it can be quite a struggle.

April 21, 2023: Bard Introduces Coding Capability and Integration with Colab

The last significant development in the chatbot space was Bard's ability to provide code, announced April 21, 2023. The gap from March 28, when limited citation capability appeared in Bard, to April 21, when coding ability was announced, is absolutely glacial by the standards of the development speed we've been seeing.

On the plus side, perhaps the most exciting unannounced aspect of the coding feature is that when you ask Bard to code something, it tucks an "Export to Colab" option under the lower-right triple-dot menu, letting you actually run the code in a cloud-based notebook environment.

Bard can send code to Colab to run
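For instance, a prompt like "write Python to plot a sine wave" (a hypothetical example, not a transcript of Bard output) might yield the kind of self-contained snippet that runs unmodified once exported to a Colab notebook:

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot one period of a sine wave, the sort of snippet a chatbot
# might generate and Colab can execute as-is.
x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x))
plt.title("sin(x)")
plt.xlabel("x (radians)")
plt.show()
```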

Can Bards or Magi Beat a Lich in a Joust?

My habits are formed. Microsoft succeeded in conditioning me to always give the conversational search model a try first. I've overridden most of the default browser settings Microsoft originally mandated as a condition of using Bing, most notably making the search bar default to Google when what's typed isn't an exact web address. And it's not insignificant that Edge is now my first-choice browser: even logged in to your Microsoft account on Chrome, and even with all the chat UI elements appearing, any attempt to click them tells you that Chat mode is only available when you have access to the new Bing.

Given that I know I do have access to the new Bing on the very same Microsoft account I'm logged in with under that Chrome session, I wonder what they're trying to tell me. The latest episode of the Browser Wars, much?

What Google's catch-up game will look like is uncertain. Bard doesn't seem like it could be the endgame, and indeed Google has already announced the latest in its Dungeons & Dragons campaign, Magi, though few details are known. Perhaps Google has something in store that will blow our minds and make the new Bing… well, look like the old Bing. Or maybe the Lich has finally woken up, and Google is not ready to play. Bards and Magi may have no chance trying to joust a Lich.

source https://moz.com/blog/bing-and-bard-feature-timeline
