
Cre8asiteforums Internet Marketing
and Conversion Web Design



The 3rd Google Q&A Session


15 replies to this topic

#1 Black_Knight


    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9416 posts

Posted 23 March 2016 - 09:50 PM

So, these are still going, and we're already scheduling the next (4th) one.

 

This time the SEOs firing questions at Google Senior Strategist Andrey were Rand Fishkin, Eric Enge, and myself (dear Bill needed to be elsewhere at the time for some personal business). My feeling from this is that Andrey has really started to open up. Despite some guardedness and reservations at times (understandable, given the terrible misquotes and misrepresentations anything Googlers say can attract), I really think the clarity and openness we got to questions about RankBrain were unprecedented.

 

 

 

I'll add some of my other observations another time, as it's late here, but this one was definitely both interesting and insightful, I felt.



#2 glyn


    Sonic Boom Member

  • Hall Of Fame
  • 3174 posts

Posted 24 March 2016 - 09:26 AM

General things while listening:

 

TBPR was not meaningless, because it reflected how important a website was. Whether that reflection was accurate is not actually relevant. It's pretty poor that Google won't say people could use it as a measure of how important a website was, and then work out whether Google considered their SEO strategies to be benefiting the website (an increase in PR). I also think that oftentimes we evolve, then look back, forget that, and try to rationalize our behaviour as though at the time we were being stupid, which we weren't. I don't really care for PageRank, but as Ammon says, if I saw a website with X amount of SEO performed on it and a PR of 2, I would think something was broken. Google made TBPR a currency, just like Moz made DA/PA a currency too!

 

21:21 - A lovely indication of how Google uses Google Analytics to discover how its changes are affecting websites.

 

----

 

My takeaways...

 

Overall I feel sorry for Google, because the online market is slowly going undercover. Wherever you look, people are either closing the doors on their content behind paywalls or hiding it inside apps. Factor in ad-blocking, and what is the answer for Google? Pray that it can get more ads into the remaining spectrum?

 

I've said it before, but Google needs to rethink its advertising strategy, which is leading to the death of the internet. Find me a review of a trending app in Google Play where users are not saying "great game, shame about the advertising ruining it". I fear that in the end Google will probably destroy its brand through the over-proliferation of advertising, and through not caring about, or realizing, that people are prepared to pay for things if they are good.

 

If I look at how I manage client content now, Google features are part of that equation, but our guests come first. I take it as fact that all my content and traffic needs to be purchased; I don't think about free traffic, it's completely pointless. Sure, I'll take any free traffic, and I do all the analysis I can to work out how the pages need to be constructed and optimized. Hotels & travel is an industry ripe for lots of improvement, but despite having pushed very hard with the implementation of deep, inside-the-hotel walk-throughs with hi-res images, the take-up has been poor. Despite being promised a huge increase in the way Google integrates with hotel inventory and direct booking, and despite Google having the products (still without the necessary marketing wrapper that actually makes them palatable and understandable as self-serve or semi-self-serve solutions), we're more than 18 months on and very little has been done. Those advertising bucks from the bigger OTAs are no doubt playing an important role in this process. But it's a Search Engine in chains, unable to move, yet again, from its plastering of advertising up, down, left, and right. It's sad really, but this is how it's going.

 

Thanks to these sessions I now see Google as a bit of a schizoid creature of the moment, desperately trying to index every piece of content and information on the web. Super-aggressive spiders that will crawl everything. I published a website the other day and Google had indexed it inside 24 hours; I remember when it might take a month. I don't think the reason for indexing is a wish to shed light on my site. I think instead it's about trying to get their hands on content and see whether it can be monetized for their users. Hopefully it will have schema markup, so that the really important information can be re-skinned with a small attribution. I'm not saying Google's days are numbered, but I think that things are certainly not all Charlie and the Candy Mountain at GHQ.

 

In fact, around the 21-minute mark there is a little admission from Sergei that shows how Google used to have things easy in terms of analysis: link volume, etc. But how do you measure user behaviour and use it as a ranking system without perpetuating the same results? That is pretty much what I see happening already: the same old results across the same old verticals. I've run keyword experiments and seen 260 keywords pull back 2,600 webpages containing only 61 unique websites. SIXTY ONE! So then perhaps go back to analyzing on-page content in a better way.
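
For anyone wanting to repeat that kind of experiment, here's a minimal sketch of the tally, assuming you've already exported your keyword/result pairs to a CSV (the file name and column names are hypothetical):

```python
import csv
from urllib.parse import urlsplit

# Tally how many distinct websites appear across all SERP results.
# Assumes a CSV export with a 'result_url' column (hypothetical format).
domains = set()
results = 0
with open("serp_results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        results += 1
        host = urlsplit(row["result_url"]).hostname or ""
        # Treat www.example.com and example.com as the same website.
        domains.add(host.removeprefix("www."))

print(f"{results} results, {len(domains)} unique websites")
```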

 

Thanks to BK, these sessions have been really eye-opening, inasmuch as they've given me an insight into how much on the back foot Google really is.

Rand: Do you have any Italian in you? I've never seen a guy move around so much.

 

Glyn.
 



#3 Black_Knight


    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9416 posts

Posted 24 March 2016 - 10:33 AM

The first bit, about toolbar PR, was of course the most nuanced.  My takeaway is that once TbPR became far from accurate in any sense at all, it had no value even for guesstimating any actual value, which I think is true and fair, and is why people now use third-party substitutes such as 'Domain Authority' from third-party link analysis tools (Moz, Majestic, etc.).  However, just as was true of TbPR, many mistakenly assume that this allows them to predict link-juice, and it really, really doesn't.  There are layers of damping factors in Google's systems that can mean a great, clean link from a powerful domain has absolutely zero value.  And there are also many links that we rationally assume have no value that can still trigger spidering and freshness boosts, and thus give value.
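
For readers who've never seen where the 'damping' vocabulary comes from, here is a toy, textbook PageRank iteration (the published 1998 algorithm, emphatically not Google's current layered systems); it just shows how a damping factor shrinks what any single link can pass:

```python
# Toy textbook PageRank (Brin & Page, 1998) - NOT Google's live system.
# 'graph' maps each page to the list of pages it links out to.
def pagerank(graph, damping=0.85, iterations=50):
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in graph.items():
            targets = outlinks or pages  # dangling pages spread rank evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

demo = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(demo))  # 'c' ends up with the most rank
```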

 

Ultimately, if we had access to anything remotely resembling actual PR values, I believe (and nothing Andrey said directly refuted this) that it would still be one of the useful metrics for guessing at crawl priority (though social media and 'burstiness' have come along since). But TbPR had been dead for 18 months and had no correlation to the actual PR value.



#4 Black_Knight


    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9416 posts

Posted 24 March 2016 - 10:39 AM

Rand: Do you have any Italian in you?

 

 That's a scurrilous rumour!  The friendship between Rand and Gianluca is entirely platonic! :D



#5 glyn


    Sonic Boom Member

  • Hall Of Fame
  • 3174 posts

Posted 25 March 2016 - 03:14 AM

So here comes the first email about the importance of links, citing the video above (see attachment).

 

 

 

I've prepared the following executive summary for you.

 

Attached Thumbnails

  • HEREWEGO.png

Edited by glyn, 25 March 2016 - 03:15 AM.


#6 Black_Knight


    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9416 posts

Posted 25 March 2016 - 03:44 AM

Yes, we all knew that would be one consequence.  It is, of course, a big part of the reason that these Q&A sessions are still something of a careful dance around what Google can say, given how many people will then apply their own biases, misconceptions, and outright misquotations, and take things far out of context.

 

Yet the feeling I came away from this discussion with is that everyone in the discussion was actually working cooperatively to reveal as much as they safely could.  There's an obvious element of respect for each other's viewpoints (and positions) that is far different from the "two sides of a massive fence" that we used to have.  As I jokingly said elsewhere when asked what my biggest takeaway personally was, it is really something when a Google guy can sit down to talk about tough subjects with a group of highly inquisitive SEOs and the only disagreement in an hour is between 2 of the SEOs (and even that was only semantics that I easily aligned us all on). :)



#7 glyn


    Sonic Boom Member

  • Hall Of Fame
  • 3174 posts

Posted 25 March 2016 - 08:07 AM

Thing is, BK, it's all very well and good Google playing humble pie and offering vague transparency, because the mechanisms they put in place have accelerated an online marketplace that, one might argue, was inevitable anyway. When I look at one online business that went from a steady earner right down to nothing because AdSense-cutter pages penguinized the website, it becomes a rather harder pill for business owners to swallow. The current online marketplace was probably inevitable (Walmart, Amazon, et al. hoovering up everything); it's just that there was an opportunity to come at it a different way, if people had used their brains a bit more.

 

So underwriting all of this is the missed opportunity of it all. Look how shi* online advertising is. And frankly it is sh**, considering that its most potent success (display re-marketing) is based on an unknowing public that could, if they actually understood how, destroy it by installing various privacy mechanisms. WOW, WHAT A BUOYANT SYSTEM. WELL DONE, CLEVER PEOPLE. AdWords won't even tell you the IP of the user request, under the auspices of privacy.

 

So while I think these sessions are good from the point of view of getting some feedback, I can never shake the feeling, whenever I hear someone senior from Google talking, that a) they sound like they have a god complex, and b) the way they always try to simplify statements, in an effort to be even clearer, is pretty insulting, because the concepts are not actually that hard to understand the first time around; elaborating them again simply reinforces the impression that they view people as idiots or unable to understand things. I am prepared to accept (having been on these too) that this type of language could be the result of the communication courses big companies tend to run to train staff.

 

LittleG



#8 iamlost


    The Wind Master

  • Site Administrators
  • 5344 posts

Posted 25 March 2016 - 09:31 AM

I never used TBPR, and I recommended against fixating on it for at least a decade, so I found it fascinating that TBPR was being used to guesstimate crawl/indexing... am I the only webdev who tracks Google's actual crawls over time, maps them, then tests extrapolations?
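
For the curious, a minimal sketch of that sort of crawl tracking, assuming a standard combined-format access log (the log path is illustrative, and a serious check should also verify claimed Googlebot IPs via reverse DNS rather than trusting the user-agent string):

```python
import re
from collections import Counter
from datetime import datetime

# Count Googlebot hits per day from a combined-format access log.
# Log path and user-agent check are illustrative assumptions.
TIMESTAMP = re.compile(r"\[(\d{2}/\w{3}/\d{4}):")  # e.g. [25/Mar/2016:09:31:07

hits_per_day = Counter()
with open("/var/log/apache2/access.log", errors="replace") as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        match = TIMESTAMP.search(line)
        if match:
            day = datetime.strptime(match.group(1), "%d/%b/%Y").date()
            hits_per_day[day] += 1

for day, hits in sorted(hits_per_day.items()):
    print(day, hits)  # crawl volume over time, ready to map and extrapolate
```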

The idea that TBPR had a currency value superseded by Moz's PA/DA silliness is something that had never occurred to me, colour me naive. Totally blown away that I missed that perspective. SEO does have a mystic astrology-type fetish weakness.

 

Thought: a Google/SEO Tarot-card-type deck might be a viable product/tool... SEO practitioner as gypsy fortune teller :infinite-banana: :infinite-banana: :infinite-banana:
 
I never expect any person/organisation to open up and detail the nitty-gritty (e.g. uncompiled Windows OS source code, or Google's algorithm source code) that is a foundation of their business model. And pretty much anything less results in broad, shallow replies offered as clarification, aka brand-outreach goodwill.

So I watch productions such as your Google Q&A sessions more as an aficionado of marketing than as a webdev looking for SEO insights. And I was again impressed by the dance. It made all the Q&A seem shiny and new, or improved.

* linking is critical.
* content is critical.
Note: chicken and egg: consider tandem rather than 1:2.
* user search results interactions are subsidiary input(s).
* RankBrain adds Natural Language input to classic term-ordered search.
* different jurisdictions/language results get treated differently.

Aside: I've been working with Natural Language (and other) improvements to site search for a decade. It's a much simpler task, being immensely constrained compared to Google's requirements, but the principles are much the same. Yet, here again, webdevs/SEOs generally seem so narrowly focussed that (1) they are constantly surprised by developments telegraphed years in advance, and (2) they are totally baffled by the science, aka they misunderstand what they are told.
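
As a toy illustration of what a 'Natural Language improvement' to site search can mean in the small (my own sketch under my own assumptions, not iamlost's actual implementation): expand the query with synonyms before matching, so 'cheap rooms' can find a page that only says 'budget accommodation':

```python
# Toy query expansion for a site search: purely illustrative, not any
# particular production system; the synonym table is hand-rolled.
SYNONYMS = {
    "cheap": {"budget", "affordable", "inexpensive"},
    "rooms": {"room", "accommodation", "lodging"},
}

def expand_query(query):
    """Return the query's terms plus their listed synonyms."""
    terms = set()
    for word in query.lower().split():
        terms.add(word)
        terms |= SYNONYMS.get(word, set())
    return terms

def score(page_text, query):
    """Count how many expanded query terms appear in the page text."""
    return len(expand_query(query) & set(page_text.lower().split()))

pages = {
    "/offers": "budget accommodation in the old town",
    "/spa": "luxury spa suites with sea views",
}
for url, text in pages.items():
    print(url, score(text, "cheap rooms"))
# /offers scores 2 ("budget", "accommodation"); /spa scores 0
```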

I immediately leaned forward when 'other language' results were mentioned; but the question and follow-up that needed to be asked, and weren't, are:
* Why does Google treat translated pages as 'different', unduplicated content?
* Why does Google accept scraped and (badly) translated pages as original in the face of DMCA requests?
Note: unless the request is from a lawyer-type quoting specific copyright provisions and suggesting court as a remedy.
Note: translation does not itself transfer copyright.
The question actually asked was pure softball, with prior asked-and-answered history.

A fun conversation but: yes, we have no bananas today.

 

And I will, once again, be in the audience eagerly watching/listening to your next one. :D

Great fun. Thanks, Ammon. And all others involved in the production. Appreciated.



#9 glyn


    Sonic Boom Member

  • Hall Of Fame
  • 3174 posts

Posted 25 March 2016 - 09:38 AM

I for one know that IAMLOST has a TBPR teddy bear.

 

Perhaps you'll find solace in this post...

https://medium.com/@...d29f#.clp708owk



#10 Black_Knight


    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9416 posts

Posted 25 March 2016 - 11:55 AM

I never used TBPR, and I recommended against fixating on it for at least a decade, so I found it fascinating that TBPR was being used to guesstimate crawl/indexing... am I the only webdev who tracks Google's actual crawls over time, maps them, then tests extrapolations?

 

The only people I knew who ever immediately understood my usage of it this way were agency and consultant types: people who frequently have to take a site that is new to them, with an unknown history, and get some fast insights and benchmarks.  It's not as if we can go back in time to instill good monitoring behaviour four years before we first meet the site, and thus have the data available. :)

 

The language/translation thing is all very interesting.  If one looks at literary history, there are many, many instances where a great and famous work was truly nothing more than a translation (and sometimes a poor one, with deliberate fictional additions for effect).  Whether it is the story of William Tell (actually an older Chinese story), The Magnificent Seven, or simply the Kama Sutra, history is filled with finding a great tale and simply translating it.  Heck, The Bible, especially the prized and famous King James edition, is a classic of 'creative' translation, and certainly far from an unsuccessful work.

 

I mention this because I think there is a great truth here: a translation is actually a derivative work, one that needs and demands a little 'poetic licence' if one is truly to translate the spirit of the work as well as the concepts.  Thus, a translation should never be a true 'duplicate', but a derivative work; a near-duplicate at harshest.



#11 iamlost


    The Wind Master

  • Site Administrators
  • 5344 posts

Posted 25 March 2016 - 04:34 PM

Copyright is a relatively new legal protection. As, indeed, are many of our current laws, regulations, and legal frameworks.

However, while a translation is a recognized derivative work, the right to make one can only be granted by the original author/copyright holder. And for the derivative to be itself copyrightable, it must add sufficient original material to be viewed as a separate work. So simple scrape-and-translate fails twice. As G can be arm-twisted to recognise, but does its best to ignore.

Note: the number of DMCAs my law-type-person files each year would blow your mind. As would the number of copyright infringement claims other than DMCA. Fortunately, to date, the settlements cover costs.

Further: auto-magic compilations without significant human selection cannot be copyrighted. What that can mean I'll leave to your imagination.

Copyright is still a force, and will remain one until Disney et al. change business models.

It is the best defence a small business has in the intellectual property wars. If one learns how to wield it effectively.

@EGOL: I consider it money in the bank when a nonprofit uses my stuff without permission. :) Which they seem to do, without asking, far more often than commercial organisations.

#12 Black_Knight


    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9416 posts

Posted 02 May 2016 - 03:07 PM

Just a 'heads-up' that the 4th session is now arranged - http://webpromo.expert/google-qa-may/ - and we finally have my wish: a female panelist, in Jenn Slegg.  I think this is about as large as the panel can realistically get and still have depth on topics.



#13 cre8pc


    Dream Catcher Forums Founder

  • Admin - Top Level
  • 14631 posts

Posted 02 May 2016 - 07:49 PM

Date: 18th of May

Time: 11:30 LA, 14:30 NY, 19:30 London, 20:30 Valencia, 21:30 Kiev

Major points:

  • Google's Senior Strategist & 5 Gurus

 

Looks like a lot of fun!



#14 Black_Knight


    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9416 posts

Posted 24 May 2016 - 11:27 AM

Things got complicated for a while, thus the original planned Q&A for May got (in order) cancelled, restructured, and rescheduled.  http://webpromo.expert/google-qa-duplicate-content/ 

 

The attention that the third Q&A session got (well over 20,000 views, and many citations and references) means that these sessions also drew more attention within Google itself.  The 'ask anything' format is, from Google's perspective, the hardest to approve and prepare for.  With foreknowledge of what will be asked, they can ask around internally, work out exactly how much they can say, and agree a 'company line'.  I think that at the higher levels of the company they also start to apply the same rationalisation to these appearances as to any other use of staff time: i.e. what benefit will it bring, by how much, and to how many.  All of which means that 'Google' as a company really wanted the next Q&A to be more practically useful to more webmasters.

 

Most of you have known me long enough to know I rather shake my head at that.  To me, it is far more honest and genuine to be able to ask a question and be told "I'm sorry but we can't really give any detail on that" than to limit what is even asked.  I'm fine with being given an honest and genuine 'no comment' answer.  I'm not fine at all with ending up with something that masquerades as an open question session but railroads the questions that can be asked and doesn't disclose that railroading.  Thus the cancellation, restructuring, and general delays in negotiating a solution that can be acceptable to all parties.

 

We are focusing on a single area that is practical and helpful to many (suiting Google's needs), yet is also one that SEOs themselves still struggle with, and about which myths remain (e.g. the 'duplicate content penalty').  Naturally, the topic will have to touch on the practicalities of such things as redirects, canonicals, and whether redirects done for, say, HTTPS might still suffer a damping factor.  We'll of course be looking at both technical duplicate content (where multiple URL variants reach a single piece of content, such as when tracking URLs are used) and at near-duplicate content, such as where one article is republished to several sites, or 'boilerplate' content is used on many pages.
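
For the technical case, a minimal sketch of the usual defence: strip the known tracking parameters so every variant maps back to one canonical URL (the parameter list is illustrative; extend it for your own tracking schemes). On the page itself you would declare the same URL in a rel="canonical" link in the head.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters that create URL variants without changing the content.
# Illustrative list, not exhaustive.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "gclid", "sessionid"}

def canonical_url(url):
    """Drop tracking parameters so all variants map to one canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(canonical_url(
    "https://example.com/article?utm_source=news&utm_medium=email&page=2"))
# -> https://example.com/article?page=2
```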

 

Hopefully, this format, while narrower in scope, will prove more useful for all parties.  Focus is not the same thing as blinkers.  If it starts to feel in any way that this is less about focus, and more about narrowing the playing field, this might be the last one I do.



#15 glyn


    Sonic Boom Member

  • Hall Of Fame
  • 3174 posts

Posted 25 May 2016 - 09:46 AM

Wow, good job Ammon. I certainly had my views about Google changed, and I've no doubt that coming across as fallible probably affected their stock price. I don't see any kind of privacy policy on that link (webpromo.expert/google-qa-duplicate-content/), and will the final session go up on YouTube?

 

Frankly, I dread to see what the next session looks like based on what I am reading. I've seen some of Jon's hangouts, and the impression I get is that no one really knows what's going on with the way Google ranks; but in the meantime let's just focus on a whole load of cra* that can't be measured, filling time under the auspices of responding to users' questions that invariably have been created by... Google's own PR and marketing of the jedi mind tricks of black/white/yellow/blue hat.

 

The session is being touted as a new format because it will make things as useful to users as possible, just like masking keywords was truly about protecting users' privacy.

 

Good sessions, good past work BK.

 

Glyn.



#16 Black_Knight


    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9416 posts

Posted 25 May 2016 - 11:10 AM

To a very real extent, Glyn, I've viewed all of the Google Q&As as walking a very tight line between huge potential and huge potential for disappointment.  I have massively enjoyed each of them, but one can't miss what a delicate 'skating on ice' affair the first one was.  I felt that each session since had gotten more comfortable and more open in dialogue, and I can't help feeling that this change threatens that progress, as any change would.  Yet, as I said above, sometimes more focus is purely a good thing, and doesn't have to mean blinkers to the peripherals.  I do like the idea of making these more practical and useful for more people, even if I will miss the option to touch on the more abstract and theoretical side of things like AI.

 

Ultimately, how the next session will turn out is, like all prior sessions, largely about how carefully we can craft and frame questions so that Googlers can answer them.  The topic is one of our choosing, and I think is a great starting point.  There are still so many issues (and myths) around duplicate content, around how PageRank is passed through redirects, etc. that I think filling an hour with useful and interesting discussion should be easy.

 

I think of it as a conversation, where one party has said that something about the conversation has made them uncomfortable, and now we are all attempting to keep the conversation going, and useful, taking that into account.  Working around things has always been a core skill of SEO, whether those things are technical or psychological. :)




