Cre8asiteforums Internet Marketing and Conversion Web Design


Urgent help with my site


67 replies to this topic

#1 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 29 August 2005 - 06:08 AM

First of all, hi to all.

I'm from Spain, and unfortunately the sites I build are usually only in Spanish, so sorry for the inconvenience.

Right now I have a big problem. I work for a construction company and I made a website for them. I managed to put it on the first pages for a set of keywords on Google, MSN and Yahoo!. Everything was perfect until 20 days ago. Since then, I've been disappearing from Google's results, going deeper and deeper every day or even every few hours.

The first problem I face is that the content, design and keywords are not in my hands, but the results are. This is pushing me to use some 'tricks' that might be frowned upon, but I don't know what else to do.

Results at Yahoo! and MSN are still fine, so I've been wondering about the sandbox and other issues. I even removed the pointers from 6 other domain names pointing to this site to make sure that wasn't the problem.

I'd appreciate some help. As I wrote in another forum in Spanish, the first-page results on Google are just not good in terms of relevance and quality, and I just can't understand why my site sinks deeper and deeper.

OK, the site is: www . grupohabitathumano . es

and a few of the keywords...

construccion alicante
constructora alicante
aislamientos acusticos alicante
promocion alicante
edificacion alicante

Oh, by the way, the positioning is done mostly from the 'Grupo' option on the index page.

Well, I guess you have enough info to help me out. If not, please ask me for whatever you need.

THANKS A LOT
Jmira

#2 bragadocchio

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 15634 posts

Posted 29 August 2005 - 06:48 AM

Hi JMira,

I'm having trouble reaching the site right now. I did get to it earlier and took a very brief look around. One thing I noticed was that it was only showing a PageRank of 2 in the Google toolbar. I don't know if you've done much to get links to the pages, but if you haven't, that might be a good thing to do more of: look for regional and topical directories, and other places where a link may be appropriate.

What do you mean by "pointer" from other domains? Alternative domain names for the site? If you had those, were you using 301 redirects for them, or something else?

#3 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 29 August 2005 - 06:54 AM

By pointer I mean just that: alternative domain names. These have their DNS zones configured to reach the same site.

I don't know why there should be any trouble reaching the site. It loads fine for me. Maybe, since it's hosted in Spain and we have a national internet node here, it loads much faster for us. Anyway, the customers are supposed to be from this area.

#4 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 29 August 2005 - 07:03 AM

About backlinks...

I tried the tool someone indicated in a previous topic:
C Class Backlink Analyzer Tool
http://www.webuildpa...class/index.php

It reports 48 backlinks, 15 of them from inside the site.

I don't think that's so bad, but I'll try to get some more links. Anyway, the problem must be something else, since the pages on the first page of results for those keywords don't get half of these links.

Thanks bragadocchio.

Jmira

#5 Black_Knight

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9339 posts

Posted 29 August 2005 - 08:25 AM

Anything under 1,000 backlinks is very low, unless those backlinks are from some very significant sites like major newspaper and television sites, etc.

48 backlinks, of which 15 are from the site itself is very very very low.

#6 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 29 August 2005 - 09:12 AM

OK, OK, I'm working on it. But I have the feeling that 1,000 backlinks for a Spanish page is a huge number of links. I'll see what I can do.

Anyway, did you see any terrible mistakes in the site code that would produce such awful results? The competitors are not great either...

Looks like today Google is in a good mood; my site is coming up little by little, a few positions, just 5 up on the few keywords below the 100th result, but anyway, that's better than sinking down. What surprises me is that I have 38 keywords on the 1st page of MSN and 49 on Yahoo!, but only 3 on Google!!!!!

I appreciate your support.

#7 Michael_Martinez

    Time Traveler Member

  • 1000 Post Club
  • 1354 posts

Posted 29 August 2005 - 09:19 AM

Most sites don't need 1,000 backlinks; that is just overkill. However, closer inspection of this particular situation indicates that some of the competitors are depending on massive linkage for support.

Google is showing 55 references to the site, but only 15 are deemed significant.

The site comes up fourth on a search for its title string (in Google.es), out of 34 possible results.

That indicates extremely poor on-page optimization. The first of your search expressions, "construccion alicante", is moderately competitive, with just over 1,000,000 hits on the search.

The top page listed has only 1 inbound link, but its parent domain has about 6,500 (6.500 in European numeral designation) inbound links. However, all but 47 of them come from internal linkage.

By contrast, your company has only 38 pages indexed.

The second site listed for "construccion alicante" shows only 3 inbound links for the displayed page. However, the parent domain has 195 inbound links, of which 154 come from other domains.

The third listing for "construccion alicante" shows only 1 inbound link. But the parent domain shows over 1 million inbound links, most of which are internal -- only about 4,000 links come from other domains (I assume from the name of the site, Mundo Annuncio, that it is some sort of yellow pages or classified ads site).

It appears to me that your company site is being outcompeted by internal linkage and better on-page optimization.

If you cannot change your company's web site, then you may indeed have to shoot for those 1,000 external links. But you could try setting up a secondary site that summarizes the information on the official site and provides clear links to it. The purpose of the secondary site would be to inform people about your company from an outside point of view.

A blog or collection of short articles would work best. If you can create the content as a sub-domain on an existing popular primary domain, you'll have a step up on your competition.

Despite the large numbers of inbound links that the top-ranked sites are showing, I believe it would be fairly easy to get a content-rich site that optimizes its pages for relevance to rank well. You should be able to dominate these results in 3-6 months, depending on how much effort you put into it. It would require less work than finding 1,000 inbound links.

#8 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 29 August 2005 - 09:39 AM

OK, so you mean that the links to the parent domain are also very important.

Well, if you take a look at the 2nd keyword (constructora alicante), you'll find www. alicante-urbana .com in 3rd position, and that page/site has 19 backlinks, 6 from its own domain, with no parent domain. Google reports 32 references to the site, only 10 significant.

The 3rd site, infojobs.net, has far more backlinks, internal and external, but it is 3rd.

I don't know; I'm afraid there's something else I'm not accounting for yet.

#9 Michael_Martinez

    Time Traveler Member

  • 1000 Post Club
  • 1354 posts

Posted 29 August 2005 - 01:36 PM

There is an undocumented, unconfirmed principle, which I and others have observed in Google's SERP behavior, that I have named "child inheritance". Google has not revealed anything about how it treats children of well-established, reputable sites, but it is my belief (based on the performance of new content that I release through my own well-established, well-linked, large content sites) that new content pages are vetted faster, and may even receive a boost in importance, by being children of an important site.

I did not have time to perform many searches against your list of keywords. I assumed you listed the most important one first.

To be honest, without knowing anything about your target market's search patterns, I cannot form an opinion on whether you should be concerned.

For example, although only a small percentage of U.S. insurance agents and brokerages operate Web sites, most of them are marketing unused search expressions.

Take that comparison for what it's worth.

#10 Black_Knight

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9339 posts

Posted 29 August 2005 - 01:37 PM

I have the feeling that 1000 backlinks for a spanish page is a huge amount of links

You could be right, which is in turn bringing you two separate lessons.

Lesson 1: The Spanish market may be more 'niche' than you really need to be. The Spanish-speaking market will probably offer you a lot more maneuvering room, and opportunities to work with the Latin American website pool.

Lesson 2: It is hard for a Spanish page to gain 1,000 backlinks unless it says something truly remarkable that people will want to pass on to others. 10 Spanish pages could probably more easily gain 100 links each. 100 Spanish pages would only need to garner 10 links each. Stop trying to collect the ocean in a bucket. Diversify. Think broad. Think multi-channel strategies.

#11 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 30 August 2005 - 02:30 AM

Black_Knight,

I have a question related to what you just said about keywords, though I'm not sure this is the right forum to ask it... Google used to ignore parts of search queries like 'the', 'in', 'at'..., saying those words were too common and wouldn't make any difference in the search results.

But for a while now Google hasn't said so anymore. What would be the better strategy in keyword selection: to choose 'construction in Dallas' or 'construction Dallas'?

(excuse if the expressions are not right in English)

#12 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 30 August 2005 - 04:43 AM

And yet another one...

What are the results obtained with the Google_id key, if they differ so much from reality? Are those future results? Or is it just a substitute for watching the Google Dance, which has long been useless?

It's almost a metaphysical question, isn't it? :)

#13 Black_Knight

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9339 posts

Posted 30 August 2005 - 08:31 AM

The answer to your first question is "construction in dallas".

The small insignificant words are called stop words and are still specifically ignored, but Google does still know that there was a word in the middle. It searches for "construction * dallas", where * is a wildcard matching any insignificant word.

So you get the exact same results for each of the following:
http://www.google.co...ction in dallas
http://www.google.co...ction of dallas
http://www.google.co...tion the dallas
http://www.google.co...uction a dallas
http://www.google.co...tion who dallas
http://www.google.co...on which dallas

but not the same as
http://www.google.co...truction dallas

because there was no insignificant word in the middle, nothing to match the wildcard.
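
That wildcard behavior can be illustrated with a toy matcher (a sketch only; the stop-word list below is a made-up subset, and Google's real query handling is of course far more involved):

```python
import re

# Illustrative stop-word subset (Google's actual list is not public).
STOP_WORDS = {"a", "at", "in", "of", "the", "who", "which"}

def query_to_pattern(query):
    """Build a regex where each stop word matches any single word."""
    parts = [r"\w+" if w in STOP_WORDS else re.escape(w)
             for w in query.lower().split()]
    return r"\b" + r"\s+".join(parts) + r"\b"

def matches(query, text):
    return re.search(query_to_pattern(query), text.lower()) is not None

# The stop word acts as a wildcard, so these two queries behave identically...
assert matches("construction in dallas", "new construction near dallas")
assert matches("construction of dallas", "new construction near dallas")
# ...while the two-word query requires the terms to be adjacent.
assert not matches("construction dallas", "new construction near dallas")
```

Any middle word satisfies the wildcard here, which mirrors the observation that all the stop-word variants of the query return the same results.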

#14 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 31 August 2005 - 04:12 AM

Please take a look at this specific page:

www.grupohabitathumano.es/endesarrollo.php

It's supposed to be optimized for 'constructor alicante', but it appears as the 219th result on google.es.

If you look at the 'text in cache', Google's toolbar highlights the searched words found in the text. I think it should rank much better.

Don't you think so?

By the way, I've been doing a little link building as you suggested, and I'm thinking about other versions of the pages or some kind of blog as you also suggested, but I'm going nuts.

I'm on the 1st page of Yahoo for 61 keywords and the 1st page of MSN for 43 keywords, but on Google I'm still on the 1st page for only 2 keywords. It's so annoying! After all, Google is the target!

#15 Black_Knight

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9339 posts

Posted 31 August 2005 - 07:07 AM

Okay, we need to go back a little, to the end of the last decade and millennium, when "on-page" criteria were all there was. Back then, showing me the use of keywords on a page was all that was needed to see if a page might rank well. Your question above would have been perfectly reasonable back in 1999.

However, even then a little search engine called Google was around, and a very small percentage of users had heard of it. Google didn't care too much what was on a page, because cloaking, invisible text, hidden layers, and off-screen CSS positioning made anything it could 'see' on the page questionable anyway. Maybe what Google could see was not what the user would see. So Google placed far more emphasis on human review data.

This means that Google was not as interested in the words actually on the page as in the words that other human beings used to describe the page. It was easy to find this data by looking at the words used in and around links that pointed to the page.

Want to see just how big a factor that is?

http://www.google.co...ch?q=click here

Look at the number one result out of the billion results for this search.
http://www.google.co...html click here

These terms only appear in links pointing to this page: click here


If you want to know which page should rank highest for a phrase, survey all the links on the web and see which page most of the highest-reputation sites link to using that phrase and parts or variants of it. That's how Google does it.
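
A crude illustration of that survey, with hypothetical link data: count how often a phrase appears in the anchor text of links pointing at each page, and rank pages by those "votes" (ignoring link quality and reputation entirely, which the real system certainly does not):

```python
from collections import Counter

# Hypothetical (anchor_text, target_url) pairs harvested from a crawl.
links = [
    ("click here", "example.com/download"),
    ("click here to get it", "example.com/download"),
    ("click here", "example.com/download"),
    ("useful article", "example.com/article"),
    ("click here", "example.com/article"),
]

def rank_for_phrase(phrase, links):
    """Rank target pages by how many inbound anchors contain the phrase."""
    votes = Counter()
    for anchor, target in links:
        if phrase in anchor.lower():
            votes[target] += 1
    return [url for url, _ in votes.most_common()]

# The download page wins for "click here" even though the phrase may never
# appear anywhere in its own body text.
assert rank_for_phrase("click here", links)[0] == "example.com/download"
```

That is exactly the "click here" effect: a page can top the rankings for a phrase that appears only in the links pointing at it.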

#16 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 31 August 2005 - 09:48 AM

Thank you so much for bringing light to my mind.

What an example!!

The real problem now is how! I'm afraid there's no choice but heavy link building.

Anyway, when I look at the first results for some of my keywords... not even this rule applies, at least not clearly enough to figure it out easily.

#17 Michael_Martinez

    Time Traveler Member

  • 1000 Post Club
  • 1354 posts

Posted 31 August 2005 - 01:49 PM

Google has ALWAYS cared about on-page content. They have never wavered from their stipulation that on-page content factors must be taken into consideration. They have varied how much on-page content helps, but over the past two years on-page content has helped far more than SEOs in general have realized (because they have invested so much time and effort in link building, they have missed the boat on the shift in Google's priorities).

Google determines RELEVANCE before it determines anything else. A page may have external data (mostly derived from inbound links) which helps to establish its relevance, but Google cannot, and never has, ignored on-page content when determining relevance.

After it determines RELEVANCE, it looks at other factors, among which is what it regards as IMPORTANCE, as measured by its distinctive PageRank algorithm (not the toolbar's 0..10 values). PageRank measures importance on the basis of the product of the total cumulative importance of the pages linking to a page (each divided by its number of outbound links) AND an arbitrary but adjustable term they call a damping factor, which represents the probability that a random surfer will abandon following a chain of links from page to page.

Favorable damping factors can outweigh the cumulative PageRank conferred by inbound links. Unfavorable damping factors can also outweigh the cumulative PageRank of inbound links. Google's technical papers imply that a majority of Web documents are assigned a standard, arbitrary damping factor. When competing only against each other, these documents will be ranked in importance on the basis of the importance of their inbound links.

No one outside of Google is able to determine how they determine relevance or how they set the damping factors.
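
The recurrence described above can be sketched as a short power iteration (a minimal illustration over a hypothetical three-page graph; Google's actual damping values, scale, and modifications are unknown outside the company):

```python
# PR(p) = (1 - d)/N + d * sum(PR(q) / outlinks(q)) over pages q linking to p,
# iterated until the values settle. d is the damping factor.
def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        pr = {
            p: (1 - d) / n
               + d * sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            for p in pages
        }
    return pr

# Hypothetical mini-site: both inner pages link home, so "home" accumulates
# the most importance.
graph = {"home": ["grupo", "contact"], "grupo": ["home"], "contact": ["home"]}
ranks = pagerank(graph)
assert ranks["home"] > ranks["grupo"]
```

Note that nothing in this computation looks at keywords, which is why PageRank by itself says nothing about relevance.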

#18 Ruud

    Hall of Fame

  • Hall Of Fame
  • 4887 posts

Posted 31 August 2005 - 01:58 PM

No one outside of Google is able to determine how they determine relevance or how they set the damping factors.


Couldn't you infer it from testing?

#19 Black_Knight

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9339 posts

Posted 31 August 2005 - 03:21 PM

So, Michael: of all the articles pointing out that 'click here' is bad text to use in links, the thousands of usability articles and debates, plus all the hundreds of mentions of this issue by SEOs, any of which would really be a relevant result for "click here", why has not a single one of those ever beaten the absolutely irrelevant number one result, which has not one single solitary mention of either word, and thus has zero on-page factors?

Because with enough off-page factors, on-page factors can never beat them. The same cannot be said of on-page factors.

You may not like it, but it's hard to dispute the evidence.

Of course, if you believe you can write a page to rank at #1 for "click here", I would be delighted to see it.

#20 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 01 September 2005 - 02:04 AM

Yeah!

Now we're getting to the point!

So this means both factors do influence the results, but one is more important than the other; or perhaps they weigh the same, but an overwhelming number of backlinks makes on-page factors irrelevant.

The thing is, if I understood right, on-page factors can never carry as much weight as 1 million backlinks can, and that's why, in some cases, on-page content doesn't matter at all (besides the fact that without quality content you never get that many backlinks).

Did I get it wrong?

#21 Black_Knight

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9339 posts

Posted 01 September 2005 - 09:18 AM

There's always a risk of over-simplification, but what you say can be seen in many, many cases, which proves it does happen that way.

The over-simplification is that we are assuming all links to be equal, which they are not. Just a very few high-value links can outweigh thousands of low-value links. Indeed, some links can be worth nothing at all. That's why I still believe in, and practice, a content-driven approach to link building. Remarkable content gets links of higher value for far less effort.

#22 bwelford

    Peacekeeper Administrator

  • Site Administrators
  • 9008 posts

Posted 01 September 2005 - 09:58 AM

The other thing to add is that what appears on the search engine results page (SERP) for the keyword search is important. The entry for a particular web page will have a title and a snippet taken either from the visible content or from the description. If the keywords don't appear anywhere on the web page, then they can't appear in the entry on the SERP. This could mean that the searcher may choose to click on one of the other entries in the SERP where the connection to his keywords is more obvious.

#23 JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 01 September 2005 - 10:49 AM

OK, so that old document I suppose all of us have read about calculating PageRank (not the toolbar's PageRank) still applies, as Black_Knight pointed out.

The one that said the PR of a page can be calculated through a formula. I think I have the URL: http://www.iprcom.com/papers/pagerank/

Do you think it still applies?

Even so, we have the 2 sides of the equation:

POSITION = PageRank (based on links) + Rank (based on content)

The content side is more or less clear... (at least let's leave it aside for now.)

The links side is affected by that equation (or some other one that rates the quality of backlinks).

But where does the relevance of a link to the keywords fit in?

Because it's not enough to obtain links; those links also have to be closely related to the keywords to count, isn't that so?

#24 Michael_Martinez

    Time Traveler Member

  • 1000 Post Club
  • 1354 posts

Posted 01 September 2005 - 10:57 AM

Michael: No one outside of Google is able to determine how they determine relevance or how they set the damping factors. 

Ruud: Couldn't you infer it from testing?


Anyone can guess. Proving that an inference is correct is well beyond the means of anyone currently in the SEO community.

Black Knight: So, Michael, of all the articles pointing out that 'click here' is bad text to use in links, the thousands of usability articles and debates, plus all the hundreds of mentions of this issue by SEOs, any of which would really be a relevant result for "click here", why has a single one of those never beaten the absolutely irrelevant number one result which has not one single solitary mention of either word, and thus has zero on-page factors?


What, exactly, is the point of your question?

On-page factors win with Google for a multitude of search expressions. That isn't going to change any time soon. People need to focus on the basics of good Web site design before they start hyperoptimizing, especially when most of them don't need to hyperoptimize in the first place.

#25 Black_Knight

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9339 posts

Posted 01 September 2005 - 11:38 AM

What, exactly, is the point of your question?

To show that your own earlier point is questionable at best, and at worst blatantly false. I refer to your absolute statement earlier:

Google has ALWAYS cared about on-page content.

That is an absolute. The word 'always' means, by definition, without any exceptions. You even added emphasis by putting the word in all caps. By showing an exception, even one, I prove the statement to be utterly false. You compounded this by stating:

They [Google] have never wavered from their stipulation that on-page content factors must be taken into consideration.

I then demonstrated not only that they are not taking into account that the page ranked #1 for "click here" has absolutely no on-page criteria for that phrase, but went on to point to hundreds of pages that do have the on-page criteria to support a good ranking.

of all the articles pointing out that 'click here' is bad text to use in links, the thousands of usability articles and debates, plus all the hundreds of mentions of this issue by SEOs, any of which would really be a relevant result for "click here", why has not a single one of those ever beaten the absolutely irrelevant number one result, which has not one single solitary mention of either word, and thus has zero on-page factors?

That, dear Michael, was the point. I think it is one well made. Have you an equally effective counter-point?

Of course, if you believe you can write a page to rank at #1 for "click here", I would be delighted to see it.

Oh, that part was the throwing down of a gauntlet. Have you the commitment or courage to pick it up, or do you concede that on-page factors are not enough to beat purely off-page factors in this case?

#26 Michael_Martinez

    Time Traveler Member

  • 1000 Post Club
  • 1354 posts

Posted 01 September 2005 - 01:51 PM

Michael: What, exactly, is the point of your question? 

Black_Knight: To show that your own earlier point is questionable at best, and at worst can be blatently false. I refer to your absolute statement earlier:


Son, that dog won't hunt.

Let's take a look at the famous Anatomy of a Large-Scale Hypertextual Web Search Engine paper that Messrs. Brin and Page published in 1998. This is, as most people are now aware, the seminal description of what Google is all about.

Section 2.2 deals with anchor text:

The text of links is treated in a special way in our search engine. Most search engines associate the text of a link with the page that the link is on. In addition, we associate it with the page the link points to....


I have emphasized their point that they are looking at the targets of links in addition to the pages on which the links are found. From the very start, Brin and Page acknowledged that a link's anchor text was relevant to its hosting page. The SEO community has buried this fact for years.

And there is more.

Section 2.3 deals with "Other Features":

Aside from PageRank and the use of anchor text, Google has several other features. First, it has location information for all hits and so it makes extensive use of proximity in search...


Again, we see that they are concerned with on-page content.

...Second, Google keeps track of some visual presentation details such as font size of words. Words in a larger or bolder font are weighted higher than other words....


Same section, same concern (on-page content).

Section 4.1, "Google Architecture Overview", paragraph 2:

...The indexing function is performed by the indexer and the sorter. The indexer performs a number of functions. It reads the repository, uncompresses the documents, and parses them. Each document is converted into a set of word occurrences called hits. The hits record the word, position in document, an approximation of font size, and capitalization....


Again, they are concerned with on-page content.

Section 4.2.5, "Hit Lists":

A hit list corresponds to a list of occurrences of a particular word in a particular document including position, font, and capitalization information. Hit lists account for most of the space used in both the forward and the inverted indices. Because of this, it is important to represent them as efficiently as possible....


Again, they are concerned with efficiently managing the data they have retrieved from on-page content.

Second paragraph begins with:

Our compact encoding uses two bytes for every hit. There are two types of hits: fancy hits and plain hits. Fancy hits include hits occurring in a URL, title, anchor text, or meta tag. Plain hits include everything else....


"Anchor text" is only one factor they mention here, and they don't distinguish between anchor text of on-page links and anchor text of off-page (inbound) links.

Section 4.2.6, "Forward Index":

The forward index is actually already partially sorted. It is stored in a number of barrels (we used 64). Each barrel holds a range of wordID's. If a document contains words that fall into a particular barrel, the docID is recorded into the barrel, followed by a list of wordID's with hitlists which correspond to those words....


Again, they are describing how the contents of the documents (the on-page contents) are being indexed.

Section 4.2.7, "Inverted Index", second paragraph:

An important issue is in what order the docID's should appear in the doclist. One simple solution is to store them sorted by docID. This allows for quick merging of different doclists for multiple word queries. Another option is to store them sorted by a ranking of the occurrence of the word in each document. This makes answering one word queries trivial and makes it likely that the answers to multiple word queries are near the start. However, merging is much more difficult. Also, this makes development much more difficult in that a change to the ranking function requires a rebuild of the index. We chose a compromise between these options, keeping two sets of inverted barrels -- one set for hit lists which include title or anchor hits and another set for all hit lists. This way, we check the first set of barrels first and if there are not enough matches within those barrels we check the larger ones.


Again, I have emphasized their key point: they compromised between the two methods, one of which is concerned with number of occurrences of a word in a document.
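
The two-barrel compromise quoted above can be sketched with a toy indexer (purely illustrative; the real structures are compact binary barrels keyed by wordID, not Python dicts). "Fancy" hits from titles go in one inverted index, plain body hits in another, and a lookup consults the fancy barrel first:

```python
from collections import defaultdict

def build_barrels(docs):
    """docs: {doc_id: {"title": ..., "body": ...}} -> (fancy, plain) indexes."""
    fancy, plain = defaultdict(set), defaultdict(set)
    for doc_id, doc in docs.items():
        for word in doc["title"].lower().split():
            fancy[word].add(doc_id)   # title/anchor occurrences: fancy hits
        for word in doc["body"].lower().split():
            plain[word].add(doc_id)   # everything else: plain hits
    return fancy, plain

def lookup(word, fancy, plain):
    """Check the fancy barrel first; fall back to the larger plain barrel."""
    word = word.lower()
    return sorted(fancy[word]) or sorted(plain[word])

docs = {
    1: {"title": "construccion alicante", "body": "obras y promociones"},
    2: {"title": "inicio", "body": "empresa de construccion en alicante"},
}
fancy, plain = build_barrels(docs)
assert lookup("construccion", fancy, plain) == [1]  # title hit outranks body hit
assert lookup("empresa", fancy, plain) == [2]       # only a plain hit exists
```

The design choice mirrors the quoted passage: checking the small set of title/anchor barrels first keeps the common case cheap, and the full barrels are consulted only when the fancy hits are not enough.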

Section 4.5.1, "The Ranking System":

Google maintains much more information about web documents than typical search engines. Every hitlist includes position, font, and capitalization information. Additionally, we factor in hits from anchor text and the PageRank of the document. Combining all of this information into a rank is difficult. We designed our ranking function so that no particular factor can have too much influence. First, consider the simplest case -- a single word query. In order to rank a document with a single word query, Google looks at that document's hit list for that word....


So, their first stab is to look at the on-page content. And they note that PageRank is only one of several factors taken into consideration.

They do not simply grab pages on the basis of the anchor text of inbound links. They grab pages on the basis of a complex evaluative process which can be, and has been, influenced by an abundance of inbound-link anchor text, that abundance usually being contrived by artificial means for the purpose of spoofing the search engine.

Some people call that "good SEO". However you distinguish between one kind of inbound link and another, the point is that Google has ALWAYS been concerned with on-page content.

In fact, I do very well focusing on on-page content, and so do many other people. Do we need links? Sure. Google is crawling the Web constantly, and the more inbound links you have, the more frequently your content is crawled.

But I don't rely upon inbound-link anchor text to establish relevancy. That is, in my opinion (based on extensive experience maintaining over 100 top-ten search positions on Google since 1999), a waste of time, effort, and resources.

That the SEO community has impaired itself in general by focusing on one of the most inefficient uses of its time and resources doesn't in any way invalidate what I have said about Google always being concerned with on-page content.

#27 Black_Knight

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9339 posts

Posted 01 September 2005 - 02:58 PM

Michael, I don't know if you are being deliberately obtuse by knowingly keep refering to one algorithm that has no topicality or keyword specific factors, or are just under-educated about search engines, but with your experience I naturally have to assume the former.

Noone here but you is discussing PageRank.

We are discussing off-page ranking factors, and most specifically off-page relevancy factors. PageRank is not a relevancy factor. PageRank has never been a relevancy factor. PageRank is just a number that indicates the probability of landing on a given page if you were to preform a random crawl. PageRank therefore has absolutely no place in this discussion.

The document you cite there is not about Google. It is about PageRank, and was based on research done in Stanford before there was a Google. It describes just one algorithm with a very specific job - to add popularity factoring to an already relevancy-sorted set of results. It is an after-filter, not a search algorithm.

In addition, that document not only predates the existance of Google as a company, or indeed as a full search engine, looking only at one algorithm that could be added to an existing fully-functional search engine, but it predates the work of hundreds of top-level Information Retieval Scientists that Google has employed for the past 7 years.

Perhaps you think all those scientists are employed merely to bow around the shrine of PageRank. I however am a realist who accepts that the work of hundreds of such scientists is likely extensive. If Page and Brin came up with PageRank largely alone, imagine how many funky algorithms those hundreds of scientists have come up with in the span of the past 7 years. Must be over 100 things as good, big, and important as PageRank at minimum. I'd estimate far more.

We know for a fact that PageRank is not a sacred cow that employees are not allowed to criticise or change with a view to improvement. TSPR and Local Rank are just 2 of many examples of work that has changed PageRank so dramatically from that initial paper that whatever importance the paper once had is now only as a historical record of what they chose to tell us at the time. It says nothing about how Google works now, or about the thousands of other algorithms that might be applied before a heavily modified version of PageRank, many generations removed from the original paper's description, is applied.

PageRank is utterly irrelevant to this discussion on a dozen different grounds, any one of which would suffice.

Have you any argument that actually bears some relevance to the topic of keyword-specific rankings, or is spouting 6-years-out-of-date, pedestrian references to what everyone already knows all you have to offer?

That the SEO community has impaired itself in general by focusing on one of the most inefficient uses of its time and resources doesn't in any way invalidate what I have said about Google always being concerned with on-page content.

Of course it doesnt. But my presentation of a page that has no on-page factors at all that beats so many hundreds of better documents, solely on the basis of link-text most certainly does.

For a thing to be 'always' true, there can be no exceptions. One exception makes always a mere sometimes at best, and far more likely an actual error of understanding of which factors are truly in play.

#28 Michael_Martinez

Michael_Martinez

    Time Traveler Member

  • 1000 Post Club
  • 1354 posts

Posted 01 September 2005 - 03:32 PM

Michael, I don't know if you are being deliberately obtuse by knowingly continuing to refer to one algorithm that has no topicality or keyword-specific factors, or are just under-educated about search engines, but with your experience I naturally have to assume the former.

No one here but you is discussing PageRank.


I'm not discussing PageRank, Ammon. I am pointing out that Google has ALWAYS been concerned with on-page content.

The point is that I have backed up what I said with the most authoritative source on the subject.

You can be as insulting and belittling as you wish, but I will continue to rely upon what Google says (and does) in this matter. On-page content matters to Google, always has mattered to Google, and probably always will (although only Google can make that determination).

#29 Ruud

Ruud

    Hall of Fame

  • Hall Of Fame
  • 4887 posts

Posted 01 September 2005 - 06:14 PM

Google has ALWAYS been concerned with on-page content


But perhaps then in the same way politicians are concerned about the environment?

My impression is that in the current scheme of things, content has very little to do with it. Using links you can rank a page high for a term it doesn't contain. Using on-page factors you cannot rank high if you don't have the links to support it. As such, off-page factors seem to be the deciding factor, more so than on-page.

If the situation is such that links do ultimately decide, then Google's stated concern with on-page factors is a moot point at best.

Of course no-one is arguing against on-page per se. Using titles and emphasis you can do beautiful things in Yahoo and MSN.

I found Ammon's "click here" example convincing and interesting. I wonder, Michael, if, as advocate of "the other camp", you could present a similar example which underscores your position?

#30 bwelford

bwelford

    Peacekeeper Administrator

  • Site Administrators
  • 9008 posts

Posted 01 September 2005 - 08:21 PM

Perhaps we can have a little fun here. :)

Just to even up the score, perhaps I can enter the ring to support Michael as the second member of a two-man tag wrestling team.

I find the Click Here example a somewhat extreme case, and extreme cases do not always provide convincing proof of general principles. I know that in-page content is important. You would never get any Google official to say otherwise. I think it's more than the politicians mouthing support for the environment, Ruud. Looking at some of my historical blog postings that have only one or two internal links to them, they can score in the first few results for Google keyword searches against what might be thought strong competition based on external links.

I'm also not sure how Google assesses words versus meanings now. Could a button labelled Download (with associated code that includes onclick) be deemed to have a similar 'meaning' to a Click Here link?

That led me on to do a Google search for Download. Interestingly, the #1 web page is Download.com while the #2 is our friend the Adobe Reader Download web page. The surprising thing is that the Download.com web page has a PageRank of 8, while the Adobe Reader Download web page has a PageRank of 10. I know PageRank may or may not be a good measure of back links, but this does seem an interesting case.

Would you say that this constitutes the counter-example that Michael was encouraged to find? .. or did I go astray somewhere in my thinking?

#31 Black_Knight

Black_Knight

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 9339 posts

Posted 01 September 2005 - 08:51 PM

You went just a little astray.

I find the Click Here example a somewhat extreme case and extreme cases do not always provide convincing proof of general principles.

If you think the Click Here example is extreme, then pick any Googlebomb. For example, did the President's page ever contain the word 'miserable' or the word 'failure'? There are thousands of other examples to demonstrate the same point, but I deliberately chose the one for which Google has found over one billion 'relevant' pages and yet ranks at number one the one page with absolutely no supportive on-page criteria. Despite how common online the words 'click' and 'here' are, Google managed to pick one of the very few pages that contains neither word.

There are literally thousands of other examples that make a nonsense of the idea that on-page factors are essential to a high ranking. The converse, however, does not hold: no page with no off-page criteria (no inbound links) could ever rank purely on on-page criteria. We all know that a page must have at least one inbound link even to be considered.

The point illustrated: Links are essential to ranking. Use of keywords is not essential to ranking.

Meanwhile, your counter-example is actually another confirmation of my argument, not a refutation of it. Download.com is ranked number one because it has thousands of links pointing to it using that word, and additionally it has the word in the domain name (another example of an off-page criterion).

Find me a site with few inbound links, none of which contain the keyword in or near them, that still ranks better than a site with even just five good quality links that do use the keyword.

#32 Ruud

Ruud

    Hall of Fame

  • Hall Of Fame
  • 4887 posts

Posted 01 September 2005 - 08:56 PM

Ah, I love fun!

I find the Click Here example a somewhat extreme case and extreme cases do not always provide convincing proof of general principles.


True - but it seems to be a general principle:

download it: first 10 results the term only exists in the links pointing to it
download here: link power
read here: there we go again
map it: ... not again!!

Now let's take... erm... "download here". Without quotes it gives 285,000,000 results. I've been taught that doesn't mean squat. Who is at least trying to target it, trying to optimize? allintitle:download here: 98,000.

If Google really went the content way, I'd expect it to start with those 98 thousand and pick from among them. Good old content: title, H tags. But instead Google says: no, no, let me present you with these sites which have so many links about the term pointing to them.

Content is not a determining factor then. It can tip the balance when all other things seem to be equal. But when they're not equal, Google has no problem letting the links decide - no matter what the content says... or in this case doesn't say.

#33 Michael_Martinez

Michael_Martinez

    Time Traveler Member

  • 1000 Post Club
  • 1354 posts

Posted 01 September 2005 - 10:16 PM

My impression is that in the current scheme of things, content has very little to do with it...


All I can do is point to the obvious sources which show that Google stipulates otherwise.

I cannot force people to accept the facts.

...Using on-page factors you cannot rank high if you don't have the links to support it....


I do it all the time. So do other people.

The "click here" example, of course, is completely unrealistic because people don't naturally search for it, much less optimize for it.

#34 Michael_Martinez

Michael_Martinez

    Time Traveler Member

  • 1000 Post Club
  • 1354 posts

Posted 01 September 2005 - 10:23 PM

Content is not a determining factor then. It can tip the balance when all other things seem to be equal. But when they're not equal, Google has no problem letting the links decide - no matter what the content says... or in this case doesn't say.


Do you want to insist that the word "pizza" isn't found on Pizza Hut's site? That "Britney Spears" isn't found on Britney Spears's site?

What about hurricane katrina? How many sites have been optimized for that expression through link power?

Sorry. That dog just don't hunt.

When you look at real, natural search expressions that people use every day, you find that on-page content matters, matters most, and is taken into consideration before the linkage.

Links CAN be abused and ARE abused by the SEO industry every day. No one argues with that.

But the fact is that most Web content is not set up by SEO people and most searches are not conducted by SEO people and most searches don't even go after the expressions that SEOs are most prone to discuss.

#35 projectphp

projectphp

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 3935 posts

Posted 01 September 2005 - 10:31 PM

When you look at real, natural search expressions that people use every day, you find that on-page content matters, matters most, and is taken into consideration before the linkage.

Always? Seriously, always? I don't think Ammon ever disagreed that many searches (like http://www.google.co.....in the world" ) are on-page content driven, but that is not the same as the assertion that all searches are on-page content driven, which is false.

For a long time, many Australian Government sites ranked number one for their name, despite having zero pages indexed, due to technological issues. You can't get much more proof that on-page isn't always used than a page that isn't indexed at all ranking well!

#36 Ruud

Ruud

    Hall of Fame

  • Hall Of Fame
  • 4887 posts

Posted 01 September 2005 - 11:50 PM

These terms only appear in links pointing to this page



#37 DaveChild

DaveChild

    Honored One Who Served Moderator Alumni

  • Hall Of Fame
  • 3446 posts

Posted 02 September 2005 - 02:41 AM

Do you want to insist that the word "pizza" isn't found on Pizza Hut's site? That "Britney Spears" isn't found on Britney Spears site?


Are you saying that if those words were not on-page, those sites would not be number 1 for those phrases? The chances are that they still would be.

Actually, the Britney example goes a long way towards proving Ammon's point. "Britney Spears" appears only in the page title - nowhere else (the word "Britney" in the sample text shown on Google's results page comes from the ODP listing). The page itself is Flash - no content, no meta tags, nothing but a title. Every link to that site, of course, includes the words "Britney Spears" - which is why it is number one for "Britney Spears".

#38 JMira

JMira

    Ready To Fly Member

  • Members
  • 45 posts

Posted 02 September 2005 - 03:03 AM

Yes bwelford, we're sure having fun!

But what I think Black Knight means (and I think I already said so) is that on-page content matters, but only as a part of the equation.

Inbound links usually only complement in-page content to provide positioning, but in some cases there are so many inbound links that in-page content doesn't matter anymore.

Correct me if I'm wrong, Black Knight.

Perhaps what we should do is to put both arguments together to obtain a global view. That's it.

#39 bwelford

bwelford

    Peacekeeper Administrator

  • Site Administrators
  • 9008 posts

Posted 02 September 2005 - 07:01 AM

Wow, I go to bed and I get up and we're still all having fun here. :D

Actually I think my tag team wrestling analogy wasn't the best image of what is going on. A better picture would be those 6 blind men standing around the elephant and trying to describe what they sense is there. We're all blind, of course, because that's the way Google prefers it. I'm glad to see, JMira, if I'm interpreting what you say correctly, that you're on the same side of the elephant as Michael and me.

Perhaps I can try to make sure at least we know what we know and what we don't know (the facts as Ruud said). That may help to have a little better understanding of the elephant. When I go astray, I hope people will be kind enough to point that out. :)

1. Google does not parse within words. So even though the tokens 'onclick' and 'there' appear in the source code for the Adobe Reader Download page, Google would not see the words 'click' and 'here' inside those longer words.
2. When we say 'on-page' content, I assume we're all accepting that those are the words that Google acknowledges are 'part of the page'. So it does not include the tokens onclick and there that I mentioned in Point 1, since these are part of the coding of the page. On the other hand, it would include the Title, the actual browser-visible content of the page, the ALT text for images, any text in a <NOFRAMES> section of a framed page, and so on. It would not include the contents of the Keywords metatag, since Google ignores this. It probably does not include the Description metatag either: although this is retained in their database, it is only used to provide a more relevant snippet in the SERP.

Hopefully I'm OK so far. Now we may start to diverge.
3. Google now assesses web pages based on meaning rather than the exact keyword used. Sometimes this can be as simple as accepting the plural version of a keyword when the singular version was used by the searcher. Sometimes it can be accepting that the American 'optimization' has the same meaning as the UK 'optimisation'. And sometimes the actual words used may look quite different. Google's algorithms now work on meaning rather than merely on the precise keyword used.

Perhaps even greater divergence on the next one.
4. The Google process is to select a subset of say 50,000 web pages from its total database for any keyword search. This uses a basic indexing process for every web page that associates with it certain keywords. This may not be based on meaning but rather on a whole set of associated keywords, a bit like tagging. So when you do a search for 'Click Here', the Adobe Reader Download web page ends up in this subset of 50,000 web pages. [I should say that for me, this step is the really interesting part of the puzzle.]

5. The final step of how these 50,000 web pages are ranked by the current Google algorithm using those 'over 100 factors' that are talked about is probably something we wouldn't argue too much about. Back links clearly are rated very highly by Google.
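The two-stage process in points 4 and 5 can be sketched roughly as follows. This is my own simplification with invented example pages, not Google's implementation: an inverted index first selects the candidate subset for the query terms, and only that subset is then scored by the ranking factors. If anchor-text words are folded into a page's index entry ('a bit like tagging'), a page like the Adobe one can enter the candidate set for 'click here' without containing either word:

```python
from collections import defaultdict

# Hypothetical pages: id -> on-page words (invented for illustration).
docs = {
    1: "adobe reader download page",            # never says 'click' or 'here'
    2: "click here to download free software",
    3: "constructora alicante home page",
}

# Words appearing in links that point at a page (anchor text), also invented.
anchor_text = {1: "click here to get it"}

# Stage 1: build an inverted index, word -> set of page ids, folding
# anchor-text words into the *target* page's entry.
index = defaultdict(set)
for page_id, text in docs.items():
    for word in text.split():
        index[word].add(page_id)
for page_id, text in anchor_text.items():
    for word in text.split():
        index[word].add(page_id)

def candidates(query):
    """The candidate subset for a query: pages matching every term."""
    term_sets = [index.get(word, set()) for word in query.split()]
    return set.intersection(*term_sets) if term_sets else set()

# Page 1 enters the 'click here' subset purely via its anchor text;
# stage 2 (the '100+ factors') would then rank only these candidates.
assert candidates("click here") == {1, 2}
assert candidates("download") == {1, 2}
```

The interesting part of the puzzle, as point 4 says, is exactly what goes into each page's index entry at stage 1, because a page that never makes it into the candidate subset can never be ranked at all.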

That's how I'm seeing it from my position. My real question would be, "If we had a Google representative joining the discussion, which side of the elephant would he or she be?"

Would it be the 'The web page must have some relevant meaning that links to the key word search' side?

Or would it be the 'Links are all that count and on-page content is very, very much secondary' side?

Oh where is GoogleGuy when you need him/her?

#40 Ruud

Ruud

    Hall of Fame

  • Hall Of Fame
  • 4887 posts

Posted 02 September 2005 - 08:26 AM

A better picture would be those 6 blind men standing around the elephant and trying to describe what they sense is there.


I like the elephant thing :D

In this case however we're not trying to describe the elephant - we're trying to describe where the elephant goes. Some say it ALWAYS pays attention to where there are trees with fine, juicy, healthy leaves. Others say, no, the elephant doesn't ALWAYS go to actual leaves - it often pays attention to where you say there are trees with fine, juicy, healthy leaves.

Given that we're talking about an elephant it's easy to track its actual movements, even for 6 blind men...

"In the wild" this elephant does go to places of which it has been told that there are trees with fine, juicy, healthy leaves. Once established that it does go there while there are, in fact, no leaves, and in some cases not even trees, we can no longer hold true the theory that the elephant ALWAYS pays attention to where the trees with the leaves are.

As good biologists we can then further test our theory. We place the elephant near actual trees with actual leaves, but point in the direction of the desert and say: no, seriously, the real trees are over there.

Again we observe the elephant moving away from the real trees to move towards the place it has been told there are leaves.

Once back in the urban world, what do we put in our report? That elephants ALWAYS pay attention to where the trees with leaves are?

"If we had a Google representative joining the discussion, which side of the elephant would he or she be?"


He or she would tell you there is a tipping point where what the elephant is told by a group of people weighs more strongly than where it knows there are trees.

You can outrank any page for any term with the right amount of links. You can't mount such an "attack" purely with content on your one page. Links can win over content, content can't win over links. Therefore; content is secondary.

The issue is inherent to paying attention to links. You can penalize for on-page keyword stuffing - but, apart from some obvious cases of abuse, you can't penalize for off-page "link stuffing". Ranking a page lower the more often it repeats a keyword, after some point, makes sense. Ranking a page lower the more links there are to it with a specific text would be utter nonsense.
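That asymmetry can be made concrete with a toy scoring model. This is purely illustrative, my own invention rather than anything Google has published: the on-page contribution is capped (and stuffing beyond the cap can be penalised), while every additional link with matching anchor text keeps adding weight, which is what produces the tipping point described above:

```python
# Capped on-page contribution vs. unbounded link-text contribution.
# All numbers here are arbitrary, chosen only to show the shape of the model.

def onpage_score(keyword_count, cap=5):
    # Diminishing returns: repetitions beyond the cap add nothing
    # (and in a real engine might even trigger a stuffing penalty).
    return min(keyword_count, cap)

def link_score(matching_anchors, weight=2):
    # Unbounded: each extra link whose text contains the keyword adds weight.
    return matching_anchors * weight

def total_score(keyword_count, matching_anchors):
    return onpage_score(keyword_count) + link_score(matching_anchors)

# A page that never mentions the keyword, but has 20 matching links,
# outranks a heavily optimised page with only 2 such links.
assert total_score(0, 20) > total_score(50, 2)
# Beyond the cap, extra on-page repetition buys nothing at all.
assert onpage_score(100) == onpage_score(5)
```

In a model of this shape, content can only ever tip the balance between pages with similar link profiles; with enough matching links, the link term dominates no matter what the page says.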


