Know Your Risk: Penguin Analysis | Panda Risk

More on Google’s Javascript Handling

Many of you probably noticed my recent post without any substantive content. I was seeking to answer the following questions: Does Google wait for timeouts and include that content in the index? Is that content searchable? How does Google handle content generated at intervals in JavaScript? Will Google index content that is only displayed after an action like a button click occurs? We now have some pretty solid answers to each question. Does Google wait for timeouts and include that content in the index? Is that content searchable? Yes. Google does wait for timeouts and includes that content in the index. That is to say, the content that was displayed after the timeout is included in the search index such that you can find it by searching Google for...

Testing JS: Nothing To See Here

Seriously, I am just testing some stuff out with Googlebot. The JavaScript running on this page should help us learn a couple of things: 1. Does Google wait for timeouts and include that content in the index? Is that content searchable? 2. How does Google handle content generated at intervals in JavaScript? 3. Will Google index content that is only displayed after an action like a button click occurs? It is worth pointing out that Google still appears to be parsing JavaScript asynchronously. This page was almost instantly indexed by Google, but the JavaScript-generated content has not yet been parsed. the marker blue wax among elephant made candle popular kids...
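The three tests above can be sketched roughly as follows. This is a minimal illustration of the kinds of scripts such a test page might run, not the actual code from the experiment; the function names and the content-container shape are hypothetical, and the marker strings stand in for the unique phrases seeded on the page.

```javascript
// Hypothetical sketch of the three Googlebot JavaScript tests.
// `container` is any object exposing a writable `text` field
// (in a real page this would be a DOM element's textContent).

// 1. Content revealed only after a timeout.
function injectAfterTimeout(container, delayMs, marker) {
  setTimeout(() => { container.text += ' ' + marker; }, delayMs);
}

// 2. Content generated repeatedly at intervals.
function injectAtIntervals(container, periodMs, markers) {
  let i = 0;
  const id = setInterval(() => {
    if (i >= markers.length) { clearInterval(id); return; }
    container.text += ' ' + markers[i++];
  }, periodMs);
}

// 3. Content revealed only after a user action, e.g. a button click.
//    Returns a handler you would attach as the button's onclick.
function revealOnClick(container, marker) {
  return () => { container.text += ' ' + marker; };
}
```

Each test seeds a unique nonsense phrase (like the marker string above) so that a later Google search for the exact phrase reveals whether that execution path made it into the index.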

Should we move to an all HTTPS web? No.

Joost de Valk has started a great discussion about HTTPS everywhere over at his blog, and it is well worth the read; however, I believe he has come to the wrong conclusions. The discussion was spurred by Bing’s apparent move to HTTPS, which would affect the passing of referrer data, and subsequently keyword data, to webmasters from search queries. It is worth noting that as of this writing, Bing’s HTTPS version is not working and Bing has made no announcement of a move. Much of the discussion in the SEO community revolves around Section 15.1.3 of RFC 2616, which states: Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol. Subsequently, as a...
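The RFC rule quoted above boils down to a simple protocol check, which can be sketched as a small predicate. This is an illustration of the rule, not an implementation from any browser; the function name and URLs are made up for the example.

```javascript
// Sketch of RFC 2616 §15.1.3: a client SHOULD NOT send the Referer
// header when navigating from a secure (https) page to a non-secure
// (http) URL. In every other combination the header may be sent.
function shouldSendReferer(fromUrl, toUrl) {
  const from = new URL(fromUrl);
  const to = new URL(toUrl);
  // Secure -> non-secure is the one suppressed case; this is what
  // strips keyword data from referrers when a search engine is on
  // HTTPS and the destination site is plain HTTP.
  return !(from.protocol === 'https:' && to.protocol === 'http:');
}
```

This is why an HTTPS search engine sending traffic to a plain-HTTP site deprives that site's webmaster of referrer (and keyword) data, while an HTTPS destination would still receive it.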

Simple DDOS Amplification Attack through XML Sitemap Generators

It was all too easy, really. Filling a 10 Mb/s pipe and tearing down a website with just a handful of tabs open in a browser seems like something that should be out of the reach of the average web user, but SEOs like myself have made it all too easy by creating simple, largely unprotected tools for unauthorized spidering of websites. So, here is what is going on: Yesterday a great post was released about new on-site SEO crawlers that allow you to diagnose a host of SEO issues by merely typing in a domain name. This seems fantastic at first glance, but I immediately saw an opportunity when I realized that none of these tools – and really almost none of the free SEO tools out there – require any form of authentication proving you actually own the website...

The Disadvantages of Speed: Finding Exact Match Domains in Drop Lists

I recently wrote a post on the advantages of speed, specifically dealing with the ability to find exact-match domains. One of the disadvantages of speed is the classic hammer problem: if you have a hammer, everything looks like a nail. Because lookup speeds are very fast, I made the assumption that I could just pound away. Eventually, though, that led to some insurmountable speed problems and would have forced more horizontal scaling. Because the lookups were so fast, I assumed that the number of lookups could be egregiously large without greatly damaging performance. I. Was. Wrong. It hit me on New Year’s Eve that I had been looking at the problem all wrong. The lookup data was structured in a way that required the massive lookups. Subsequently,...