In the October 2014 survey we received responses from 1,028,932,208 sites,
which is nearly six million more than last month.
Apache regains the lead
Microsoft lost the lead to Apache this month, as the two giants continue to battle closely for the largest
share of all websites. Apache gained nearly 30 million sites, while Microsoft lost nearly 26 million, thrusting Apache back into
the lead by nearly 40 million sites. In total, 385 million sites are now powered by Apache, giving it a
37.45% share of the market.
A significant contributor to this change was the expiry of domains previously used for link farming on Microsoft IIS servers. The domains used by these link farms were acquired and the sites are now hosted on Apache servers at Confluence-Networks, which display Network Solutions parking notices.
Apache 2.2.29, a new release in the legacy Apache 2.2 branch, was announced on 3 September.
It incorporates many changes, including several security fixes, from version
2.2.28, which was never officially released.
New versions of nginx stable and
mainline were also released during September,
which included fixes for an SSL session reuse vulnerability, plus several other bugfixes.
Top million sites
The million busiest websites now represent less than 0.1% of all websites in the survey, but provide an insight into the preferences amongst the sites which are responsible for the great majority of today’s web traffic.
Just over half (50.2%) of the top million sites use Apache, which is very similar to its
share amongst all active sites; however, nginx’s market share is skewed noticeably higher amongst the top million
sites, where it powers 20.3% of sites, compared with only 14.3% of all active sites.
Computer growth
The most stable metric is the market share of web-facing computers — hundreds of thousands of websites can easily be served from a single computer (and subsequently disappear all in one go) but it is obviously far less trivial and less desirable to deploy or decommission a significant number of computers. Netcraft’s survey is also able to identify distinct computers which use multiple web-facing IP addresses, which adds further stability.
Apache leads in this market with a 47.5% share, and Microsoft also performs well with 30.7%, but both have been gradually falling over the past few years as a result of nginx’s strong growth. nginx gained more than 17,000 additional web-facing computers this month, helping to bring its market share up to 10.3%.
New top level domains
The relatively new .xyz domain, which showed tremendous growth over the past couple of months, has started to flatten out slightly after gaining only 33,000 sites this month (+8%). Nonetheless, this is still quite a healthy gain, albeit notably less than last month’s growth of 177,000 hostnames which then boosted its total by 78%.
Other promising TLDs include .london, .hamburg and .公司,
each of which had fewer than 50 sites in last month’s survey, but now have 17,000, 11,000 and 10,000
sites respectively.
The internationalised .公司 (.xn--55qx5d) TLD is delegated to the Computer Network Information Center of the Chinese
Academy of Sciences. It means “company”, making it the Chinese equivalent of .com.


Market share of the top developers across all sites:

Developer | September 2014 | Percent | October 2014 | Percent | Change |
---|---|---|---|---|---|
Apache | 355,925,985 | 34.79% | 385,354,994 | 37.45% | 2.66 |
Microsoft | 371,406,909 | 36.31% | 345,485,419 | 33.58% | -2.73 |
nginx | 144,717,670 | 14.15% | 148,330,190 | 14.42% | 0.27 |
Google | 19,499,154 | 1.91% | 19,431,026 | 1.89% | -0.02 |

Market share amongst active sites:

Developer | September 2014 | Percent | October 2014 | Percent | Change |
---|---|---|---|---|---|
Apache | 90,229,153 | 50.74% | 90,599,505 | 50.85% | 0.11 |
nginx | 25,865,132 | 14.54% | 25,588,943 | 14.36% | -0.18 |
Microsoft | 21,122,925 | 11.88% | 21,700,874 | 12.18% | 0.30 |
Google | 13,737,537 | 7.73% | 13,692,124 | 7.68% | -0.04 |
For more information see Active Sites

Market share amongst the top million busiest sites:

Developer | September 2014 | Percent | October 2014 | Percent | Change |
---|---|---|---|---|---|
Apache | 504,816 | 50.48% | 501,922 | 50.19% | -0.29 |
nginx | 200,526 | 20.05% | 203,439 | 20.34% | 0.29 |
Microsoft | 125,513 | 12.55% | 125,235 | 12.52% | -0.03 |
Google | 26,740 | 2.67% | 26,302 | 2.63% | -0.04 |

Market share of web-facing computers:

Developer | September 2014 | Percent | October 2014 | Percent | Change |
---|---|---|---|---|---|
Apache | 2,339,250 | 47.65% | 2,360,061 | 47.47% | -0.18 |
Microsoft | 1,516,088 | 30.88% | 1,525,278 | 30.68% | -0.20 |
nginx | 496,417 | 10.11% | 513,961 | 10.34% | 0.23 |
A recent spate of phishing attacks has taken to using the data URI scheme for evil.
Supported in most browsers, these special URIs allow the content of a phishing page
to be contained entirely within the URI itself, effectively eliminating the need
to host the page on a remote web server and adding an additional layer of indirection.
One of these attacks is demonstrated below, where a phishing campaign was used
to herd victims to a compromised site in the US, which then redirected them to a Base64-encoded
data URI. This particular example impersonates Google Docs in an attempt to steal email addresses and
passwords from Yahoo, Gmail, Hotmail, and AOL customers.
All of the attacks use Base64-encoded data URIs, rather than human-readable plain text, making it
harder for people, simple firewalls and other content filters to detect the malicious content.
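A content filter that wants to inspect such URIs must first decode them. The following is a minimal sketch (the page content and `decode_data_uri` helper are illustrative, not from any particular attack):

```python
import base64
from urllib.parse import unquote

def decode_data_uri(uri):
    """Split a data: URI into its media type and decoded payload."""
    assert uri.startswith("data:")
    header, _, payload = uri[len("data:"):].partition(",")
    if header.endswith(";base64"):
        return header[:-len(";base64")], base64.b64decode(payload)
    return header, unquote(payload).encode()

# A harmless stand-in for the kind of page carried by these attacks.
page = b"<html><form action='http://remote.example/post'>...</form></html>"
uri = "data:text/html;base64," + base64.b64encode(page).decode()
mediatype, body = decode_data_uri(uri)
# A filter can now scan `body` for credential-harvesting markup,
# which the Base64 layer would otherwise hide from naive string matching.
```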
Most phishing sites are
hosted on
compromised websites,
but can also be seen using purpose-bought domain names and bulletproof hosting packages that
have been paid for fraudulently. However, fraudsters can take advantage of open redirect
vulnerabilities to “host” these malicious data URIs without the need for conventional web hosting.
This situation is ideal for scenarios
such as malware delivery and social engineering attacks where no subsequent client-server interaction
is required, but phishing sites still need some way of transmitting their victim’s credentials to the fraudster.
Most phishing attacks that use data URIs resort to the traditional method of transmitting stolen credentials, i.e. POSTing them to a script on a remote web server. However, with no obvious phishing content being hosted on the remote web server, such scripts could be more difficult for third parties to take down; and as long as they remain functional, each one can continue to be used by any number of data URI attacks.
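Once the payload has been decoded, the form's `action` URL identifies the remote drop script, which is the takedown target described above. A hypothetical sketch (the URL is a made-up example from a documentation address range):

```python
import re

# Extract external form "action" URLs from a decoded phishing page.
# The address below is illustrative, not a real drop script.
html = "<form method='POST' action='http://203.0.113.7/drop.php'>"
drop_scripts = re.findall(r"""action=["'](https?://[^"']+)["']""", html)
print(drop_scripts)  # ['http://203.0.113.7/drop.php']
```

In practice an HTML parser would be more robust than a regular expression, but the principle is the same: the data URI needs no hosting, yet the POST target still does.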
Another interesting example which impersonated an eBay login page is shown below.
If a victim is unfortunate enough
to fall for this particular phishing attack, his credentials will be transmitted to a PHP script hosted on a compromised web server in Germany.
This demonstrates an interesting deficiency in Google Chrome: if the
data URI is longer than 100,000 characters, none of the Base64-encoded data within the URI
is displayed in the address bar. Rather than truncating the URI, Chrome’s address bar displays only the string “data:”.
This behaviour could make it more difficult for wary victims to
report such attacks. Although the victim is viewing an eBay
phishing page, if he tries to copy the URI from the address bar in Chrome,
the clipboard will still only contain the string “data:”.
The Netcraft Extension provides protection against the redirects used in the phishing attacks above, and Netcraft’s open redirect detection service
can be used to identify website vulnerabilities which would allow fraudsters to easily redirect victims to
similar phishing content.
Rank | Company | OS | Outage hh:mm:ss | Failed Req% | DNS | Connect | First byte | Total |
---|---|---|---|---|---|---|---|---|
1 | Qube Managed Services | Linux | 0:00:00 | 0.004 | 0.086 | 0.023 | 0.046 | 0.046 |
2 | GoDaddy.com Inc | Linux | 0:00:00 | 0.013 | 0.149 | 0.012 | 0.200 | 0.205 |
3 | Memset | Linux | 0:00:00 | 0.013 | 0.111 | 0.055 | 0.132 | 0.217 |
4 | www.dinahosting.com | Linux | 0:00:00 | 0.013 | 0.242 | 0.080 | 0.159 | 0.159 |
5 | Swishmail | FreeBSD | 0:00:00 | 0.022 | 0.124 | 0.073 | 0.144 | 0.186 |
6 | ServerStack | Linux | 0:00:00 | 0.022 | 0.081 | 0.076 | 0.151 | 0.151 |
7 | Datapipe | FreeBSD | 0:00:00 | 0.030 | 0.102 | 0.016 | 0.032 | 0.048 |
8 | EveryCity | SmartOS | 0:00:00 | 0.030 | 0.083 | 0.054 | 0.107 | 0.107 |
9 | Logicworks | Linux | 0:00:00 | 0.030 | 0.143 | 0.073 | 0.152 | 0.340 |
10 | Pair Networks | FreeBSD | 0:00:00 | 0.030 | 0.219 | 0.082 | 0.166 | 0.579 |
Qube had the most reliable company site in September with only a single failed request. This is the fourth time this year that Qube has made it to first place, nudging ahead of Datapipe’s track record this year. Qube offers a Hybrid cloud service, where physical servers and equipment are integrated with its cloud hosting with a secure connection between the two networks.
The second most reliable hosting company site belonged to GoDaddy, the world’s largest domain registrar, with only 3 failed requests in September. Memset and dinahosting also had only 3 failed requests each, so these three sites were ranked by average connection time.
In third place is Memset. Memset was last ranked in the top 10 in June 2013 when it achieved 9th place with 6 failed requests. Memset offers its customers a Perimeter Patrol service, which involves regular scanning of Memset servers to highlight security vulnerabilities.
Linux was still the most popular operating system of choice, used by 6 of the top 10, followed by FreeBSD which was used by 3. EveryCity, however, uses SmartOS, a community fork of OpenSolaris geared towards cloud hosting using KVM virtualisation.
Netcraft measures and makes available the response times of around forty leading hosting providers’ sites. The performance measurements are made at fifteen minute intervals from separate points around the internet, and averages are calculated over the immediately preceding 24 hour period.
From a customer’s point of view, the percentage of failed requests is more pertinent than outages on hosting companies’ own sites, as it gives a pointer to the reliability of routing; this is why we rank the table by fewest failed requests rather than by shortest period of outage. In the event that the numbers of failed requests are equal, sites are ranked by average connection time.
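The ranking rule can be sketched in a few lines of Python, using figures from the table above (failed-request percentage first, average connection time as the tie-breaker):

```python
# (company, failed request %, average connect time in seconds)
sites = [
    ("GoDaddy.com Inc",       0.013, 0.012),
    ("Qube Managed Services", 0.004, 0.023),
    ("www.dinahosting.com",   0.013, 0.080),
    ("Memset",                0.013, 0.055),
]

# Sort by fewest failed requests, then by fastest average connection.
ranked = sorted(sites, key=lambda s: (s[1], s[2]))
print([name for name, *_ in ranked])
# ['Qube Managed Services', 'GoDaddy.com Inc', 'Memset', 'www.dinahosting.com']
```

Note how GoDaddy, Memset and dinahosting tie on failed requests and are separated purely by connection time, matching the published order.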
Information on the measurement process and current measurements is available.