Google unveils free DNS service

At this time, anyway. It may morph into a data collection mechanism later on.

And someday, my Ubuntu environment may contain Mountain Dew branding. In the meantime…

It’s not paranoia to note that Google collects more data about internet users than any other company in the world. I mean, there are Google ads on this page, so they know that I’m responding to this thread, if it comes down to it.

Giving them even more information is… well, it depends on how much you trust Google. If they were actually evil, they’d have a lot of scope to do evil. But I’m mostly fine with 'em.

Google does collect a lot of user data, but to date they have handled it with great care and with a sense of responsibility. Until they break their own stated policies and user trust, I will continue to avail myself of their services.

I can see where he’s coming from, and if your ISP’s DNS servers work well enough, I wouldn’t bother switching.

Lately, however, I’ve been noticing timeouts through my ISP on sites I haven’t hit recently, so I’ll give Google’s DNS a shot.

If Google is evil we’re all going down. Might as well get some use out of the free crap they offer before they take over the universe.

If Google takes over the world, the world will be run better anyway, so I don’t really care.

There appears to be security on their DNS system, but it’s not filtering per se. I’d like to get some opinions on the difference.

The security they’re talking about there is for preventing people from hijacking DNS and giving you the wrong IP address for a hostname. That could be used in things like man-in-the-middle attacks where you type in a site’s address and it shows up like normal, but you’ve been secretly redirected to someone else’s machine and they’re capturing your data.

Filtering is for when you get the right address for a hostname, but the site itself is malicious and you were directed there by some other piece of malware, phishing, etc. DNS doesn’t really have anything to do with it because you get sent to the malicious site even if DNS is working properly. Filtering is an additional service on top of that, and one Google doesn’t want to get into for whatever reason.
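One way to see the difference in practice is to resolve the same hostname through a filtering resolver and a non-filtering one and compare the answers. Here’s a rough Python sketch using the third-party dnspython library; 8.8.8.8 is Google Public DNS and 208.67.222.222 is OpenDNS, and whether a given name actually gets filtered depends on your OpenDNS settings, so treat this as purely illustrative:

```python
# Rough illustration (pip install dnspython): a filtering resolver substitutes
# its own block-page address for hostnames it considers malicious; a
# non-filtering resolver returns whatever the authoritative data says.
import dns.resolver

def addresses(nameserver_ip, hostname):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver_ip]
    return sorted(rr.address for rr in resolver.resolve(hostname, 'A'))

hostname = 'www.misago.org'  # substitute a suspected-malicious hostname
print('Google :', addresses('8.8.8.8', hostname))        # Google Public DNS
print('OpenDNS:', addresses('208.67.222.222', hostname))  # OpenDNS
# Differing answers suggest one resolver is overriding the real record.
```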

When you send a request to a recursive DNS server, it needs to go out and query the appropriate authoritative servers to find the result to give you. So, you ask Google who ‘www.misago.org’ is. Google asks the root servers what the DNS servers for .org are, uses one of those servers to ask what the DNS servers for misago.org are, and finally asks one of those servers what the address for ‘www.misago.org’ is. Google then sends that information back to you, and caches it in case you (or someone else) asks again.
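To make that chain concrete, here’s a rough Python sketch (third-party dnspython library) that follows the referrals by hand, starting from a root server. A real recursive resolver also handles CNAMEs, retries, negative answers, and caching; this is just the happy path:

```python
# Walk the DNS tree by hand (pip install dnspython).
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

def resolve_iteratively(hostname, root_server='198.41.0.4'):
    """Follow referrals from a root server (a.root-servers.net) down the tree."""
    server = root_server
    while True:
        query = dns.message.make_query(hostname, dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=3)
        if response.answer:
            # This server gave us the final answer.
            return [rr.address for rrset in response.answer
                    for rr in rrset if rr.rdtype == dns.rdatatype.A]
        # No answer yet: the response is a referral. Glue A records in the
        # additional section give the address of the next server to ask.
        glue = [rr.address for rrset in response.additional
                for rr in rrset if rr.rdtype == dns.rdatatype.A]
        if glue:
            server = glue[0]
        else:
            # No glue (out-of-zone nameserver): cheat and resolve the NS name
            # with a stub lookup. A real resolver would iterate for this, too.
            ns_name = response.authority[0][0].target.to_text()
            server = dns.resolver.resolve(ns_name, 'A')[0].address

print(resolve_iteratively('www.misago.org'))
```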

An attacker can try to break this process in some way to force a recursive DNS server to give you the wrong answer. These are generally “cache poisoning” attacks: the attacker manages to get a wrong answer into the DNS server’s cache.

For example, an attacker might ask the recursive server for ‘www.misago.org’ and then immediately flood it with forged responses that appear to come from misago.org’s DNS server. The recursive server sends misago.org’s DNS server a request, and if one of the forged responses arrives before the real one and matches the outstanding query, the recursive server accepts the (incorrect) information in it. That incorrect information goes into the cache, and now anyone who asks for the address of www.misago.org gets the wrong answer.

Good DNS servers defend against attacks like this.
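The most basic defense is making each query hard to forge a response to: a random 16-bit transaction ID, a randomized UDP source port, and accepting only responses that match the outstanding query. A minimal sketch with dnspython, which already performs this matching internally; is_response just makes the check explicit:

```python
# Sketch of query/response matching (pip install dnspython).
import dns.message
import dns.query
import dns.rdatatype

query = dns.message.make_query('www.misago.org', dns.rdatatype.A)
print('transaction id:', query.id)  # random per query; an off-path attacker must guess it

response = dns.query.udp(query, '8.8.8.8', timeout=3)

# A forged packet is only accepted if its ID and question section match the
# query (and it arrived on the right port from the right address). Randomizing
# the source port adds roughly another 16 bits an attacker has to guess.
assert response.is_response(query)
print(response.answer)
```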

The page you’re linking to is detailing various steps Google is taking to defend against attacks of this nature. This is all in service of returning accurate results to queries. It has nothing to do with deliberately returning inaccurate results to defend against malicious sites: that is, declaring that www.misago.org contains malware, and returning a different address for it to prevent people from going there.

Note that Google already provides non-DNS-based malware protection services, which are used by Chrome, Firefox, and Safari.

That’s true, and they also do it on their own search results.

Their philosophy seems to be that the low-level protocol should perform exactly to the specifications and higher-level problems should be handled at a higher level. From a developer’s standpoint it certainly makes sense: returning incorrect information can obscure the actual root error in various scenarios. With the code I’ve worked on, if a user specified a bad hostname it would normally raise an immediate fatal error pointing out the problem; but if the bad hostname silently resolves to some redirect site, the code goes into a retry loop instead, and the user is puzzled because the program appears to be running but nothing’s happening…
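A hypothetical sketch of that failure mode; the function and hostname here are invented for illustration. With a standards-compliant resolver, a typo’d hostname fails fast with a clear error; with a resolver that redirects NXDOMAIN to a landing page, the same typo becomes a quiet retry loop:

```python
import socket
import time

def fetch_with_retries(hostname, port=80, attempts=5):
    try:
        # Raises socket.gaierror on NXDOMAIN -- the immediate fatal error we want.
        addr = socket.gethostbyname(hostname)
    except socket.gaierror as e:
        raise SystemExit(f'fatal: cannot resolve {hostname!r}: {e}')

    for attempt in range(attempts):
        with socket.create_connection((addr, port), timeout=5) as conn:
            conn.sendall(b'HEAD / HTTP/1.0\r\nHost: ' + hostname.encode() + b'\r\n\r\n')
            reply = conn.recv(1024)
        if reply.split(b'\r\n', 1)[0].startswith(b'HTTP/1.') and b'200' in reply[:16]:
            return reply
        # If a "helpful" resolver pointed the typo at its redirect site, we get
        # a response -- just not the one we expected -- and quietly retry.
        time.sleep(2 ** attempt)
    raise SystemExit(f'gave up after {attempts} attempts')

fetch_with_retries('www.misago.org')  # try a typo'd hostname to see the difference
```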

Not really. Google envisions a fully cloud-based computing world. Instead of an integrated stack of hardware, OS, and application, those layers will be decoupled. There are already tons of examples of this happening.

In this new world a company will host the infrastructure for applications: the engine that processes all the required workloads. Google wants to be that company. To get there, they have to build a foundation that encourages developers and companies to trust them with the base infrastructure. It’s similar to a mainframe receiving processing requests from various terminals, but on a much larger and decentralized scale.

So I don’t think it’s about web browsing. Google has another target in mind: they think intelligent workload management and cloud computing are the future, and they are trying to dominate that space by using their talent and the ridiculous amount of money they generate from their search engine.

Interesting explanation.

Not really. He doesn’t explain how DNS affects search result speeds, and to be honest, I’m not seeing it. Once you hit Google’s front page, your PC has already cached the hostname-to-IP mapping for the site, so there’s no real gain from faster DNS resolution on subsequent connections to the Google site.

I’m subscribing to the theory that it’s all about the Cloud. Oh, and of course, Google only knows what you’re looking at on the web if you use their search engine to find it. If you use their DNS servers to resolve hostnames to IPs, then they have some idea what you’re looking at (i.e., they know the hostname but not the content you wanted) even if you’re not using their search engine to get to it.

But not Internet Explorer, which I need to leave on my system for testing reasons, and which my idiot extended family uses on their laptops when they visit. Configuring the router to use OpenDNS cuts down on the chances of going to a malware-infested site no matter who’s browsing or what they’re using.

Yeah, it sounds like you should stick with OpenDNS if that meets your needs better. After testing it for a couple of days, I’m impressed with the Google service’s performance and low overhead. I think I’ll stick with it for a while.

Microsoft has their own malware protection stuff for us idiot IE users. Not to mention features like running in protected mode, so that any malware that manages to get through is limited in how much it can impact the system.

IE isn’t perfect for everyone and there are plenty of reasons to use alternative browsers (Firefox’s plugins, Chrome’s JS speed, etc.), but if you still have the idea in your head that IE is bad because it is insecure, you’re living in the past.

IE gets targeted by everything, and it’s especially bad when my idiot extended family refuses to upgrade to the latest versions. If you think I was making a blanket statement about all IE users being idiots, you obviously haven’t met my extended family.

I don’t think IE is inherently less secure than other browsers any more than I think MacOS is inherently more secure than Windows, but you can’t deny the fact that IE and Windows are big fat targets for malware programmers due to their market share.

I’d love to see a direct comparison between GoogDNS and OpenDNS (the latter of which I currently use).
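In lieu of published numbers, a quick-and-dirty latency check is easy to script. Here’s a rough Python sketch using the third-party dnspython library; 8.8.8.8 is Google Public DNS, 208.67.222.222 is OpenDNS, and the test hostnames are arbitrary. Names already in a resolver’s cache come back much faster than cold ones, so average several runs before drawing any conclusions:

```python
# Quick-and-dirty resolver latency comparison (pip install dnspython).
import time
import dns.resolver

SERVERS = {'Google': '8.8.8.8', 'OpenDNS': '208.67.222.222'}
HOSTNAMES = ['www.misago.org', 'arstechnica.com', 'example.com']

for label, ip in SERVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]
    total = 0.0
    for name in HOSTNAMES:
        start = time.perf_counter()
        resolver.resolve(name, 'A')  # timed A-record lookup against one server
        total += time.perf_counter() - start
    print(f'{label}: {total / len(HOSTNAMES) * 1000:.1f} ms average')
```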