captain holly java blog

Javascript silo proposals

Posted in Uncategorized by mcgyver5 on March 26, 2010

This post was inspired by a recent Security Now podcast (transcript) featuring John Graham-Cumming, which reviewed ways to secure Javascript, such as separating namespaces and working with subsets of Javascript.

Javascript code commonly lives all together in the same namespace. That is, if you visit a page that includes Javascript code from several sources, the code from one source can, by default, call and change code from the others.

Not only can other scripts read the fields and functions of your Javascript objects and alter their values, they can replace your functions with functions of their own.
This is true most of the time, and it is a huge security problem.
A common perception on the web is that this doesn’t matter: that security “is a server-side concern”, that the fears are overblown, and that any restrictions on Javascript will screw up the web 2.0 “revolution”. Whoa.
Consider what a third party script on your page can do:

  1. pull in additional scripts from anywhere
  2. make requests to the server
  3. request information from the user
  4. get around the same origin policy and share information with anyone, anywhere

Think of the ad-based attacks against the New York Times a year ago, and then the attacks against Yahoo, Fox News and Google from the other day. Those are just the well-known sites. Think of all the second- and third-tier sites that will place active content on their pages for a fee. Attackers can just purchase ads!

Closures
Javascript does have a way for a programmer to silo their script. This is called a closure. Closures require some “leveling up” in Javascript skill and this entire article is a must-read. I could not really grasp “closures” until I read an article about closures that did not use the word “closure”. A closure is a function defined inside another function that keeps access to the outer function’s variables even after the outer function returns. These “inner functions” can be treated like private Java methods.
Simple Javascript Closure Example:
normal Javascript without closures (all functions are “public”)

function getUsername(){
 return username;
}

as part of an object:

var f = {
    username: "tim",

    getUsername: function () {
        return this.username;
    }
};

any code on the page can call f.getUsername() and get the username; it can also read or replace f.username and f.getUsername directly

but when using closures

f = (function () {

    // this function is made private
    function getCookie_private() {
        alert(document.cookie);
    }

    return {

        // a public field:
        screenName: "Sue-ellen",

        // a public function:
        publicStuff: function () {
            alert("hello everyone!");
        }
    };
})();

That syntax looks weird because of the parentheses at the end. They are important: they make the entire function run immediately, constructing itself and then returning the object after the “return” keyword as the value of f. Except this value now has some private parts, accessible only from inside the closure!

using this idea, you can create namespaces with these anonymous functions:

var com = com || {};
com.wordpress = com.wordpress || {};

com.wordpress.captainholly = (function () {

    // all kinds of public and private functions and fields
    // ...
})();


This is explained here.

Closures make it so an “inner function always has access to the vars and parameters of its outer function, even after the outer function has returned.” This makes it possible for the private methods of com.wordpress.captainholly to keep their knowledge of the object they are part of, so that com.wordpress.captainholly.currentUser.screenName is a workable idea, not a fake one.

This article carries the idea further to create “Durable” objects: objects that, even though they have publicly accessible methods, cannot have those methods switched out by another script.
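The durable-object idea can be sketched as follows (makeDurableUser and its names are hypothetical, and return values stand in for alert() so the behavior is easy to see): every public method calls a private closure copy, so swapping out a public property cannot change the behavior of the other methods.

```javascript
// makeDurableUser is a hypothetical name, illustrating the
// "durable object" pattern described above.
function makeDurableUser(name) {
    var screenName = name;            // private state

    function getScreenName() {        // private closure copy
        return screenName;
    }

    return {
        getScreenName: getScreenName,
        greet: function () {
            // calls the closure copy, NOT this.getScreenName,
            // so a swapped-out public method cannot affect it
            return "hello, " + getScreenName();
        }
    };
}

var user = makeDurableUser("Sue-ellen");

// a hostile script replaces the public method...
user.getScreenName = function () { return "evil"; };

// ...but greet() still uses the private copy:
user.greet(); // "hello, Sue-ellen"
```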

Web security cannot rely on every Javascript programmer both understanding and consistently using closures. We need something more. Some products have come out that attempt to silo and otherwise secure Javascript on the fly.

A Google project called Caja is a safe subset of JavaScript. It drastically rewrites code and supposedly isolates functions properly so that other scripts can’t call them. They have a nice testbed that shows the rewritten code and the browser behavior that results.

ADsafe is a technology promoted by Douglas Crockford himself. ADsafe encapsulates included code, forcing it to interact with the rest of the page through the ADSAFE object, which acts as a proxy. External code is only allowed access to the ADSAFE object, and the ADSAFE object then addresses the rest of the page, the DOM, etc. in a safe manner.
A sampling of what ADsafe brings, drawn from Douglas Crockford’s Powerpoint presentation:

  • No access to the “document” object
  • Most subscript access to DOM objects is taken away. With ADsafe, you have to use ADSAFE.get() and ADSAFE.set() instead of someControl['importantControl']
  • No use of the “this” keyword
  • Some other names are not allowed: apply, arguments, call, callee, caller, eval, prototype, valueOf
  • The DOM interface is query based, and the scope of queries is limited to the contents of the third-party widget’s div
  • Guest code has no direct access to any DOM node.
  • Only the “guest” code has to operate through ADsafe. The home site’s code can still do anything.
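The get/set mediation idea can be sketched with a toy stand-in (MEDIATOR, its banned list, and widget are all made-up names for illustration, not the real ADsafe runtime): guest code never touches an object directly, so the proxy gets a chance to enforce a policy on every access.

```javascript
// A toy mediator in the spirit of ADSAFE.get()/ADSAFE.set().
// MEDIATOR, banned, and widget are hypothetical names.
var MEDIATOR = {
    banned: { cookie: true, innerHTML: true },  // toy policy

    get: function (obj, name) {
        if (this.banned[name]) {
            throw new Error("access denied: " + name);
        }
        return obj[name];
    },

    set: function (obj, name, value) {
        if (this.banned[name]) {
            throw new Error("access denied: " + name);
        }
        obj[name] = value;
    }
};

var widget = { title: "weather" };

MEDIATOR.get(widget, "title");          // "weather"
MEDIATOR.set(widget, "title", "news");  // allowed by the policy
// MEDIATOR.get(widget, "cookie") would throw "access denied"
```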

Another proposal for buttoning down Javascript is Mozilla’s Content Security Policy (CSP), which proposes to restrict the way Javascript is used in the browser.
The CSP specs have a long list of features that give granular control over such things as:

  1. A general “allow” directive that lets the website owner define which remote sites can provide resources.
  2. An ancestor-pages directive that allows the website owner to restrict other sites from framing the site. Used properly, this cancels out a big vector for cross site scripting attacks.
  3. More granular directives that allow the site owner to define which domains may provide object, css, image, and Javascript elements to the page in question.
  4. A reporting directive that tells the browser where to report attempts to load unauthorized resources
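Putting several of those directives together, a single draft-era header value might look something like the sketch below (the trusted host name is made up, and the directive spellings follow Mozilla’s early draft, so treat this as illustrative rather than definitive):

```
X-Content-Security-Policy: allow 'self'; img-src *; script-src 'self' trusted.example.com; report-uri /csp-report
```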

Settings are controlled through a response header. Using response.setHeader() in Java, here are a few examples:
To allow only resources originating from same domain:

response.setHeader("x-content-security-policy", "allow 'self'");

To allow no other site to act as a frame for the page being loaded:

response.setHeader("x-content-security-policy", "frame-ancestors 'self'");

The CSP can be toggled off and on through the about:config interface. It won’t affect pages that don’t have a CSP header and pages that have CSP headers can still run in browsers that don’t implement CSP. Of course, since clients can’t be sure that the web site will enforce CSP and the web site can’t be sure that the browser will implement it, this just amounts to an interesting proto-standard that might help us understand where the web will have to move some day.

Users can protect themselves
While waiting for browsers and servers to implement these protections, individual users can protect themselves with the NoScript Firefox extension. First of all, NoScript prevents Javascript from sites you haven’t approved from running at all. On top of that, NoScript includes ABE (the Application Boundary Enforcer), a module that does something very similar to Mozilla’s CSP. When you decide to let NoScript allow scripting on a website, ABE takes over and enforces firewall-style rules about what outside resources, if any, may interact with the current site.
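ABE rules read like a tiny firewall policy. A sketch of one (the site name is made up, and the syntax follows the ABE rules format as I understand it, so treat it as illustrative):

```
# resources on example.com may only be requested by example.com itself;
# requests from anywhere else are refused
Site .example.com
Accept from SELF
Deny
```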

This post is just about ways to secure Javascript, which is just a part of overall browser security. For a complete treatment of browser security, see the excellent and constantly evolving Google browser security handbook.


dbvisualizer – the very handy SQL client

Posted in Uncategorized by mcgyver5 on March 24, 2010

There are a lot of things to love about dbvisualizer. The tool is stable and fast. It has easy installation, a great SQL editor, built-in support (and drivers) for many databases, quick object and data editing, and most of all simplicity. Somehow these guys made a super flexible but simple and intuitive tool. I compiled a list of all the things I liked and disliked about the program. This list includes some nitpicking. Their people are really good about accepting bug reports and feature requests, and I’ve sent my nitpicks to them.

  • One of the most tedious things about any SQL client is drilling down to get to the object you need. DBVisualizer helps by allowing you to drag your most used database objects to a favorites bar for easy access. Favorites items have the same context menus as the regular item in the tree. Opening an item from the favorites bar opens the object tree to the correct place so that it is then easy to access other items near it. This feature gets the most enthusiasm from my coworkers. It is somewhat hidden at first as you need to show/hide the favorites toolbar from the view menu. Unfortunately, there does not seem to be keystroke access to the favorites bar.
  • The grid interface for editing data is well done. It has commonly accepted keystrokes for searching, editing cells, deleting rows, and saving changes.
  • The first upgrade process felt disconcerting because it wasn’t clear if running the exe was going to upgrade an existing install, erase my settings, or what. It created a brand new DBVisualizer install and left the old one and still imported my settings. Not expected behavior. The next day I mistakenly clicked the shortcut to the old version and was asked to upgrade again. “Pretty aggressive release schedule”, I thought.
  • There is complete support for exporting user settings (connections, bookmarks, preferences). This helps when migrating to a new machine or welcoming new employees.
  • Initial install was easy and it automatically detected all my drivers. It also includes the MySQL driver, again because this is a commercial product. This is an improvement over SQuirreL, which for better or worse, cannot package the MySQL driver due to licensing issues.
  • This review would not be complete without comparing it to Navicat, another popular cross-database, multi-platform SQL client known mostly as a MySQL client. Navicat has two powerful features not present in DBVisualizer: server performance monitoring and job scheduling. It would be nice to schedule backups inside DBVisualizer, but for the most part, the databases I connect to are managed by other people (DBAs) who perform backups, so as a developer, scheduling is not a key feature for me. Server monitoring was the one Navicat feature I found compelling: it would be nice to monitor server loads, though I’m not even sure I’d have the rights to access that info. Navicat is also slightly more expensive than DBVisualizer (~$200 for DBVisualizer and ~$375 for Navicat).
  • Explain Plan is beautiful. Unfortunately, I’m using it mostly with PostgreSQL, and DBVisualizer does not have explain plan for PostgreSQL. Navicat does have explain plan for its supported databases, including PostgreSQL. PGAdmin III and SQuirreL have limited implementations. None match the clarity of the DBVisualizer explain plans, which, again, are not available for PostgreSQL.
  • DBVisualizer is missing some PostgreSQL-specific syntax: vacuum and analyze are not well supported. Compare this to SQuirreL, which has Vacuum and Analyze in the context menu for each table.
  • Another example of the flexibility of DBVisualizer is the ability to create Folders in the database tab. I organized all the disparate databases my applications use into Folders to make the list more manageable. I found that favorites will not follow as you change your folder structure. So, make your folders first and then your shortcuts in the favorites bar.
  • DBVisualizer isn’t afraid to package proprietary packages such as the yWorks (for the references graphs) and Install4j.
  • DBVisualizer has a monitor feature. This monitor is the kind that monitors your data over time, as in: “this table is growing at 1100 records per week”. This tool is more complex than I can manage for this post.
  • The SQL Editor is really well done. The autocomplete saves tons of time, and jumping back and forth between the “SQL Builder” and the SQL Editor is doable.
  • You can use variables in the SQL Editor. This makes the query ask you for fill-in values when you run it. Makes repeated querying very easy.
  • In addition to data editing, it is really easy to edit table structure. An “Alter Table” dialogue provides controls to change all table aspects and generates an SQL statement for you. A problem I encountered with PostgreSQL: in Alter Table –> Constraints –> Drop constraint, the generated SQL produces a syntax error about the name of the constraint. PostgreSQL requires a slightly different syntax for dropping a NOT NULL constraint, which this tool does not account for as I write this. I reported this to them through the forums and they said they would fix it.
  • By the way, the forums and documentation are really good. I’ve posted 8 or so posts to the forums as I was learning this tool and writing this post. They were all answered within an hour or two… by employees. Not only can you get answers, but find out a whole lot more about this product.
  • Also with PostgreSQL, it can be hard to edit keys or delete duplicate records, because PostgreSQL has no rowid (oid) unless you specify one when building the table. One suggestion I have is a way to automatically add an OID. Compare this to pgAdmin III: well, no comparison, because pgAdmin III has no live editing of tables. So compare it with SQuirreL, which has a clunky editing interface but can still delete an identical row. It noticed the duplicates, alerted me to the fact, and then went ahead and deleted ONE of the duplicates. I wonder how it knew which to delete? This scenario is exactly the same for MySQL in both tools. The real solution is, of course, to always have a unique key.
  • I also inadvertently set my row limit to 4 for the data grid. No idea how it happened, but it stayed that way and as I navigated around, it affected all my tables. I don’t like the way that worked since it never LOUDLY told me that I wasn’t seeing everything. I now have my row display limit set higher, but if a table exceeds that, I sometimes find myself wondering where the hell my data went. There is no control like “show me the next page of results”.
  • Importing and exporting data via CSV was a bit clunky. I was thrown off by the presence of the previous action’s logs in my import dialog. As another user reported in the forums, it can be hard to know if you ran the import already. The import tool also allowed me to try and import an SQL file as CSV. That did not go well.
  • I also used it with embedded HSQLDB. It was easy to load the hsql.jar file as the driver. It was less easy to know how to point to the “database file”, because there is no single actual file. Instead, you are supposed to point it at theName.properties and then remove the “.properties”.
  • There are plenty of configuration options in the Tool Settings dialog. There are even more in a file called dbvis-custom.prefs, where you can disable features and force JDBC to do unexpected things. Making uneducated changes to this file and others in the same directory could really screw up your install. And since it is a java application, there is a whole galaxy of startup options.
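A footnote on the PostgreSQL drop-constraint bug mentioned above: PostgreSQL treats NOT NULL as a per-column property rather than a named constraint, so the two cases need different ALTER TABLE forms (the table and column names below are made up):

```
-- a named constraint is dropped by name, as the tool expects:
ALTER TABLE users DROP CONSTRAINT users_email_key;

-- but NOT NULL is dropped per column, not by constraint name:
ALTER TABLE users ALTER COLUMN email DROP NOT NULL;
```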