Stephen Smith's Blog

Musings on Machine Learning…

Archive for the ‘Security’ Category

Installing the New Sage 300 Web UIs Securely


Introduction

Sage 300 2016 comes with new Web UIs. At the beta release I talked about how to install these, but I didn't get into the details of securing your setup for exposure to the Internet. If you just follow the instructions from that blog post, you are fine in a protected LAN environment, but you need a number of additional steps to go beyond that. A common question is how to set this up in a secure manner so that these new features won't be exploited by hackers.

Most people will probably just set up Sage 300 running on their local network. If you don't expose the web server to the Internet, then your security concerns are the same as they are today. You are just regulating what bits of information your local users are allowed to see. Generally (hopefully) you aren't as worried about your own employees hacking your network. The big concern for security here is usually social engineering, which really requires good education to prevent. Note however that we have seen sites where people have added Internet access for all their employees but unwittingly exposed their network to the Internet. It's never a bad time to re-evaluate your current security to see if there are any weaknesses.

A common way to extend access to the Internet is via VPN connections. This usually works well for some devices like laptops, but badly for others like tablets. If you need better performance and don't want to worry about supporting VPN clients on a whole variety of devices, then using the standard Internet security protocols is a better way to go. All that being said, if your needs are simple, VPN is a good option.

For Sage 300 we've taken security very seriously and have incorporated security considerations into all parts of our Software Development Methodology. Additionally, we commissioned a third party security audit of our product. From this audit we then made a number of changes to tighten up our security further. As a result, we've been watching for and being careful about SQL injection attacks and cross-site scripting attacks, among others.
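Of the attacks mentioned, SQL injection is the easiest to show concretely. Here is a minimal Python/SQLite sketch (the table and data are invented for illustration) contrasting a parameterized query with the vulnerable string-building pattern:

```python
import sqlite3

def find_customer(conn, name):
    # The ? placeholder makes the driver treat 'name' strictly as data,
    # so input like "x' OR '1'='1" cannot change the query's structure.
    cur = conn.execute("SELECT id, name FROM customers WHERE name = ?", (name,))
    return cur.fetchall()

def find_customer_unsafe(conn, name):
    # Vulnerable pattern: string concatenation lets input rewrite the SQL.
    cur = conn.execute(f"SELECT id, name FROM customers WHERE name = '{name}'")
    return cur.fetchall()
```

With the safe version, the classic `x' OR '1'='1` input simply matches no customer; with the unsafe version it matches every row.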

For any site you should do some sort of threat risk modeling perhaps like: http://www.owasp.org/index.php/Threat_Risk_Modeling. Generally this sort of exercise gets you thinking about what you are trying to protect and what the possible threats are. Even if you do something simple like:

  • Identify bad guys – young hackers, disgruntled ex-employees, competitors, etc.
  • Identify assets – databases that you want protected, servers that should be secure, etc.
  • Identify risks – having your data stolen, having your website vandalized, having your data modified.

Then you can develop plans to protect your assets and to watch for your adversaries. You should perform this exercise even if you don’t have any web servers and feel you have a very protected environment.

A lot of security isn't a matter of being perfect, just of being better than others. This way hackers will come across your web site, quickly see it has security in place, and move on to easier targets. Hackers tend to employ automated scripted scanning tools to search the Internet for unprotected servers; just serving only HTTPS with no other ports open sets the bar quite high, and the scanning tool will keep scanning.

Nmap/Zenmap

When you expose a web server to the Internet, your first line of defense is the firewall. The firewall’s job is to hide all the internally running processes from the Internet, such as SQL Server or Windows Networking. Basically you want to ensure that the only things people can access from the outside are HTTP and HTTPS (these are ports 80 and 443 respectively). This way the only things a hacker can attack are these ports. Generally hackers are looking for other ports that have been left open for convenience like RDP or SQL Server and then will try to attack these.

A great tool to test whether any ports have been left open is Nmap/Zenmap. You run this tool from outside your network (perhaps from home) to see what ports are visible to the Internet. Below is a screen shot of running this tool against www.yahoo.com. We see that ports 80 and 443 are open as expected, but so are ports 25 and 53 (which are for SMTP email and DNS). Since there are 4 open ports, as a hacker if I have an exploit for any one of these I can give it a try. Obviously the fewer ports open, the better. Ideally only port 443 for HTTPS would be open (though port 80 is often left open to give a better error message to use HTTPS or to redirect people to HTTPS automatically).

It is well worth running Nmap so you don’t have any surprises, especially since configuring firewalls can be complicated.

nmap
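If you just want to spot-check a handful of ports without installing Nmap, a TCP connect check is a few lines of Python. This is only a sketch; it does none of Nmap's SYN scanning, service detection or timing tricks:

```python
import socket

def check_port(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports):
    """Return the subset of ports that accept connections."""
    return [p for p in ports if check_port(host, p)]
```

Run it from outside your firewall against your server's public address, for example `scan("example.com", [21, 22, 25, 80, 443, 1433, 3389])`; anything beyond 80 and 443 in the result deserves an explanation.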

Qualys and CloudFlare

Zenmap is nice because it's simple and free. However, there are more sophisticated tools available that you might want to consider. For instance, Qualys is a very good commercial security scanner which will do a deeper analysis than Zenmap. If your website is protected by authentication, you might want to run Qualys against a test system with authentication turned off; then it can do a much more thorough scan of all your web pages (i.e. find vulnerabilities that are only visible if you are successfully logged in).

Another protective layer is to put your site behind CloudFlare. Among other things, this provides protection against distributed denial of service (DDoS) attacks, where hackers enlist thousands (or millions) of zombie computers to all access your site at once, bringing it down.

HTTPS

Now that your site doesn't have any unneeded open ports, we need to ensure the web site is only accessed in a secure manner. As a first step, we only access it through HTTPS. This encrypts all communications, ensuring privacy, and validates that users are talking to the right server, avoiding man-in-the-middle attacks.

To turn on HTTPS you need a server digital certificate. If you already have one, then great, you are all set. If you don't, you can purchase one from companies like VeriSign.

To turn on HTTPS for a web site in IIS, go to the IIS management console, select the "Default Web Site" and choose "Bindings…" from the right hand side. Then add a binding for https; at this point you need to reference the digital certificate for your server.

bindings

As a further step, you should now choose "SSL Settings" in the middle panel and check the "Require SSL" checkbox. This will cause IIS to reject any HTTP:// requests and only accept HTTPS:// ones.

sslsetting
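The HTTP-to-HTTPS redirect mentioned earlier is just a scheme rewrite; IIS can do it with a redirect rule, but the logic amounts to this sketch:

```python
from urllib.parse import urlsplit, urlunsplit

def force_https(url):
    """Rewrite an http:// URL to https://, leaving https URLs untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

The path, query string and everything else are preserved; only the scheme (and hence the port the browser connects to) changes.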

Other IIS Settings

If you browse the Internet you'll find many other recommended IIS settings, but generally Microsoft has done a good job of making the defaults sensible. For instance, by default virtual directories are read-only, so you don't need to set that. Also remember that Sage 300 doesn't store any valuable data in IIS; it only stores some bitmaps, style sheets and static HTML files there. So if someone "steals" the files in IIS, it doesn't really matter; this isn't where your valuable accounting data is stored. We just want to ensure someone can't vandalize your web site by uploading additional content or replacing what you have there.

One thing that security experts do recommend is that you replace all the generic IIS error messages, so that a hacker can't use the stock pages to learn the exact HTTP error code or fingerprint your exact server/IIS version. You can either edit or replace these pages, which are located under C:\inetpub\custerr by language code, or you can configure IIS to redirect to Sage 300's generic error page rather than use the stock error messages (i.e. /Sage300/Core/Error). You do this from the server's error message icon in the IIS manager.

iiserrorconfig

Database Setup

The new Web UIs honor the security settings set from the Security button in Database Setup. These should be set according to the screen shot below. The most important setting is to disable a user account after x failed password attempts. This prevents automated programs from trying every possible password until they eventually guess the correct one. With the settings below, an automated program can only try 3 passwords every 30 minutes, which will usually get the hacker to move on and find a less secure site to try to hack.

Also ensure security is turned on for each system database; otherwise no password is needed to log in. Further, make sure you change the ADMIN password first, since everyone knows the default one.

secsettings
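The lockout behaviour behind that setting can be sketched as follows. This is illustrative only, not Sage 300's implementation; the parameters mirror a 3-attempts, 30-minute policy:

```python
import time

class LockoutPolicy:
    """Disable an account after too many failed logins within a window."""

    def __init__(self, max_attempts=3, lockout_seconds=1800):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # user -> (failure count, time of first failure)

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        count, start = self.failures.get(user, (0, 0))
        return count >= self.max_attempts and now - start < self.lockout_seconds

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        count, start = self.failures.get(user, (0, now))
        if now - start >= self.lockout_seconds:
            count, start = 0, now  # old failures have aged out; start fresh
        self.failures[user] = (count + 1, start)

    def record_success(self, user):
        self.failures.pop(user, None)
```

An attacker script hitting the login repeatedly locks itself out after three tries and gets nothing more for half an hour, which turns an exhaustive password search from hours into centuries.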

Update 2015/08/15: It's been pointed out to me that a good practice for Database Setup is for each database to have its own DBO and password. Then anyone getting access to one database doesn't get access to any other. This includes creating a separate DBO and password for the Portal database.

Vigilance

It is generally good practice to remain vigilant. Every now and then, review the logs collected by IIS to see if there is a lot of strange activity, like odd-looking URLs or login attempts aimed at your server. If there is, chances are you are being attacked or probed and should keep an eye on it. If it is very persistent, you might want to work with your ISP or configure your firewall to block the offending incoming IP addresses entirely.
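A first pass over the logs can be automated. A sketch, assuming the lines have been reduced to "client-ip request-uri status" (real IIS W3C logs have more fields, so adjust the indices) and using a few illustrative probe patterns:

```python
import re
from collections import Counter

# Patterns that rarely appear in legitimate traffic: directory traversal,
# SQL keywords in the URL, probes for well-known executables or files.
PROBE = re.compile(r"\.\./|%2e%2e|union.+select|cmd\.exe|/etc/passwd", re.I)

def suspicious_hits(log_lines):
    """Count probe-looking requests per client IP.
    Assumes each line is 'client-ip request-uri status'."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        ip, uri = parts[0], parts[1]
        if PROBE.search(uri):
            hits[ip] += 1
    return hits
```

Any IP with more than a handful of hits is a candidate for a firewall block or a complaint to its ISP.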

Summary

The important steps are to:

  • Configure IIS for HTTPS (SSL).
  • Disable HTTP (require SSL).
  • Set more stringent security restrictions in Database Setup.
  • Do an Nmap port scan of your server.

Plus follow normal good IT practices, like applying Windows Updates and not running services you don't need: practices you should follow whether you are running a web site or not. Then keep an eye on the IIS logs to see if you are being probed or attacked.

These steps should keep your data and your server safe.

PS

This article is an update to this 2010 article I did for the 6.0A Portal. Now that we have a new Web technology stack a lot of these previous articles will need to be updated for the new technologies and for what has happened in the last five years.

Written by smist08

August 8, 2015 at 4:57 pm

More Thoughts on Security


Introduction

Last week I blogged on some security topics that were prompted by the Heartbleed security hole. Heartbleed was hot while it lasted, but in the end most servers were quickly patched and not a lot of damage was reported. Now this last week Heartbleed was completely pushed aside by the latest Internet Explorer security vulnerability. A lot of the drama of this problem was caused by speculation on whether Microsoft would fix it for Windows XP. Although the problem existed in all versions of Windows and IE, it was assumed that Microsoft would fix it fairly quickly for new versions of Windows, but leave Windows XP vulnerable.

The IE Problem

Microsoft's Internet Explorer has had a history of problems with letting rogue web sites take over people's computers by downloading and executing nasty code. The first case of this was that IE would run ActiveX controls, which are basically compiled programs downloaded to your computer and then run in the browser's process space. These led to all sorts of malicious programs and viruses. First Microsoft tried requiring ActiveX controls to be "signed" by a trusted company, but generally these caused so many problems that people have to be very careful about which ActiveX controls to allow.

internet-explorer-ie10

With ActiveX controls blocked, malicious software writers turned to other ways to get their code executed inside IE. A lot of these problems date back to Microsoft's philosophy in the early 90s of having code execute anywhere. So they had facilities to execute code in word processing documents, and all sorts of other things. Much of the new malicious software finds old instances of this, where Microsoft unexpectedly lets you run code in something you wouldn't expect to run code. Slowly but surely these instances are being plugged one by one through Windows Updates.

The next attack surface is to look for bugs in IE. If you’ve ever tried running an older version of IE under Bounds Checker, you would see all sorts of problems reported. Generally a lot of these allow attackers to exploit buffer overrun problems and various other memory bugs in IE to get their code loaded and executing.

Another attack surface is common plugins that seem to always be present in IE like for rendering PDF documents or for displaying Adobe Flash based websites or using Microsoft Silverlight. All these plugins have had many security holes that have allowed malicious code to execute.

Plugging these holes one by one via Windows Update is a continuing process. However, Microsoft has taken some proactive steps to make hacking IE harder. They have introduced things like more advanced memory protections and ways to randomize memory buffer usage to make it harder for hackers to exploit these bugs. However, they haven't trimmed down the functionality that leads to such a large attack surface.

internet-explorer-vml-bug-zero-day-vulnerability

The latest exploit that was reported in the wild last week got around all Microsoft's protections and allowed a malicious web site to take over any version of IE on any version of Windows that browsed the site. The malicious web site could then install software to steal information from the affected computer, install a keyboard logger to catch typed passwords, or install e-mail spam generation software.

Why the Fuss?

This new exploit was a fairly typical IE exploit, so why did it receive so much attention? One reason is that after Heartbleed, security is on everyone’s mind. The second is that Microsoft has ended support for Windows XP and publicly stated it would not release any more security updates. So the thinking was that this was the first serious security flaw that wouldn’t be patched in Windows XP and havoc would result.

However Microsoft did patch the problem after a few days, and they did patch the problem on Windows XP as well. After all Windows XP still accounts for about a third of the computers browsing the Internet today. If all of these were harnessed for a Denial of Service attack or started to send spam, it could be quite serious.

People also question how serious it is, since you have to actually browse to the malicious web site. How do you get people to do this? One way is that when URLs expire, someone malicious can sometimes re-register them and redirect visitors to a bad place. Another way is to register URLs with small spelling mistakes of real websites and catch unwary visitors that way. Another approach is to place ads on sites that just take the money without validating the legality of the ad or what it links to. Sending spam containing the bad URLs is another common way to lure people.

How to Protect Yourself

Here are a few points you can adopt to make your life safer online:

  • Use supported software, don’t use old unsupported software like Windows XP. Windows 7 is really good, at least upgrade to that. If your computer isn’t connected to the Internet then it doesn’t really matter.
  • Make sure Windows Update is set to automatically keep your computer up to date.
  • Don't click on unknown attachments in e-mails.
  • If you receive spam with a shortened or suspicious URL link, don’t click on it.
  • Go through the add-ons in your browser and disable anything that you don’t know you use regularly (including all those toolbars that get installed).
  • When browsing unfamiliar sites on the web, use a safer browser like Google Chrome. Nothing is foolproof but generally Chrome has a better history than most other browsers.
  • Make sure you have up to date virus scanning software running. There are several good free ones including AVG Free Edition.
  • Make sure you have Windows Firewall turned on.
  • Don't run server programs you don't need. You probably don't need to be running an FTP server or an e-mail server. Similarly, don't run a whole bunch of database servers you aren't using, or stop them when not in use.
  • Don't trust popup windows from unfamiliar or suspicious websites. I.e. if suddenly a window pops up telling you to update Java or something, it's probably fake and going to install something bad. Always go to the main site of the company whose software you are going to install.
  • Never give personally identifiable data to unknown websites, they have no good reason to know your birthday, phone number or mother’s maiden name.
  • Don't use the same password on all websites. For websites that you care about, use a good unique password.
  • Be distrustful of URLs that are sort of right, but not quite (often it’s better to go through Google than to spell a URL directly). Often scammers setup URLs with common spelling errors of popular sites to get unsuspecting victims.
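The last point can even be checked mechanically: a domain one or two edits away from a well-known one deserves suspicion. A sketch using edit distance (the trusted list here is purely illustrative):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(domain, trusted=("google.com", "paypal.com", "microsoft.com")):
    """Flag a domain that is close to, but not exactly, a trusted one."""
    return any(0 < edit_distance(domain, t) <= 2 for t in trusted)
```

So "paypa1.com" is flagged (one substitution away from "paypal.com") while "paypal.com" itself is not; real mail filters and browsers use more sophisticated versions of the same idea.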

Summary

There are a lot of bad things out on the Internet. But with some simple precautions and some common sense you can avoid the pitfalls and have an enjoyable web browsing experience.


Written by smist08

May 3, 2014 at 4:25 pm

Some Thoughts on Security


Introduction

With the recent Heartbleed security exploit in the OpenSSL library, a lot of attention has been focused on how vulnerable our computer systems have become to data theft. With so much data travelling the Internet as well as wireless networks, this has brought home the importance of how secure these systems are. With the general direction towards an Internet of Things, all our devices, whether our fridge or our car, become possibly susceptible to hackers.

I’ll talk about Heartbleed a bit later, but first perhaps a bit of history with my experiences with secure computing environments.

Physical Isolation

My last co-op work term was at DRDC Atlantic in Dartmouth, Nova Scotia. In order to maintain security they had a special mainframe for handling classified data and to perform classified processing. This computer was located inside a bank vault along with all its disk drives and tape units. It was only turned on after the door was sealed and it was completely cut off from the outside world. Technicians were responsible for monitoring the vault from the outside to ensure that there was absolutely no leakage of RF radiation when classified processing was in progress.

After graduation from university, my first job was with Epic Data. One of the projects I worked on was a security system for a General Dynamics fighter aircraft design facility. The entire building was built as a giant Faraday cage. The entrances weren't sealed, but you had to travel through a twisty corridor to enter the building, ensuring there was no straight line for radio waves to pass out. Surrounding the building was a large protected parking lot where only authorized cars were allowed in.

Generally these facilities didn’t believe you could secure connections with the outside world. If such a connection existed, no matter how good the encryption and security measures, a hacker could penetrate it. The hackers they were worried about weren’t just bored teenagers living in their parent’s basements, but well trained and financed hackers working for foreign governments. Something like the Russian or Chinese version of the NSA.

Van Eck Phreaking

A lot of attention goes to securing Internet connections. But historically data has been stolen through other means. Van Eck Phreaking is a technique to listen to the RF radiation from a CRT or LCD monitor and to reconstruct the image from that radiation. Using this sort of technique a van parked on the street with sensitive antenna equipment can reconstruct what is being viewed on your monitor. This is even though you are using a wired connection from your computer to the monitor. In this case how updated your software is or how secure your cryptography is just doesn’t matter.

Everything is Wireless

It seems that every now and then politicians forget that cell phones are really just radios and that anyone with the right sort of radio receiver can listen in. This seems to lead to a scandal in BC politics every couple of years. This is really just a reminder that unless something is specifically marked as using some sort of secure connection or cryptography, it probably doesn’t. And then if it doesn’t anyone can listen in.

It might seem that most communications are secure nowadays. Even Google search has switched to always using HTTPS, a very secure encrypted channel that keeps all your search terms a secret between yourself and Google.

But think about all the other communication channels going on. If you use a wireless mouse or a wireless keyboard, then these are really just short range radios. Is this communications encrypted and secure? Similarly if you use a wireless monitor, then it’s even easier to eavesdrop on than using Van Eck.

What about your Wi-Fi network? Is that secure? Or is all non-https traffic easy to eavesdrop on? People are getting better and better at hacking into Wi-Fi networks.

In your car, if you are using your cell phone via Bluetooth, is this another place where eavesdropping can occur?

Heartbleed

Heartbleed is an interesting bug in the OpenSSL library that’s caused a lot of concern recently. The following XKCD cartoon gives a good explanation of how a bug in validating an input parameter caused the problem of leaking a lot of data to the web.

heartbleed_explanation

At the first level, any program that receives input from untrusted sources (i.e. random people out on the Internet) should very carefully and thoroughly validate any input. Here the sender tells the server what to reply and the length of the reply. If the sender gives a length much longer than the actual payload, the reply leaks whatever random contents of memory were located next to it.

At the second level, this is an API design flaw: there should never have been a function with parameters that could be abused in this way.

At the third level, what allows this to go bad is a performance optimization that was put in the OpenSSL library to provide faster buffer management. Before this performance enhancement, this bug would just have caused an application fault. This would have been bad, but easy to detect, and it wouldn't have leaked any data. At worst it would have perhaps allowed some short-lived denial of service attacks.
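The bug and its fix can be modelled in a few lines of Python. This is a toy model, not OpenSSL's code: "memory" stands in for the heap bytes that happen to follow the payload buffer:

```python
def heartbeat_reply_vulnerable(memory: bytes, payload_len: int, claimed_len: int) -> bytes:
    """The flawed pattern: trust the sender's claimed length and copy that
    many bytes starting at the payload, reading past its end into whatever
    happens to sit next to it in memory."""
    return memory[:claimed_len]

def heartbeat_reply_fixed(memory: bytes, payload_len: int, claimed_len: int) -> bytes:
    """The fix: refuse any claimed length longer than the actual payload."""
    if claimed_len > payload_len:
        raise ValueError("heartbeat length exceeds payload")
    return memory[:claimed_len]
```

If the payload is "bird" (4 bytes) but the claimed length is 14, the vulnerable version happily returns the 10 bytes that follow the payload, which is exactly how private keys and passwords leaked.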

Mostly, exploiting this security hole just returns a bunch of random garbage to the attacker. The trick is to automate the attack and repeat it thousands of times until by fluke you find something valuable, perhaps a private digital key or a password.

password-heartbleed-thumb-v1-620x411

Complacency

The open source community makes the claim that open source code is safer because anyone can review the source code and find bugs, and people are invited to do this for OpenSSL. I think Heartbleed shows that security researchers became complacent and weren't examining this code closely enough.

The code that caused the bug was checked in by a trusted coder, and was code reviewed by someone knowledgeable. Mistakes happen, but for something like this, perhaps there was a bit too much trust. I think it was an honest mistake and not deliberate sabotage by hackers or the NSA. The source code change logs give a pretty good audit of what happened and why.

Should I Panic?

In spite of what some reporters are saying, this isn't the worst security problem that has surfaced. The holy grail for hackers is to find a way to root computers (take them over with full administrator privileges). This attack just has a small chance of providing something to help along that way and isn't a full exploit in its own right. Bugs in Java, IE, SQL Server and Flash have all allowed hackers to take over people's computers. Some didn't require anything else; some just required tricking the user into browsing a bad web site. Similarly, e-mail or flash drive viruses have caused far more havoc than this particular problem is likely to. Another ongoing security weakness is caused by government regulations restricting the strength of encryption or forcing the disclosure of keys; these measures do little to help the government, but they really make the lives of hackers easier. But I suspect the biggest source of identity theft is data recovered from stolen laptops and other devices.

Another aspect is the idea that we should be like gazelles and rely on the herd to protect us: if we are in a herd of 100 and a lion comes along to eat one of us, then there is only a 1/100 chance that it will be me.

This attack does highlight the importance of some good security practices. Such as changing important passwords regularly (every few months) and using sufficiently complex or long passwords.

All that being said, nearly every website makes you sign in. For web sites that I don’t care about I just use a simple password and if someone discovers it, I don’t really care. For other sites like personal banking I take much more care. For sites like Facebook I take medium care. Generally don’t provide accurate personal information to sites that don’t need it, if they insist on your birthday, enter it a few days off, if they want a phone number then make one up. That way if the site is compromised then they just get a bunch of inaccurate data on you. Most sites ask way too many things. Resist answering these or answer them inaccurately. Also avoid overly nosey surveys, they may be private and anonymous, unless hacked.

The good thing about this exploit seems to be that it was discovered and fixed mostly before it could be exploited. I haven't seen real cases of damage being done. Some sites (like the Canada Revenue Agency) are trying to blame Heartbleed for unrelated security lapses.

Generally the problems that you hear about are the ones you don't need to worry about so much. But again, it is a safe practice to use this as a reminder to change your passwords and minimize the amount of personally identifiable data out there. After all, dealing with things like identity theft can be pretty annoying. And this also helps with the problems that black hat hackers know about and are using, but that haven't been discovered yet.

Summary

You always need to be vigilant about security. However it doesn’t help to be overly paranoid. Follow good on-line practices and you should be fine. The diversity of computer systems out there helps, not all are affected and those that are, are good about notifying those that have been affected. Generally a little paranoia and good sense can go a long way on-line.

Written by smist08

April 26, 2014 at 6:51 pm

User Roles and Security in Sage 300 ERP


Introduction

Role based security and user roles are terms that are in vogue right now in many ERP systems. Although Sage 300 ERP doesn't use this terminology, it is essentially giving you the same thing. This blog looks a bit at how you set up Sage 300 ERP application security and how it matches role based security.

Users

First you create your Sage 300 ERP users. This is a fairly straightforward process using the Administrative Services Users function.

user1

Here you create your users, set their language, initial password and a few other security related items.

Security Groups

Security Groups are your roles. For each application you define one of these for each role. For instance below we show a security group for the A/R Invoice Entry Clerk role. In this definition we define exactly which functions are required for this role.

secgrp

Some roles might involve functions from several applications; in this case you would need a security group for each application, but they can all be assigned together for the role.

User Authorizations

User Authorizations is where you assign the various roles to your users. Below I’ve assigned myself to the A/R Clerk role.

userauth

If multiple applications are involved then you would need to add a group id for each application that makes up the role.

Thus we can create our users, create our roles (security groups in Sage 300 ERP terminology), and then assign them to users in User Authorizations. As you can see below, signing on as STEVE now results in a much less cluttered desktop with just the appropriate tasks for my role.

desksec

Further Security

As you can see above in the Users screen there are quite a few security options to choose from depending on your needs. One thing not to forget is that there are a number of system wide security options that are configured from the Security… button in Database Setup.

dbsec

Also remember to enable application security for the system database for your companies. For many small customers, perhaps application security isn't an issue; I've also seen sites where everyone just logs in as ADMIN. But if you have several users and separation of duties is important, then you should be running with security turned on.

dbsec2

Where is Security Implemented?

In the example above we see how security affects what the user sees on their desktop. Generally, from a visual point of view, we hide anything a user does not have access to. This means setting up security is a great way of uncluttering people's workspaces. However, this is a visual usability issue: we don't want people clicking on things and getting errors saying they aren't allowed. Much better to just present a cleaner slate.

But this isn't really security; at most it's a thin first layer. The real security is in the business logic layers. All access to Sage 300 functions goes through the business logic layer, and this is where security is enforced. This way, even if you run macros, run UIs from outside the desktop, or find a way to run an import against something you don't have access to, it will all fail if you don't have permission.
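That design can be sketched with a decorator that gates every business-logic call on a permission check. This is an illustration of the principle, not Sage 300's actual API; the permission name and user structure are made up:

```python
import functools

def requires(permission):
    """Decorator enforcing a permission check at the business-logic layer,
    so UIs, macros and imports all pass through the same gate."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(user, *args, **kwargs):
            if permission not in user.get("permissions", set()):
                raise PermissionError(f"{user.get('name')} lacks {permission}")
            return fn(user, *args, **kwargs)
        return inner
    return wrap

@requires("AR_INVOICE_ENTRY")
def post_invoice(user, invoice_id):
    # Real business logic would live here; the permission check already ran.
    return f"posted {invoice_id}"
```

Because the check lives on the function itself rather than in any one UI, there is no path around it: a macro or an import calling `post_invoice` hits exactly the same gate as the desktop screen.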

Summary

Sage 300 ERP security is a good mechanism to assign users to their appropriate roles and as a result simplify their workspace. This is important in accounting where separation of duties is an important necessity to prevent fraud.

Setting up Sage ERP Accpac 6.0A Securely


Sage ERP Accpac 6.0A comes with a new Web Portal (https://smist08.wordpress.com/2009/12/03/the-sage-erp-accpac-6-0a-portal/) along with Web based screen integration to SageCRM (https://smist08.wordpress.com/2009/12/17/sage-erp-accpac-6-0-quote-to-orders/). A common question is how I set this up in a secure manner so that these new features won’t be exploited by hackers.

Most people will probably just set up Accpac 6.0 and/or SageCRM running on their local network. If you don't expose the web server to the Internet, then your security concerns are the same as they are today. You are just regulating what bits of information your local users are allowed to see. Generally (hopefully) you aren't as worried about your own employees hacking your network. The big concern for security here is usually social engineering (http://en.wikipedia.org/wiki/Social_engineering_(security)), which really requires good education to prevent. Note however that we have seen sites where people have added Internet access for all their employees but unwittingly exposed their network to the Internet. It's never a bad time to re-evaluate your current security to see if there are any weaknesses.

For Sage ERP Accpac 6 we've taken security very seriously and have incorporated security considerations into all parts of our Software Development Methodology. Additionally, we commissioned a third party security audit of our product. From this audit we then made a number of changes to tighten up our security further. As a result, we've been watching for and being careful about SQL injection attacks (http://en.wikipedia.org/wiki/SQL_injection) and cross-site scripting attacks (http://en.wikipedia.org/wiki/Cross-site_scripting), among others.

For any site you should do some sort of threat risk modeling perhaps like: http://www.owasp.org/index.php/Threat_Risk_Modeling. Generally this sort of exercise gets you thinking about what you are trying to protect and what the possible threats are. Even if you do something simple like:

  • Identify bad guys – young hackers, disgruntled ex-employees, competitors, etc.
  • Identify assets – databases that you want protected, servers that should be secure, etc.
  • Identify risks – having your data stolen, having your website vandalized, having your data modified.

Then you can develop plans to protect your assets and to watch for your adversaries.
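Even this simple exercise can be captured concretely. As a rough sketch (the threats, assets and likelihood-times-impact scoring below are made-up examples, not a standard methodology), ranking risks numerically helps you decide where to spend your protection effort first:

```python
# A minimal threat-model sketch: score each threat as likelihood x impact.
# All names and numbers here are illustrative placeholders.

threats = {
    "script-kiddie scan": {"likelihood": 5, "impact": 2},
    "disgruntled ex-employee": {"likelihood": 2, "impact": 4},
    "targeted data theft": {"likelihood": 1, "impact": 5},
}

# The assets you identified that these threats endanger
assets = ["accounting database", "web server", "company website"]

def risk_score(threat):
    """Simple likelihood-times-impact scoring, from 1 (low) to 25 (high)."""
    return threat["likelihood"] * threat["impact"]

# Rank threats so mitigation effort goes to the biggest risks first
ranked = sorted(threats, key=lambda name: risk_score(threats[name]), reverse=True)
for name in ranked:
    print(f"{name}: risk {risk_score(threats[name])}")
```

Even a table this crude forces the useful conversation: which adversary is most likely, and which asset hurts most to lose.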

A lot of security isn’t a matter of being perfect, just being better than others. This way hackers will come across your web site, quickly see it has security in place and then move on to easier targets. Hackers tend to employ automated scripted scanning tools to search the Internet for unprotected servers (http://en.wikipedia.org/wiki/Port_scanner). Just by using HTTPS and not leaving any other ports open, you set the bar quite high, and the scanning tool will usually keep scanning elsewhere.

Nmap/Zenmap

When you expose a web server to the Internet, your first line of defense is the firewall. The firewall’s job is to hide all the internally running processes from the Internet, such as SQL Server or Windows Networking. Basically you want to ensure that the only things people can access from the outside are HTTP and HTTPS (these are ports 80 and 443 respectively). This way the only things a hacker can attack are these ports. Generally hackers are looking for other ports that have been left open for convenience like RDP or SQL Server and then will try to attack these.

A great tool to test if any ports have been left open is Nmap/Zenmap (http://www.nmap.org/). You run this tool from outside your network (perhaps from home) to see what ports are visible to the Internet. Below is a screen shot of running this tool against www.yahoo.com. We see that ports 80 and 443 are open as expected, but so are ports 25 and 53 (which serve email (SMTP) and DNS). With four ports open, a hacker who has an exploit for any one of them can give it a try. Obviously the fewer ports open, the better. Ideally only port 443 for HTTPS would be open (though port 80 is often left open to give a better error message telling people to use HTTPS, or to redirect them to HTTPS automatically).

It is well worth running Nmap so you don’t have any surprises, especially since configuring firewalls can be complicated.
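Between full Nmap runs, even a few lines of code can sanity-check which ports answer. This sketch uses plain TCP connection attempts (the hostname and port list below are placeholders; point it at your own server and run it from outside your network, since from inside the LAN the firewall isn’t in play):

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your server's public hostname; these are the usual
# suspects: web ports plus SQL Server (1433) and RDP (3389).
host = "127.0.0.1"
for port in (80, 443, 1433, 3389):
    status = "open" if port_is_open(host, port) else "closed/filtered"
    print(f"port {port}: {status}")
```

This is no substitute for Nmap (it only tries the ports you list, and can’t tell closed from filtered), but it is handy for a quick recurring check.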

HTTPS

Now with your site protected so the only thing that can be attacked is the web site itself, we need to ensure it is only accessed in a secure manner. As a first step we only access it through HTTPS. This encrypts all communications, ensuring privacy, and validates that users are talking to the right server, avoiding man-in-the-middle attacks (http://en.wikipedia.org/wiki/Man-in-the-middle_attack).
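To see what that server validation buys you, here is a small sketch using Python’s standard ssl module. The default context verifies both the certificate chain and the hostname, which is exactly the check that defeats a man-in-the-middle:

```python
import socket
import ssl

def fetch_server_certificate(host, port=443, timeout=5):
    """Connect over TLS and return the server's validated certificate.

    ssl.create_default_context() verifies the certificate chain and the
    hostname; an intermediary without a valid certificate for `host`
    triggers ssl.SSLCertVerificationError instead of a silent compromise.
    """
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# Example (needs network access):
#   cert = fetch_server_certificate("www.example.com")
#   print(cert["subject"], cert["notAfter"])
```

The point of the sketch is that the browser does this same validation for you on every HTTPS connection, using the server digital certificate discussed below.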

To turn on HTTPS you need a server digital certificate. If you already have one, then great, you are all set. If you don’t have one then you can purchase one from companies like VeriSign (http://www.verisign.com/).

To turn on HTTPS for a web site in IIS, go to the IIS management console, select the “Default Web Site” and choose “Bindings…” from the right hand side. Then add a binding for https; at this point you need to reference your server’s digital certificate.

As a further step, you should now choose “SSL Settings” in the middle panel and check the “Require SSL” checkbox. This will cause IIS to reject any HTTP:// requests and only accept HTTPS:// ones.

Other IIS Settings

If you browse the Internet there are many other recommended IIS settings, but generally Microsoft has done some good work making the defaults good. For instance by default virtual directories are read-only so you don’t need to set that. Also remember that Accpac doesn’t store any valuable data in IIS; it only stores some bitmaps, style sheets and static HTML files there. So if someone “steals” the files in IIS, it doesn’t really matter; this isn’t where your valuable accounting data is stored. We just want to ensure someone can’t vandalize your web site by uploading additional content or replacing what you have there. The valuable data is stored in the database and is only accessible through Tomcat, not directly from IIS.

Database Setup

The new Web Portal honors the security settings set from the Security button in Database Setup. These should be set according to the screen shot below. The most important setting is to disable a user account after x failed password attempts. This prevents automated programs from trying every possible password and eventually guessing the correct one. With the settings below, an automated program can only try 3 passwords every 30 minutes, which will usually cause the hacker to move on and find a less secure site to try to hack.
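The lockout policy described above can be sketched in a few lines. This is an illustration of the idea only, not Accpac’s actual implementation (the function names and data structures below are invented):

```python
import time

MAX_ATTEMPTS = 3          # failed passwords allowed
LOCKOUT_SECONDS = 30 * 60 # before the counter resets

failed = {}  # user -> (attempt_count, time of first recent failure)

def record_failed_login(user, now=None):
    """Record a failed password attempt; return True if now locked out."""
    now = now if now is not None else time.time()
    count, first = failed.get(user, (0, now))
    if now - first > LOCKOUT_SECONDS:
        count, first = 0, now      # old window expired; start over
    count += 1
    failed[user] = (count, first)
    return count >= MAX_ATTEMPTS

def is_locked(user, now=None):
    """True while the account is inside its lockout window."""
    now = now if now is not None else time.time()
    count, first = failed.get(user, (0, 0))
    return count >= MAX_ATTEMPTS and now - first <= LOCKOUT_SECONDS
```

The arithmetic is the whole defense: an attacker limited to 3 guesses per 30 minutes gets 144 guesses a day, which makes brute-forcing even a mediocre password impractical.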

Vigilance

It is generally good practice to remain vigilant. Every now and then review the logs collected by IIS to see if there is a lot of strange activity, like strange-looking URLs or repeated login attempts aimed at your server. If there is, chances are you are being attacked or probed and you want to keep an eye on it. If it is very persistent you might want to work with your ISP or configure your firewall to block the offending incoming IP addresses entirely.
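A simple script can help with that periodic log review. The log format and probe patterns below are illustrative placeholders, not IIS’s actual log layout; adapt the parsing to your configured log fields:

```python
import re
from collections import Counter

# A few signatures that commonly show up in automated probing:
# directory traversal, SQL injection and script injection attempts.
SUSPICIOUS = re.compile(
    r"(\.\./|%2e%2e|union[\s+]+select|<script|cmd\.exe)", re.IGNORECASE
)

def suspicious_ips(log_lines):
    """Count probe-like requests per client IP (assumes 'IP request' lines)."""
    hits = Counter()
    for line in log_lines:
        ip, _, request = line.partition(" ")
        if SUSPICIOUS.search(request):
            hits[ip] += 1
    return hits

sample = [
    "10.0.0.5 GET /index.html",
    "203.0.113.9 GET /login.aspx?user=admin'+UNION+SELECT+password--",
    "203.0.113.9 GET /../../windows/system32/cmd.exe",
]
print(suspicious_ips(sample))
```

An IP that accumulates many hits like this is a good candidate for the firewall block list mentioned above.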

Summary

The important steps are to:

  • Configure IIS for HTTPS (SSL).
  • Disable HTTP (require SSL).
  • Set more stringent security restrictions in Database Setup.
  • Do an Nmap port scan of your server.

Plus follow normal good IT practices like applying Windows Updates and not running services you don’t need. Practices you should follow whether running a web site or not. Then keep an eye on the IIS logs to see if you are being probed or attacked.

These steps should keep your data and your server safe.

Written by smist08

November 20, 2010 at 4:31 am

Posted in sage 300, Security


Sage ERP Accpac 6 Security

with 2 comments

With version 6, Accpac will still be an on-premise deployed application. Even though Accpac will now be a Web based application, customers can still deploy it on their LAN and do not need to expose the Accpac application to the greater World Wide Web. By keeping the application behind a firewall and/or DMZ and not exposing it to the Web, customers keep their data very safe.

However, more adventurous customers will want to expose Accpac to the Web. They will want employees to be able to login from home, airports, hotels or on the road, probably from places (like many hotels) where VPN is blocked by a local firewall. They will want their data just as safe as before, but much more accessible. For these customers especially, but really all customers, we have to do extensive security testing of Accpac to make sure their data is safe. Generally for security we want to ensure the service is available, all transactions are confidential and that the transactions can’t be tampered with.

Accpac will be set up to do all communications through a secure connection called Transport Layer Security (TLS) (previously called Secure Socket Layer (SSL)). This is a very secure method to protect the communication between two computers. It will prevent people spying on the network from reading the information or tampering with the information that is transmitted on the network. It also provides a high level of authentication so you know who you are talking to. This does mean that customers will need to purchase a server digital certificate so that remote clients can ensure they are communicating with the correct service and that an intermediary hasn’t been inserted in between (a man in the middle attack).

TLS does not protect the Browser memory or the Browser User Interface. Malicious web pages may be able to steal data from our user interface forms (cross site scripting attacks (XSS)). Bad user input may be able to cause bad side effects by interfering with our business logic (SQL Injection attacks). We have to test our software to ensure the browser side of our web pages is secure and that malicious user input is caught and dealt with.
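The SQL Injection risk is easy to demonstrate. The toy example below uses SQLite purely for illustration (Accpac’s actual data access layer is different); it shows why parameterized queries, which keep data separate from SQL text, are the standard defense:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # DON'T do this: the inputs are pasted straight into the SQL text,
    # so input like  anything' OR '1'='1  changes the query's meaning.
    query = (f"SELECT COUNT(*) FROM users "
             f"WHERE name = '{name}' AND password = '{password}'")
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized query: the driver treats the inputs as pure data.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

evil = "anything' OR '1'='1"
print(login_unsafe("alice", evil))  # True: the injection bypassed the password
print(login_safe("alice", evil))    # False: treated as a literal string
```

The unsafe version lets the attacker rewrite the WHERE clause; the safe version can’t be rewritten no matter what the user types, which is why our testing hunts for any string-concatenated SQL.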

There are attacks that are outside the scope of our application. If customers don’t follow good practices for maintaining and configuring their servers, then perhaps they can be attacked independently of Accpac. If the customer has malware like a keystroke logger program installed (perhaps by a virus) on their computer, then that program can steal their passwords. These are threats even today with desktop applications. Hence the importance of corporate security practices, like virus checkers and reduced privilege users.

A form of attack that technology can’t solve is the “social engineering” attack: say someone phones a customer and persuades them that they are from the support department and need the customer’s password for some reason. These types of attacks are usually the easiest and most successful. Some sort of awareness training is required so that employees are aware of these and know to never give out sensitive information like their password over the phone, or via any other means.

Other attacks aren’t intended to steal anything, but just to take your system down. These are denial of service attacks. A hacker could, say, set up hundreds of computers (real or virtual) to make invalid login requests to your web server. None would ever succeed, but the load of rejecting them could make your system unusably slow. Or perhaps they can find a way to crash your web server or application. Then the hacker could blackmail you into paying to make it stop. Or maybe he doesn’t care, and is only doing it because he can. Or maybe a competitor is trying to put you out of business. These are certainly very serious attacks that must be guarded against.
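One common mitigation layer for such login floods is throttling requests per source address before doing any expensive work. The sketch below is illustrative only; in practice this usually lives in the firewall or load balancer rather than in application code alone:

```python
import time

RATE = 5          # requests allowed...
PER_SECONDS = 60  # ...per window, per source IP

windows = {}  # ip -> (window_start, request_count)

def allow_request(ip, now=None):
    """Fixed-window throttle: reject a source that exceeds RATE per window."""
    now = now if now is not None else time.time()
    start, count = windows.get(ip, (now, 0))
    if now - start >= PER_SECONDS:
        start, count = now, 0      # new window; reset the counter
    count += 1
    windows[ip] = (start, count)
    return count <= RATE
```

Rejecting a throttled request costs almost nothing, so a flood of invalid logins no longer drags down the real work of checking passwords. (It doesn’t stop a distributed attack from many addresses; that layer needs upstream help from your ISP.)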

Security testing is fun, because the testers become hackers and have to find ways to break into the software. They get to use new techniques like fuzz testing to find problems (http://www.owasp.org/index.php/Category:OWASP_JBroFuzz). They get to study criminals to learn their techniques to ensure we are safe from them. Security tends to be a journey; hackers are always inventing new techniques to gain access. Often testers will use “hacker” tools downloaded from the Internet to ensure they can’t be used to compromise our application. Testers study the traffic on the wire with tools like WireShark (http://www.wireshark.org/) to examine all the packets on the network. There are many tools to scan your application and server for vulnerabilities. Source code has to be reviewed and tested to ensure clever user input can’t cause problems from things like SQL Injection attacks (http://en.wikipedia.org/wiki/SQL_injection).
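The fuzzing idea itself is simple, even though real tools like JBroFuzz are far more systematic. A toy sketch (the parser here is an invented stand-in): feed random input at a routine and confirm it only ever fails cleanly, never crashes:

```python
import random
import string

def parse_amount(text):
    """Parse a currency amount like '1,234.56'; raise ValueError if bad."""
    return float(text.replace(",", ""))

def fuzz(parser, rounds=1000, seed=42):
    """Throw random printable strings at parser; count clean rejections.

    Any exception other than ValueError escapes this loop, which is the
    fuzz test failing: it means malformed input crashed the parser.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(rounds):
        length = rng.randint(0, 20)
        candidate = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            parser(candidate)
        except ValueError:
            failures += 1   # a clean, expected rejection is fine
    return failures

print(f"{fuzz(parse_amount)} of 1000 random inputs cleanly rejected")
```

The interesting runs are the ones where a weird input slips through or triggers an unexpected exception; those are the cases a hacker would weaponize.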

Generally we want to ensure that, when installed properly on a Web Server using TLS, Accpac is a very hard to crack application. We will need to publish best practices for installing and configuring servers. Generally newer server operating systems turn everything off by default so there isn’t much for hackers to latch onto. Mainly it’s up to the operators to ensure only a minimum of services are installed, that all patches are applied and to monitor the server logs for strange activity. Hopefully this will mean making Accpac available to remote Web users will be fairly easy and safe. But as always with security, it pays to be vigilant.

Written by smist08

March 6, 2010 at 10:52 pm

Posted in Security


On Paranoia and Security

leave a comment »

I bought Bruce Schneier’s boxed set of three books: “Applied Cryptography”, “Practical Cryptography” and “Secrets and Lies”. Hopefully reading all this will make me sufficiently paranoid to deal with the security threats we’ll be facing as we move into the SaaS world. Bruce says that originally, when he wrote “Applied Cryptography”, he thought all the security problems on the Internet could be solved by mathematics: that a few powerful cryptographic algorithms would solve all the security problems out there. He now realizes that this isn’t the case; there are so many other weak links to be exploited, like bad implementations, lack of vigilance, human error, etc. In fact many people feel that if they are connected to a web site via SSL or TLS then they are fully secure. However this just isn’t the case.

SSL and TLS only protect the connection between the client computer and the server. They don’t protect the client computer. They don’t secure or encrypt data stored in the Browser’s memory. They don’t force you to use secure passwords. They don’t force you to check the validity of the root certificate authority used by a server. They don’t force you to use the maximum encryption settings possible. They don’t force you to run antivirus and anti-spyware software.

Generally the security business is described as a “Red Queen’s Race”: the people trying to protect systems are running harder and harder just to stay in the same place. The advances made by hackers are very impressive. Even now that most cryptographic algorithms’ patents have expired and governments aren’t trying to suppress them as military secrets anymore, it will take much more than mathematics to provide a secure Internet.

Another point he makes is that it is possible to create a secure operating system. But since there is no liability for software vendors when break-ins occur, there is no real motivation for anyone to make a more secure operating system. For instance it would cost Microsoft billions to really address the problems in Windows, but since there isn’t any liability, besides a bit of bad press, why would they? As it is they are content just to spend a bit of time releasing Windows Updates for the holes found by hackers. But what about all the holes that hackers have found and not told them about?

But in spite of all the negativity, it is possible to create a reasonably secure system (i.e. secure enough that hackers will look elsewhere for easier targets). With a reasonable amount of vigilance, best practices and intelligence, you can run a secure system. But you have to stay alert and not believe that SSL and a firewall are all the protection you need.

Written by smist08

March 28, 2009 at 4:08 am

Posted in Security
