Are Some Programming Languages More Secure Than Others?

Security-related bugs can turn up in any programming language, but some languages are more prone to them than others. Some newer languages are designed to make such errors harder to commit. Others have “features” that are convenient but encourage code that’s easy to exploit.

Some languages fare much worse than others in the Common Weakness Enumeration (CWE). That’s partly a matter of how developers use them and how widely deployed they are, but some languages are inherently prone to certain weaknesses.

Rather than picking on specific languages, let’s look at the language features that lead to security problems and then note some examples.

Pointers running wild

Buffer overflow exploits are the basis of some well-known attacks on software. The trick is to get the code to read or write a block of data beyond its allocated bounds. Beyond those bounds may lie executable code, URLs, confidential data, and other items that an outsider can alter by sending the right data through a public user interface.

Attacking the runtime stack is a popular way to exploit buffer overflow, since its structure is predictable.

The most notorious language for this problem is C. It lets code allocate data blocks and set pointers, and it does absolutely nothing to check the pointers’ validity. Dennis Ritchie created it as a low-level language suited for many purposes, but especially for systems programming. In the days of slow computers, every instruction saved meant a performance improvement, and leaving out bounds checking made tight loops faster. It also meant that defective code could overwrite any part of memory that didn’t have hardware protection, but that was less important before the Internet.
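
As a minimal sketch of what that looks like in practice, consider this hypothetical C request handler: it copies attacker-controlled input into a fixed-size stack buffer with no length check. The function and buffer names are illustrative, not taken from any real codebase.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical handler: copies attacker-controlled input into a
 * fixed-size stack buffer with no length check. */
void handle_request(const char *input)
{
    char name[16];           /* 16 bytes reserved on the stack        */
    strcpy(name, input);     /* writes past 'name' if input is longer */
    printf("Hello, %s\n", name);
}

/* A safer variant bounds the copy explicitly. */
void handle_request_safely(const char *input)
{
    char name[16];
    snprintf(name, sizeof name, "%s", input);  /* truncates instead of overflowing */
    printf("Hello, %s\n", name);
}
```

Anything longer than 15 characters plus the terminating NUL spills into adjacent stack memory, including saved return addresses, which is what makes the stack such a predictable target.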

C++ and Objective-C have the same problem, since they can do anything C does, but they have alternatives that decrease the need for raw pointers. Assembly language shares the issue, though little coding is done down at that level anymore.

Run-time interpreted code

Several popular languages are designed to run under an interpreter. It’s convenient, since code can be loaded or generated on the fly and doesn’t need to be compiled. Well-known examples are Python, PHP, Perl, and JavaScript. Some of them even let the interpreter execute text strings as source code.

There are several problems here.

First, someone who can alter the code files can change their behavior. When the files are stored on a poorly secured web server, well-known attack techniques can alter or replace the code. The new code can take over the server, steal information, or deliver malware to users.

Second, websites that allow uploads can accept executable code from strangers if they aren’t designed carefully. Unrestricted file upload is on the OWASP Top Ten list of vulnerabilities, and the ability to upload executable code is the main reason.

Third, languages that support the execution of text strings allow malware obfuscation. Scanning software looks for malicious patterns, but if the hostile code isn’t explicitly in the source file, it’s hard to catch. Tricks like Base-64 encoding make it hard to tell that a string is executable code at all, much less that it’s harmful.

Another issue, though it cuts both ways, is that run-time interpreted code is often visible to the public. Hostile parties can analyze it and find weaknesses. On the positive side, “white hat” analysts can do the same and report vulnerabilities that might not be caught otherwise.

The maintainers of languages like PHP have been deprecating the features that are most amenable to hiding malicious code, but the problem still exists.

Error-prone variables

Some languages don’t have definite rules on variable initialization. If code initializes a variable without an explicit value, its value may differ from one implementation to another, or even one run to another.

Global variables are a useful but dangerous feature. It’s hard to track what code changes them. Malicious code can exploit bugs in their handling.

Lack of type safety

Properly managed type conversions, such as turning integers into floating-point numbers, don’t pose a security problem. What creates a risk is completely unchecked conversion. This brings us back to C. An assignment can convert a number to a pointer, or vice versa, and code can add an integer to a pointer. That flexibility tempts programmers into convenient shortcuts. But if an intruder can slip a nasty value (such as a negative one) into a harmless-looking integer, the way is open to attacks similar in effect to buffer overflow.
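
Here’s a hedged sketch of that negative-value trick in C. The length field arrives from untrusted input; a signed check passes, and then the silent conversion to the unsigned size_t that memcpy expects turns -1 into an enormous copy. The function names are hypothetical.

```c
#include <string.h>

#define BUF_SIZE 64

/* Hypothetical message copier: 'len' arrives from an untrusted source. */
void copy_message(char dst[BUF_SIZE], const char *src, int len)
{
    if (len < BUF_SIZE) {               /* a length of -1 passes this signed check      */
        memcpy(dst, src, (size_t)len);  /* ...then becomes SIZE_MAX: massive overwrite  */
    }
}

/* Safer: reject negative lengths before any conversion takes place. */
void copy_message_safely(char dst[BUF_SIZE], const char *src, int len)
{
    if (len >= 0 && len < BUF_SIZE) {
        memcpy(dst, src, (size_t)len);
    }
}
```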

Timing issues

Some modern languages have built-in multithreading. It improves efficiency but makes execution non-deterministic. Some problems may turn up only when certain actions overlap in different threads. Java is the most popular language with full built-in support for the feature.

Completely debugging multithreaded code is difficult. It could contain bugs that manifest themselves only once in a million runs, and an attacker can patiently keep trying until that rare situation arises.
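
As an illustration of how small the window can be, here is a minimal race-condition sketch in C with POSIX threads; the same hazard applies in Java or any other multithreaded language, and the account scenario is hypothetical.

```c
#include <pthread.h>
#include <stdio.h>

static long balance = 1000;   /* shared state with no lock protecting it */

static void *withdraw(void *arg)
{
    (void)arg;
    if (balance >= 800) {     /* both threads can pass this check...                */
        balance -= 800;       /* ...and both withdraw, driving the balance negative */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, withdraw, NULL);
    pthread_create(&t2, NULL, withdraw, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final balance: %ld\n", balance);  /* usually 200, occasionally -600 */
    return 0;
}
```

Guarding the check-and-update with a mutex (or an atomic compare-and-swap) closes the window, but nothing in the language forces the programmer to do so.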

Multithreaded code can also be vulnerable to denial-of-service attacks. The right sequence of inputs might induce an application to generate thread after thread without ever dismissing them, using up all available memory and dragging processing to a halt.
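
The pattern usually looks something like the following C sketch: one new thread per incoming request, with no cap and nothing reclaiming them. The accept_next callback is a hypothetical stand-in for the real network accept call.

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical per-request handler; may block for a long time. */
static void *serve_request(void *conn)
{
    (void)conn;
    /* ... read the request, do work, write the response ... */
    return NULL;
}

/* Vulnerable pattern: an unbounded thread per request.
 * A flood of requests exhausts memory and OS thread limits. */
void accept_loop_unbounded(int (*accept_next)(void))
{
    for (;;) {
        intptr_t conn = accept_next();
        pthread_t t;
        pthread_create(&t, NULL, serve_request, (void *)conn);
        pthread_detach(t);   /* threads pile up as fast as requests arrive */
    }
}
```

A fixed-size thread pool or a hard limit on concurrent connections keeps a burst of traffic from becoming an outage.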

Safer languages

Language designers have recognized these problems and created newer languages that reduce the risks. Features to look for include these:

  • Clean language design, with built-in security features.
  • Explicit rules for variable initialization.
  • Compilation rather than run-time interpretation.
  • Memory management built into the language.
  • A strong set of features for handling concurrency.
  • Type-safe conversions.
  • Design for running in a virtual machine or sandbox.
  • Good error handling, whether by exceptions or other means.

Rust is a leading example of a language that supports security and low-level programming at the same time. It allows “unsafe” features when they’re needed but requires the coder to mark them as such, so they can get special attention and won’t be introduced by accident.

Haskell, a functional language, has a reputation for being strong on security. Most type conversions have to be explicit, there is no mutable global state, and ordinary functions can’t have side effects; code that performs I/O or mutation has to say so in its type.

Kotlin is a JVM language, fully interoperable with Java, designed with an emphasis on safety. It makes it difficult to accidentally dereference a null reference, and it has strong support for type safety.

Remember, though, that no language can keep developers from writing bad code. All software development requires careful coding and testing, and third-party code can contain unknown risks. Switching to a safer language helps, but making security part of the development process is the most important thing. Kiuwan can help you to assess the security of your software and eliminate vulnerabilities, whatever language you use for development.
