std::string

I started writing up 35C3 CTF’s task stringmaster1, but as I progressed I realized I’d need a separate blog post to cover the nuances of std::string without overwhelming the reader. So here we go: std::string, byte after byte.

std::string in gdb

Imagine such a program:

example_fixed.png

Compiled as follows:

string_example_comp

Let’s start it in gdb and disassemble the main function:

dump_fixed.png

I highlighted 3 addresses: where do they point?

As you probably know (and if you don’t, here’s a link to the Wikipedia):

The .data segment contains any global or static variables which have a pre-defined value and can be modified. That is any variables that are not defined within a function (and thus can be accessed from anywhere) or are defined in a function but are defined as static so they retain their address across subsequent calls.

The strings’ values that we hardcoded must be stored somewhere, and the addresses that the disassembled code uses seem to point into the .data segment. We can verify that by executing info variables in gdb and scrolling to the surroundings of this address:

data_start.png

As you can see, this address lies above the __data_start symbol (i.e. __data_start has a lower address), so it must be declared in the .data section.
Figuring this out would be easier by calling nm string_example outside of gdb (gdb prints much more than we need).
But I’m drifting off topic; let’s focus on the strings themselves.

Let’s set a breakpoint near the end of the program, after all three strings have been initialized with the given values, and print the strings’ addresses:

asdasd

So strings’ addresses are:

s1 - 0x7fffffffda90
s2 - 0x7fffffffda70
s3 - 0x7fffffffda50

std::string takes 32 bytes on my x86_64 machine – you can verify that by running a program that prints sizeof(std::string).
Since the strings are declared one after another, printing 3 * 32 bytes starting at the address of the first string will show all 3 of them:

addresses.png
x/24x 0x7fffffffda90 means: examine 24 4-byte words starting at address 0x7fffffffda90.
You get 24 by dividing the number of bytes you want printed (32 * 3 in our case) by 4.

Here’s where the action starts.
Since we know that each std::string occupies 32 bytes, I’ll colorize them by different colors and label by variable name:

addresses_2 fixed.png

We know lengths of our strings, which are:

s1 - "123" - 3 - 0x3
s2 - "123456789" - 9 - 0x9
s3 - "1234567890abcdefgh!@#" - 21 - 0x15

Can we identify those bytes in the image? Yes – that’s the 4th column in the image above:

addresses_3_fixed.png

From looking at the sources (we’ll cover them later) I know that the length is a 64-bit value, so it takes 2 columns (2 x 4 bytes = 64 bits).

What else can be identified?
Individual characters we put into the strings.

Look at the first and second columns – 0x34333231, then 0x38373635, then 0x39; converted from ASCII values, they spell out what the string contains:

decode.png

Let’s mark this finding on the image with a ‘d’ character, as an abbreviation of ‘data’:

addresses_4.png

But wait – look at s3, the longest one: where’s the data we supplied? It doesn’t appear the same way as in the other strings… We’ll come back to this in a second.
In the meantime, look at the first 4 bytes of our strings, in the second column.
In s1 and s2, this value appears to point to (store the address of) the ‘d’ section.
So this value must be the pointer to the string’s data!
Again, let’s mark the finding on the image with ‘p’, for ‘pointer’:

addresses_5.png

And that answers the question posed just before – s3’s data is stored away from the string object itself, at 0x00614c20. Printing it reveals the string we put into s3 before:

values_ee.png

Which makes:

sssssss.png

That raises a question: why are some strings stored locally, and some externally, on the heap?*
And a question a watchful reader might ask: what’s stored in the column we didn’t mark, between the 4th and 8th bytes of the std::string?

*We know that the address 0x00614c20 is on the heap, since we can check the heap’s start/end addresses via info proc mappings in gdb:
info proc ma.png

0x00614c20 is greater than 0x603000, but smaller than 0x635000.

 

std::string in sources

The answer to those questions lies in std::string’s sources.

You can access them, e.g. by opening them in your IDE, like CLion – press Ctrl + N and type string – it will look for the class definition.
Another way is to just print them from the command line, like:

pygmentize /usr/include/c++/8/bits/basic_string.h

One way or another, we’ll find the std::string definition. The chunk that interested me is:

stdstring.png

It defines the string’s fields that we’ve marked on the images. There’s the string’s length, the pointer to its data, an array for local data (defined as a union of either the capacity or an array of 15 bytes) – and the field we couldn’t figure out: allocator_type. Let’s mark it on the image:

That makes sense! The s1 and s2 strings we declared, which both have their data stored locally, have the same bytes in the data_allocator field, while s3’s data_allocator is zeroed.

addresses_6.png

So a different allocation strategy is used depending on the string’s length. The local buffer’s size is 15 bytes, so if we try to store a bigger string, as in the case of s3, it gets allocated on the heap instead. This optimization has a name:

small string optimization

If you want to read more about it, here are the sources I used:

https://stackoverflow.com/questions/21694302/what-are-the-mechanics-of-short-string-optimization-in-libc

https://stackoverflow.com/questions/10315041/meaning-of-acronym-sso-in-the-context-of-stdstring/10319672#10319672

https://stackoverflow.com/questions/27631065/why-does-libcs-implementation-of-stdstring-take-up-3x-memory-as-libstdc/28003328#28003328

https://blogs.msmvps.com/gdicanio/2016/11/17/the-small-string-optimization/

There’s more – std::string::npos

As cppreference states:

This is a special value equal to the maximum value representable by the type size_type. The exact meaning depends on context, but it is generally used either as end of string indicator by the functions that expect a string index or as the error indicator by the functions that return a string index.

Note

Although the definition uses -1, size_type is an unsigned integer type, and the value of npos is the largest positive value it can hold, due to signed-to-unsigned implicit conversion. This is a portable way to specify the largest value of any unsigned type.

On my x86_64 platform, the given program:

nposs

prints:

np

it’s 16 times F, since:

a byte = 8 bits; its max value is 0b11111111 = 2^7 + 2^6 + … + 2^0 = 128 + 64 + … + 1 = 255
a byte in hex = 0xff; its max value is F * 16 + F = 15 * 16 + 15 = 255.

so in other words – npos is 8 bytes = 64 bits, all set to 1.

But I’m mentioning it because there can be programs that use it without sanitization, like:

printing_npos_add

Which prints:

comp

Since the condition we provided lacked a check for the index being equal to npos, we’ve overwritten the length of some_string, and consequently made it try to print all
0x5800000000000003 bytes that follow the address pointed to by some_string’s data pointer.

Let’s create 2 more strings in this program, and check whether the same thing happens to their lengths:

npos_src_2

That prints:

repeatable

So it’s repeatable! But why? How come

some_arbitrary_string[std::string::npos]

always points to its length?

Well, as you probably know – variables can overflow, and so can pointers.
I’ll give you a short example of unsigned char overflow and then pointer overflow – they work the same way:

overflow_ex.png

It prints:

diff

If we add some value to a value that’s already at the maximum for the platform, we end up with… ‘some value’ minus 1!

As you learned earlier in this post, std::string’s length is stored 8 bytes before its local data. So if we overflow the pointer to the local data by adding the maximum value it can hold, we end up 1 byte before the local data, inside the length field – and that’s why some_string[std::string::npos] will always point to its length (more precisely, its last byte)!

Conclusion

I wish I could just find blog post like this on the internet instead of writing it myself.

My 35C3 CTF writeup II – 1996

Introduction

After cloning the repository and proceeding to the distrib folder, we find a binary alongside a C++ source file, whose contents are:

1996_src

Example usage of the given binary:

1996_pwd

As we have access to the source code, we can spot some facts:

  • the binary is presumably compiled with no stack protector, which means there’s no extra code preventing buffer overflows
  • there’s a spawn_shell function, which is never called
  • input buffer size is 1024 bytes

Another hint is the title – “1996 – It’s 1996 all over again!” – in 1996 Aleph One wrote an article for Phrack, called “Smashing The Stack For Fun and Profit” which introduced masses to the stack buffer overflow attack.

Notice: Trying to compile it on your own with the included Makefile may result in a binary that still blocks stack overflows – that happened in my case, since my system passed stack-protecting flags by default. Attack the shipped binary to save yourself time.

Vector of attack – stack overflow

For an introduction to the stack overflow attack I refer you to the article mentioned above – it’s linked in the resources section.
What we’ll need to perform it:

  • (virtual) address of the spawn_shell function
  • offset to the return pointer, in bytes

The first one can be retrieved by running our binary in gdb:

gdb_spawn_shell

As you can see, it’s 0x400897.
Now we need to figure out the offset. It must be at least [1024 + 8] bytes, since on the stack there’s a 1024-byte buf array, followed by the saved stack frame pointer, which is 8 bytes on the x86_64 architecture.
From here, we can determine the value either manually or with gdb.

Manual way looks like this:

Let’s write a script that prints 1024 characters and pipe its output into the 1996 binary:

1996_py

It worked, but it didn’t cause a segmentation fault or illegal instruction yet.
Gradually adding more bytes (more A’s) reveals that the offset is 1048 bytes:

segfault

gdb way looks like this:

We open the binary in gdb and disassemble the main function:

gdb_disassemble_main.png

We set a breakpoint at 0x0000000000400954, since after this instruction the stack will be cleared (watch LiveOverflow’s videos linked in the resources to learn how to identify that):

gdb_break

Run the program and type anything on input so the program stops at our breakpoint.
Then examine the RBP and RSP registers:

gdb_run

At this point, RBP contains the address of the bottom of the current stack frame and RSP contains the address of the top of the stack.
Let’s subtract those addresses to obtain the offset:

0xffffdaa0 – 0xffffd690 = 0x410 = 1040.

But, since RBP points at the beginning of the 8-byte saved frame pointer, after which comes the return address we want to overwrite, we need to add 8 bytes to our value, so it becomes 1048.

Final attack

As we now know both the offset and the spawn_shell address, we can feed them to the 1996 binary with python:

attack

We wrote the spawn_shell address with its bytes reversed, since gdb printed the address in human-readable (big-endian) notation, while the x86 processor stores values in memory in little-endian order.
Shell has been successfully spawned. We can print the flag:

attack_succ

Summary

Calculating the stack offset was a good exercise in how calling a function works in assembly – the return address and the saved frame pointer are pushed onto the stack and popped at the end – and in how to examine those in gdb. All needed resources are linked below.

Bonus:

As I said, 0xffffdaa0 is the address where the saved stack frame pointer lies.
Let’s break where we breakpointed before, and print the surroundings of this address:

surroundings.png

The command I typed means:

print, in hexadecimal, the next 64 bytes starting at [top of the stack pointer + 1024 bytes]

So we see the 24 bytes (the first row and a half) before the return address, and the return address itself – 0x00400897, which is the value we wrote.

Resources

My 35C3 CTF writeup I – Poet

Introduction

This year’s Chaos Communication Congress featured an entry-level CTF contest alongside the original one. For everyone who doesn’t know what a CTF is or where to start, a talk followed:

I decided to publish my own solutions along with descriptions for educational purposes, though you’ll probably find other writeups on people’s blogs and in repositories if you don’t find mine clear.

We’ll start by cloning tasks from Github:
35c3poet2

And proceed to the poet/distrib subdirectory. What we find there is a binary which, on execution, asks us to type some strings to stdin, and loops on an incorrect answer:
35c3poet

That’s for the introduction. How did I exploit it?

Vector of attack – Buffer overflow

After checking whether the score depends on anything, what came to my mind was to try to overflow the input buffer, typing enormous quantities of characters and seeing what happens, if anything.

I pasted a big chunk of characters into the first prompt, for the poem’s content, but nothing happened.

I typed more characters than expected into the poem’s author buffer (not that I knew how many characters it expected; again, I just guessed that overflowing might be a possible vector of attack) and eureka – the score counter got an unexpected, non-zero value:

poet3

The next thing I did was find out exactly how many characters I needed to overflow. I just iterated over how many characters I typed, starting with aaa, then aaaa, then aaaaa, etc.
After a few tries I decided to try 33 and 65, since each is a multiple of 32 overflowed by 1, and someone could have picked a multiple of 32 for memory alignment.
That was it – the buffer size was 64; after typing 65 characters, for the first time I saw the score value change:

poet3211

So the next thing I checked: does the score change depending on what the 65th character is? I tried with a zero:

poem123

So for ‘a’ it’s 97 and for zero it’s 48… Let’s have a look at the ASCII table:

The score matches the value of the character I typed! Maybe I could use that to make the score equal 1 million, so it would print the flag for me?

Well, the last clue I needed was: when does the value stop changing – how many characters after the 64th have an impact on the score? Spoiler: 4.

From that I deduced that the score must be stored in a 4-byte integer, and what I needed to figure out was… how to write 1 million in binary. Then I needed to left-pad it with zeroes to 32 bits, split it into 8-bit groups, and look up the ASCII characters for the values in those groups. Here’s a drawing of what I did:

asd1.jpg

 

asd2.jpg

What I meant was “converting 1 million from decimal to binary”

So after figuring out that it was [aaa overflowing sequence] + [@] + [B] + [^O] (a non-printable character), I typed this into the program and got the flag:

flag

Summary

Analysis with the source code, aka: why did overflowing the buffer change the score value?

Afterwards, I looked at the source code of the poem binary. As you can see, the score field was declared just after the author buffer. Why does it matter? Because for the compiler, there’s no abstraction for structures. Under the hood, creating an instance of this struct looks like allocating [1024 + 64 + 4] bytes of contiguous memory. It’s only the way humans interact with this structure – referring to certain bytes by aliases (text, author, score) – that makes it less intuitive to see why the trick worked.

poem_struct


 

PS: The proper way to run these tasks is via Docker. It may also be handy to write some scripts to automate the buffer overflow, since you could then use sockets for communication.

[X11] Further tinkering with X11 root window – Conky

This post refers to the previous one, which I recommend you read first:
https://dbeef.lol/2018/12/26/writing-x11-screensaver-with-c-opengl/

Root windows

In our attempt to write a screensaver for the XScreenSaver server, we came across the concept of virtual root windows.
As Wikipedia states:

 The virtual root window is also used by XScreenSaver: when the screensaver is activated, this program creates a virtual root window, places it at the top of all other windows, and calls one of its hacks (modules), which finds the virtual root window and draws in it.

Our program found the virtual root window that XScreenSaver created when 1 minute of idleness passed, and used its window handle to make OpenGL calls. But as you can probably imagine, if we can find the root window – which is your desktop – we can draw whatever we want over it. And apparently, that’s how widgets (aka screenlets) work.
There’s a program called Conky (named after a doll from the Trailer Park Boys TV series) that does exactly that. As its FAQ states:

Conky is a program which can display arbitrary information (such as the date, CPU temperature from i2c, MPD info, and anything else you desire) to the root window in X11. Conky normally does this by drawing to the root window, however Conky can also be run in windowed mode (though this is not how conky was meant to be used).

As the concept behind Conky’s screenlets is similar to our screensaver’s, let’s install it and examine the anatomy of screenlets.

Prerequisites

The process of installing either from sources or packages is described on their wiki:
https://github.com/brndnmtthws/conky/wiki/Installation
…and I’m assuming that when you type ‘conky‘ in your terminal, it starts a Conky process.
I’m using Ubuntu 16.04 with Compiz.

What’s optional is conky-manager, you may build it from source:
https://github.com/teejee2008/conky-manager

Sample screenlet

Although Conky has built-in support for Lua scripts, that doesn’t mean Conky configuration files are written in Lua (they merely use Lua syntax since Conky 1.10). They’re more like configuration files that define what is drawn and where; to fetch some values, or draw something, they can call Lua scripts. They can also call bash scripts. Some values need neither Lua nor bash, because they are handled by Conky itself.

Conky developers even distribute Conky-related Lua tutorial:
https://github.com/brndnmtthws/conky/wiki/Lua

Making a sample screenlet consists of:

  1. Writing a ~/.conkyrc file, possibly copy-pasting some portions of configuration from other screenlets, since much of it is boilerplate.
    Possible configuration settings are defined here:
    http://conky.sourceforge.net/docs.html
  2. (optional) Writing a Lua or bash script that you may want to call from it, and maybe putting some images/other resources into the script directory. You can refer to the tutorial linked above.
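For illustration, here’s a minimal ~/.conkyrc sketch in the Conky 1.10+ Lua syntax (the settings and variables shown are common Conky ones, but the exact set you’ll want will differ):

```lua
-- Minimal ~/.conkyrc sketch (Conky 1.10+ Lua-style syntax)
conky.config = {
    alignment = 'top_right',  -- where on the screen to draw
    update_interval = 1.0,    -- seconds between redraws
    own_window = false,       -- draw directly onto the root window
};

conky.text = [[
${time %H:%M:%S}
CPU: ${cpu}%
RAM: $mem / $memmax
]];
```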

Example conkyrc with Lua script (not mine):

https://github.com/zenzire/conkyrc

One particularly creative conky

As a fan of Thinkpads, when I saw this:

I immediately downloaded the scripts this guy shared and set them up on my own Ubuntu.
What was needed:

  1. Download this config file and save it as ~/.conkyrc file:
    https://pastebin.com/6VKhNRUJ
  2. Download shell scripts and save them under ~/.bin directory:
    https://pastebin.com/u/u0xpsec
  3. Edit those sh scripts – they refer to the /home/u0xpsec directory; change it to your own.
  4. In terminal, type ‘conky’.

I experienced one annoying bug: on clicking the desktop, this conky disappeared.
The link below fixed the disappearing on desktop click, but didn’t fix the disappearing on alt+tab to the desktop (which hides everything):
https://ubuntuforums.org/showthread.php?t=1717351

What fixed the disappearing on ‘hide all windows’ was a tip I found on the Arch Linux Wiki:
https://wiki.archlinux.org/index.php/conky

Using Compiz: If the ‘Show Desktop’ button or key-binding minimizes Conky along with all other windows, start the Compiz configuration settings manager, go to “General Options” and uncheck the “Hide Skip Taskbar Windows” option.

To install the Compiz configuration settings manager, type:

sudo apt-get install compizconfig-settings-manager

And run it via ‘ccsm’.

Alas, this conky took ~2% of CPU (querying system status is costly), so you may want to think twice before installing it.

Links

Conky has its own subreddits:

https://www.reddit.com/r/Conkyporn
https://www.reddit.com/r/Conky

And you can find more screenlets on DeviantArt, e.g. this one:

https://www.deviantart.com/makisekuritorisu/art/Steins-Gate-Divergence-Meter-Clock-Conky-script-547301330

Which is also very amusing.

[X11] Writing X11 screensaver with C++ & OpenGL

tl;dr

Download XScreenSaver. In your binary you can’t use glfw to create the window; use GLX instead, because you have to hook into the virtual root window.
https://github.com/dbeef/x11-opengl-screensaver/

Prerequisites

As of Ubuntu 11.10, the screensaver server isn’t shipped with the distro (from that moment on it supports only screen blanking), so to enjoy graphical screensavers we’ve got to install it ourselves:

xscreensaver

After installing, the place where screensavers are stored is /usr/lib/xscreensaver,
where a listing shows some default ones:

Screenshot from 2018-12-26 23-12-16

They’re ordinary executable files:

penrose

…and when you run one of them, it creates a window with the screensaver displayed:

penrose

Development

That’s great – or rather, that’s what I thought. Naively believing that any arbitrary binary can be set up as a screensaver, I dropped in one of my OpenGL programs (I put them on my Github) and ran:

xscreensaver-demo

Which launches a tool for choosing & setting up a screensaver from /usr/lib/xscreensaver:

properties

When I eventually selected my program (it’s called Screensaver on the list above), it occurred to me that there are 2 problems.

  1. My program does not show in the little squared window of xscreensaver-demo when selected; it just runs in a new window, unlike the screensavers shipped with the package.
  2. When 1 minute passes and XScreenSaver launches my screensaver, all I see are logs from my screensaver on some black screen, not the window it was supposed to create. (As I saw for a split second when I moved the mouse, the window was indeed created, but it wasn’t floating on top of the others despite the hints I passed to glfw – the black screen was shadowing it.)

What do these problems have in common?

There must be some parent window to hook into when launching my screensaver, so it wouldn’t just run in a new window but rather take a handle from another process. Looks like I can’t just pick any arbitrary program and expect it to work as a screensaver – a pity.

A look at the ‘Root window’ Wikipedia article confirmed my assumptions:

The virtual root window is also used by XScreenSaver: when the screensaver is activated, this program creates a virtual root window, places it at the top of all other windows, and calls one of its hacks (modules), which finds the virtual root window and draws in it.

Down the rabbit hole

I needed some example code of screensavers that ship with XScreenSaver, or any other working artifacts. I found a clever, concise example on Github:

https://github.com/RobertZenz/xscreensavers

I compiled it:

gcc

and it actually worked, just like the other examples!

lavanet

So here’s how RobertZenz did it in his lavanet screensaver:

  1. He included a header called vroot.h, an abbreviation of virtual root window. The root window is the bottom-most window; it’s the parent of every other window. As Wikipedia states:

    In the X Window System, every window is contained within another window, called its parent. This makes the windows form a hierarchy. The root window is the root of this hierarchy. It is as large as the screen, and all other windows are either children or descendants of it.

    The file’s content is 106 lines, more than half of which is a description, which I’ll just put here for clarity because it describes what the header does better than I could:
    carbon (5)

  2. In lavanet.c, vroot.h is used in this way:
    carbon (4)
    The rest of the code is GLX calls and lavanet logic, which is not important for us.

OK! Looks like I can’t just make a new window with glfw.
I need to get that root window first.

At this point I hoped there’d be a way in glfw to create a native X11 Window, configure it (with vroot.h), and pass it to glfw, since glfw exposes some native calls:

https://www.glfw.org/docs/latest/group__native.html

…but I was wrong. There’s just no way. To get a GLFWwindow object, you’ve got to call glfwCreateWindow – it’s the only way.
There’s even an issue on Github, opened in 2013, which has been active through the years; the last answer is from 20 days before I wrote this post:

https://github.com/glfw/glfw/issues/25

It was exactly the same problem I was facing, but a feature that would allow passing a native handle has still not shipped.

What’s left?

Since I used glfw only for convenience (it abstracts window creation so the developer doesn’t have to write platform-specific branches), I could use GLX to get that native window handle.

GLX is like an interface for OpenGL calls that can talk to an X11 server.
As Windows has its own windowing system, it has its own equivalent of GLX, called WGL.
If you’re getting confused, refer to Wikipedia and this answer on StackOverflow:

https://stackoverflow.com/questions/40543176/does-opengl-use-xlib-to-draw-windows-and-render-things-or-is-it-the-other-way-a

The glfw way (code before)

The code I previously used to create windows with glfw looked like this:

carbon (2)

Not much, right? There were some glfw calls on program start:

carbon (3)

But that’s all.

The GLX way (code after)

I got the window handle exactly the same way as in the lavanet example.
Then, in my main loop, I could no longer do:

swapbuffers

So I replaced it with:

swapx

(I also changed the class name to WrapperWindow so it wouldn’t conflict with the X11 Window type.)

I had a window, but I still needed to register a graphics context. And that’s where this very helpful post came in:
http://apoorvaj.io/creating-a-modern-opengl-context.html

So the class after the changes looked like this:

carbon (1)

Building

As it was part of my opengl-playground CMake project that displayed textured cubes, I simply built it and copied the resulting binary into /usr/lib/xscreensaver/.

Then I typed xscreensaver-demo, selected my screensaver, and could preview it – it worked.

Conclusion

Looks like it’s not that hard to make a screensaver for X11 – just make sure you create the native window on your own; the rest is just ordinary OpenGL.

I created a separate CMake project with this X11 screensaver afterwards and put it on Github, so you can try it yourself. For clarity, I cut out the logic, so it only fills the screen with colours every so often. OpenGL and X11 are the only dependencies.

https://github.com/dbeef/x11-opengl-screensaver/

By the way, this time I used a service called carbon:
https://carbon.now.sh
It saved me time – it generates images from source code.

Sources (or just interesting related stuff to read)

How to make an X11 screensaver with python:
https://alvinalexander.com/python/python-screensaver-xscreensaver-linux

https://unix.stackexchange.com/questions/220389/x11-controlling-root-window-or-setting-a-window-to-be-the-background-window-wal

https://stackoverflow.com/questions/2431535/top-level-window-on-x-window-system

https://en.wikipedia.org/wiki/GNOME_Screensaver
https://en.wikipedia.org/wiki/Wayland_(display_server_protocol)
https://en.wikipedia.org/wiki/Screensaver
https://en.wikipedia.org/wiki/Root_window
https://pl.wikipedia.org/wiki/Mesa_3D
https://pl.wikipedia.org/wiki/GLX
https://www.khronos.org/opengl/wiki/Programming_OpenGL_in_Linux:_GLX_and_Xlib
https://softwareengineering.stackexchange.com/questions/162486/linux-opengl-programming-should-i-use-glx-or-any-other
https://stackoverflow.com/questions/40543176/does-opengl-use-xlib-to-draw-windows-and-render-things-or-is-it-the-other-way-a

https://github.com/porridge/xscreensaver/blob/debian-5.10-3ubuntu4/README.hacking
http://www.dis.uniroma1.it/~liberato/screensaver/
https://github.com/gamedevtech/X11OpenGLWindow

A curious consequence of passing undefined pointer to printf

Late in the night I was writing something for SpelunkyDS. By mistake, I passed an uninitialised pointer to printf; its definition looked like this:

Screenshot from 2018-11-18 12-10-32

Obviously, held_sprite_width had an undefined value – it could point to anything, if not set to nullptr. In get_sprite_width, all I called was:

Screenshot from 2018-11-18 12-16-46

What would it print? No, not just some rubbish, as you would expect.
It printed a “FAKE SKELETON”.

Screenshot from 2018-11-18 12-18-44.png

But hang on, why would it print “FAKE SKELETON” anyway?
At first I thought that the pointer was simply pointing to a const char literal that the FakeSkeleton class uses (that class is totally unrelated to the code above; the pointer just happened to point somewhere in FakeSkeleton’s memory area). Here’s some code of the FakeSkeleton class:

Screenshot from 2018-11-18 12-26-03

…but after editing the function I was sure that printf didn’t simply use the literal “FAKE_SKELETON\n” – it called FakeSkeleton::print_typename_newline as a whole!

I edited the function a bit:

Screenshot from 2018-11-18 12-36-13

Which caused:

Screenshot from 2018-11-18 12-36-19

Well, it could also be a compile-time optimization, where printf(“FAKE_SKELETON%i\n”, 666) would be substituted with puts(“FAKE_SKELETON666”), but I’m not sure of that.
https://stackoverflow.com/questions/37435984/why-doesnt-gcc-optimize-this-call-to-printf

If I set the width pointer to nullptr before the printf, the whole effect vanished and zeroes were printed.

Links #1

These articles/posts got me interested lately.
Most of them are about optimizing C++ code.

Data alignment in terms of performance

https://softwareengineering.stackexchange.com/questions/328775/how-important-is-memory-alignment-does-it-still-matter
https://lemire.me/blog/2012/05/31/data-alignment-for-speed-myth-or-reality/

http://www.catb.org/esr/structure-packing/
The clang compiler has a -Wpadded option that causes it to generate messages about alignment holes and padding. Some versions also have an undocumented -fdump-record-layouts option that yields more information.

Dynamic & Static inheritance in terms of performance:

Click to access svsd.pdf

http://www.thinkbottomup.com.au/site/blog/C%20%20_Mixins_-_Reuse_through_inheritance_is_good
Another problem with this approach is the use of virtual functions. We have virtual functions calling virtual functions when we are trying to something relatively simple! It should be noted that the compiler can not generally inline virtual functions and there is some overhead in calling a virtual function compared to calling a non-virtual function. This runtime hit seems unreasonable, but how can we overcome it?


https://en.wikipedia.org/wiki/Mixin
https://en.wikipedia.org/wiki/Virtual_method_table
https://en.wikipedia.org/wiki/Barton%E2%80%93Nackman_trick
https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern
https://stackoverflow.com/questions/20783266/what-is-the-difference-between-dynamic-and-static-polymorphism-in-java

 

Object Oriented Programming pitfalls:

https://www.gamedev.net/blogs/entry/2265481-oop-is-dead-long-live-oop/

Click to access Pitfalls_of_Object_Oriented_Programming_GCAP_09.pdf

https://en.wikipedia.org/wiki/Entity%E2%80%93component%E2%80%93system

Programming patterns:

http://gameprogrammingpatterns.com/data-locality.html
http://gameprogrammingpatterns.com/bytecode.html
http://gameprogrammingpatterns.com/type-object.html
https://en.wikipedia.org/wiki/Mediator_pattern
https://en.wikipedia.org/wiki/Service_locator_pattern

C++ coding principles:

https://en.wikipedia.org/wiki/SOLID

Other

https://stackoverflow.com/questions/109710/how-does-the-likely-unlikely-macros-in-the-linux-kernel-works-and-what-is-their

Having a look at the sources of Haven & Hearth MMO client (I)

Intro

If you’ve never heard of H&H, here’s a snippet from their page:

Haven & Hearth is a MMORPG (Massive Multiplayer Online Roleplaying Game) set in a fictional world loosely inspired by Slavic and Germanic myth and legend. The game sets itself apart from other games in the genre in its aim to provide players with an interactive, affectable and mutable game world, which can be permanently and fundamentally changed and affected through actions undertaken by the players. Our fundamental goal with Haven & Hearth is to create a game in which player choices have permanent and/or lasting effects and, thus, providing said players with a meaningful and fun gaming experience.

But what’s special in my opinion is:

  • it’s developed by a team of two Swedes
  • it’s developed in Java using JOGL
  • client’s code is open and the game itself is free, which means there are many alternative clients today.

You can find original sources’ license here:

http://www.havenandhearth.com/portal/doc-src

Along with a link to their official git repository and notes from the developers.
However, the source I will be dealing with will come from the ‘Amber’ client:

https://github.com/romovs/amber

It provides some additional functionality and is regularly updated:

amber_client_commits

Downloading and building from sources

>> git clone https://github.com/romovs/amber.git 
>> cd amber-1.68.0
>> ant

At this point some weird errors may occur; if so, just run ant once more and it should build successfully. Now, to run, type:

>> cd build/
>> java -jar hafen.jar -U http://game.havenandhearth.com/hres/ game.havenandhearth.com

You’ll end up at a menu screen. However, (at least for me) it wasn’t over, because after logging in an error popped up:

Screenshot from 2018-10-06 13-42-35

I went to the file at the top of the stack trace (Buff.java), found the offending line, guessed what was wrong and changed it from this:

Screenshot from 2018-10-06 13-43-20

To this:

Screenshot from 2018-10-06 13-43-28

Then I built the whole thing once again with ant, and I finally managed to log in.

My brief modifications

Camera zoom

So I browsed through the code for some time after that. I was thinking about what would be easy to do, and came up with the idea of simply enabling maximum camera zoom for a start. It proved to be easy.

Cameras are managed in MapView.java, where there’s a chfield function, which I modified in the following way:

Screenshot from 2018-10-06 14-20-54

By the way, these comments are mine. I hardly ever stumbled upon an existing comment, and the ones I did find were mostly rants about, or hacks around, Java. Anyway, I rebuilt the project and scrolled the camera up into orbit with the arrow keys – it worked. I could see the whole area of the map around the player:

Screenshot from 2018-10-06 14-18-51

Detaching camera from the player

The following idea was:

  • Space bar would toggle detaching
  • If detached, one could move the camera to a point on the map by simply clicking some place on the map, but the player still wouldn’t move there
  • Space bar pressed once again would attach camera to the player and focus on it

As I said, cameras are managed in the MapView.java.
There’s a function that returns the object which defines the player:

Screenshot from 2018-10-06 14-47-27

And a function that, based on that, returns the player’s current coordinates:

Screenshot from 2018-10-06 14-48-16

…which, I guessed, is also used by the camera, so I created an object that caches the player’s position and updates it when:

  • detached mode is on
  • left click on map occurs

So getcc looks like this now:

Screenshot from 2018-10-06 14-56-15
I injected some of my code into the existing function that handles clicking, that is the ‘hit’ function. Before my edits, part of it looked like this:

Screenshot from 2018-10-06 14-58-17

After adding my code, it starts with:

Screenshot from 2018-10-06 14-59-19

To toggle detached mode, I edited the ‘keydown’ function, which starts with:

Screenshot from 2018-10-06 15-00-37

I just added an additional branch to the if-else tree:

Screenshot from 2018-10-06 15-01-57
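Put together, the whole mechanism boils down to something like the sketch below. Note that this is a simplified, hypothetical stand-in: Coord, CameraController and the method bodies only mimic the roles played by the real MapView code (getcc, ‘hit’, ‘keydown’); they are not the actual client classes.

```java
// Simplified stand-in for the real MapView logic - the names mimic the
// roles of getcc/hit/keydown, but this is not the actual client code.
class Coord {
    final int x, y;
    Coord(int x, int y) { this.x = x; this.y = y; }
    @Override public String toString() { return "(" + x + ", " + y + ")"; }
}

class CameraController {
    private boolean detached = false;
    private Coord cached;              // last position the camera should look at

    // What the camera asks for every frame (the role of getcc)
    Coord getcc(Coord playerPos) {
        if (!detached)
            cached = playerPos;        // attached: keep following the player
        return cached;
    }

    // Called from the click handler (the role of 'hit')
    void onMapClick(Coord mapPos) {
        if (detached)
            cached = mapPos;           // move the camera, not the player
    }

    // Called from 'keydown' when space is pressed
    void toggleDetach() { detached = !detached; }
}

public class Main {
    public static void main(String[] args) {
        CameraController cam = new CameraController();
        System.out.println(cam.getcc(new Coord(10, 10))); // attached: follows player
        cam.toggleDetach();                               // space: detach
        cam.onMapClick(new Coord(50, 70));                // click far away
        System.out.println(cam.getcc(new Coord(10, 10))); // camera stays at the click
        cam.toggleDetach();                               // space: re-attach
        System.out.println(cam.getcc(new Coord(10, 10))); // follows the player again
    }
}
```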

That’s all. I rebuilt it and recorded my modifications so that you can watch them:

Summary

Next time we will tackle the networking code (the fun parts of which, as I’ve already found, reside in Session.java).

My article in this month’s “Programista” magazine

It covers the basics of making homebrew for the Nintendo DS (using C++ and devkitPro’s libnds). There are 3 examples:

  • One that makes NDS a wireless controller for the PC
  • One that makes NDS a wireless microphone recording station
  • One that is simply a Pong game

The code for these examples is on my GitHub:

If that made you interested, look for the magazine in the “Empik” stores or buy the PDF on the “Programista” webpage.

Recompiling Ubuntu clock to display in hex

Recently I was thinking about a good way to force myself to learn fast hex<->decimal conversion in my head. Obviously, I thought about reading the current time in hex, but there is no checkbox for that in the vanilla Ubuntu indicator-datetime service (silly Canonical, not including a hex time option).

Why not compile a version that supports it, though? Here’s my modified version:

Screenshot from 2018-09-03 09-05-14
Hex hour to decimal: 09:05:14

And here’s how I did this:

I Download sources

My Ubuntu is 16.04 LTS, which is important because there are different sources for each major version. Anyway, I found sources here:

>> https://launchpad.net/ubuntu/+source/indicator-datetime

This link in particular provides sources for Ubuntu 15 / 16:

>> https://launchpad.net/ubuntu/+source/indicator-datetime/15.10+16.04.20160406-0ubuntu1

II Download dependencies

<in unzipped source folder>
>> mkdir build && cd build
>> cmake ..

My computer lacked certain packages, as I found out when reading cmake’s log.
In each such case I googled “ubuntu package <package name>”, which led me to packages.ubuntu.com; e.g. for libecal I found this package:

>> https://packages.ubuntu.com/xenial/libecal1.2-dev

so I typed, in the terminal:

>> sudo apt-get install libecal1.2-dev

I did this for every missing package until cmake completed successfully.

III Modify sources

Open formatter.cpp, find void update_header() and modify the function so that it looks like this:

Screenshot from 2018-09-03 10-12-37

IV Compile

<in build folder> 
>> make

V Stop the current indicator-datetime.service, run our own for testing

I recommend copying the existing indicator-datetime-service binary first, so you can recover if you change your mind.

Screenshot from 2018-09-03 10-35-59

VI Reboot so the daemon would run our service

PS: I am not affiliated with Ubuntu or Canonical – it’s just tinkering with their open sources.