DEVELOPING A RAYCASTING ‘3D’ ENGINE GAME IN PYTHON AND PYGAME – PART 6

The following changes will be covered in this post:

  • Improved skybox.
  • A fix for a wall-rendering bug introduced by the look up/down functionality.
  • A transparent red screen effect that acts as a damage indicator when an enemy touches the player.

Skybox Improvement


Due to the pattern and size of the image used for the skybox, the image would visibly jump to a different position as the player turned. Although not game-breaking, it was somewhat jarring. To fix this, two changes needed to be made to the image used for the skybox:

  • The image's horizontal (X) resolution needed to match the display window resolution (in this case, 1920 pixels).
  • The image needed to be replaced with a seamless one, i.e., one whose left and right edges align to create an infinitely repeating pattern of clouds.
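With those two changes in place, the horizontal scrolling works because the blit offset wraps cleanly at the image width. A minimal sketch of that mapping, assuming the 1920-pixel window width mentioned above:

```python
import math

RES_X = 1920  # display window / skybox width (assumed from the post)

def sky_offset(angle_radians):
    # Map the viewing angle to a horizontal pixel offset that wraps at
    # the image width; the engine blits the texture at this offset and
    # once more on each side to cover the whole window.
    return -math.degrees(angle_radians) % RES_X
```

Turning left or right just slides the offset; because the image is seamless, the wrap at 1920 pixels is invisible.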

Wall Rendering Bugfix
A bug introduced with the functionality for the player to look up and down caused walls to render misaligned when the player's point of view was not vertically centered and the player was close to the wall in question. The image below shows an example of how the bug manifests:

This results from the game engine's limitations and the lack of a z-axis for proper spatial positioning of items. To work around it, I added automatic vertical recentering of the player's field of view whenever the player moves. This does not completely fix the issue, but it makes it occur far less frequently.

To implement this change, I added the following method to the Player class (in the player.py file):

def level_out_view(self):
        if (self.HALF_HEIGHT - BASE_HALF_HEIGHT) > 50:
            self.HALF_HEIGHT -= 50
        elif self.HALF_HEIGHT - BASE_HALF_HEIGHT < -50:
            self.HALF_HEIGHT += 50
        else:
            self.HALF_HEIGHT = BASE_HALF_HEIGHT

And updated the keys_control method in the Player class as follows:

def keys_control(self, object_map,enemy_map):
        sin_a = math.sin(self.angle)
        cos_a = math.cos(self.angle)
        keys = pygame.key.get_pressed()

        if keys[pygame.K_ESCAPE]:
            exit()
        if keys[pygame.K_w]:
            nx = self.x + player_speed * cos_a
            ny = self.y + player_speed * sin_a
            self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
            if nx == self.x or ny == self.y:
                self.play_sound(self.step_sound)
            self.level_out_view()
        if keys[pygame.K_s]:
            nx = self.x + -player_speed * cos_a
            ny = self.y + -player_speed * sin_a
            self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
            if nx == self.x or ny == self.y:
                self.play_sound(self.step_sound)
            self.level_out_view()
        if keys[pygame.K_a]:
            nx = self.x + player_speed * sin_a
            ny = self.y + -player_speed * cos_a
            self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
            if nx == self.x or ny == self.y:
                self.play_sound(self.step_sound)
            self.level_out_view()
        if keys[pygame.K_d]:
            nx = self.x + -player_speed * sin_a
            ny = self.y + player_speed * cos_a
            self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
            if nx == self.x or ny == self.y:
                self.play_sound(self.step_sound)
            self.level_out_view()
        if keys[pygame.K_e]:
            self.interact = True
            self.level_out_view()
        if keys[pygame.K_LEFT]:
            self.angle -= 0.02
            self.level_out_view()
        if keys[pygame.K_RIGHT]:
            self.angle += 0.02
            self.level_out_view()

Player Visual Damage Indicator
A visual damage indicator lets the player know they are taking damage. This will become more relevant at a later stage when the concept of health points is implemented, but for now, it shows when an enemy is within touching range of the player.
The number of enemies has also been increased to three to increase the chances of a damage event.


The Visual Damage Indicator is implemented by drawing a semi-transparent red rectangle over the screen whenever a collision between the player and the enemy is detected.

To check for these collisions, a new function was added to the common.py file:

def check_collision_enemy(x, y, map_to_check, margin):
    location = align_grid(x, y)
    if location in map_to_check:
        #  collision
        return True

    location = align_grid(x - margin, y - margin)
    if location in map_to_check:
        #  collision
        return True

    location = align_grid(x + margin, y - margin)
    if location in map_to_check:
        #  collision
        return True

    location = align_grid(x - margin, y + margin)
    if location in map_to_check:
        #  collision
        return True

    location = align_grid(x + margin, y + margin)
    if location in map_to_check:
        #  collision
        return True

    return False
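The five grid samples above (the center plus the four corners of the bounding box) can be condensed into a loop. A self-contained sketch with a stand-in align_grid (the tile size of 100 is an assumption; the real helper lives in common.py):

```python
TILE = 100  # assumed tile size; the real align_grid lives in common.py

def align_grid(x, y):
    # Stand-in: snap a world coordinate to the corner of its grid cell.
    return int(x // TILE) * TILE, int(y // TILE) * TILE

def check_collision_enemy(x, y, map_to_check, margin):
    # Sample the player's center and the four corners of its bounding box.
    for dx, dy in ((0, 0), (-margin, -margin), (margin, -margin),
                   (-margin, margin), (margin, margin)):
        if align_grid(x + dx, y + dy) in map_to_check:
            return True
    return False

enemy_map = {(200, 200): 'enemy'}
```

With a wide margin, a corner sample reaches into the enemy's cell and reports contact even though the player's center is in a neighboring cell.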

This function is called from the keys_control method in the Player class:

self.hurt = check_collision_enemy(self.x, self.y, enemy_map, HALF_PLAYER_MARGIN)

In the drawing.py file the background method in the Drawing class was updated as follows:

def background(self, angle, half_height, hurt):
        sky_offset = -1 * math.degrees(angle) % resX
        self.screen.blit(self.textures['S'], (sky_offset, 0))
        self.screen.blit(self.textures['S'], (sky_offset - self.textures['S'].get_width(), 0))
        self.screen.blit(self.textures['S'], (sky_offset + self.textures['S'].get_width(), 0))
        pygame.draw.rect(self.screen, GREY, (0, half_height, resX, resY))
        if hurt:
            RED_HIGHLIGHT = (240, 50, 50, 100)
            damage_screen = pygame.Surface((resX, resY)).convert_alpha()
            damage_screen.fill(RED_HIGHLIGHT)
            self.screen.blit(damage_screen, (0, 0, resX, resY))

RED_HIGHLIGHT is a tuple of four values: the first three are the RGB color code, and the fourth is the alpha (transparency) level, with 0 being completely transparent and 255 completely opaque.
The convert_alpha call converts the surface to a pixel format with per-pixel alpha, so Pygame applies the transparency effect when the surface is blitted to the screen.
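Under the hood, the blend Pygame performs for each pixel of the overlay is standard source-over alpha compositing; a pure-Python sketch of the per-channel math:

```python
def blend(overlay, background, alpha):
    # Source-over compositing for one 8-bit channel: the overlay
    # contributes alpha/255 of its value, the background the remainder.
    a = alpha / 255
    return round(overlay * a + background * (1 - a))
```

With the overlay's red channel of 240 at alpha 100, a mid-grey background pixel (128) shifts noticeably toward red, which produces the damage tint without hiding the scene.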

Here is a video of the effect in action:

The source code for everything discussed in the post can be downloaded here and the executable here.


Make a Rubber Ducky with a Raspberry Pico

I am taking a slight detour from the raycasting series of posts (don’t worry, the next post in the series is coming soon) to cover another small project I have been working on: creating a Rubber Ducky using a Raspberry Pico and CircuitPython.


A Rubber Ducky is a keystroke injection tool, often disguised as a USB flash drive, used to trick an unsuspecting victim into plugging it into their computer. The computer recognizes the Rubber Ducky as a USB keyboard (and mouse, if required), and once plugged in, it executes a sequence of pre-programmed keystrokes against the target computer as if the logged-in user had typed them. The attack thus exploits the security roles and permissions assigned to that user.
This is a good time to note that using a Rubber Ducky for malicious purposes is illegal and a terrible idea, and I take no responsibility for the consequences if anyone chooses to use what they learn here to commit such acts.

To create the Rubber Ducky described in this post, you will need four things:
1. A Raspberry Pico
2. A Micro USB Cable
3. CircuitPython
4. The Adafruit HID library for CircuitPython

First, you will need to install CircuitPython on your Raspberry Pico. This link will provide all the instructions and downloads you will require to do this.
Next, you will need to install the Adafruit HID Library. Instructions on how to do this can be found here.

Now that all the pre-requisites are installed and configured, the source code below can be deployed using the process described in the first link. The source code below executes a sequence of keystrokes that opens Notepad on the target computer and types out a message. Note that the keystrokes are slowed down significantly to make what is happening visible to the user; typically, this would not be done with a Rubber Ducky.

import board
import digitalio
import time
import usb_hid
from adafruit_hid.keyboard import Keyboard
from adafruit_hid.keycode import Keycode

kbd = Keyboard(usb_hid.devices)

led = digitalio.DigitalInOut(board.LED)
led.direction = digitalio.Direction.OUTPUT
led.value = True
time.sleep(10)


def tap(*keycodes):
    # Press and release a key (or key combination), pausing so the
    # keystrokes are slow enough to be visible.
    kbd.press(*keycodes)
    time.sleep(.09)
    kbd.release_all()
    time.sleep(.09)


while True:
    # Open the Run dialog with Win+R, then launch Notepad.
    tap(Keycode.GUI, Keycode.R)
    for key in (Keycode.N, Keycode.O, Keycode.T, Keycode.E,
                Keycode.P, Keycode.A, Keycode.D, Keycode.ENTER):
        tap(key)

    # Type out the message.
    for key in (Keycode.H, Keycode.E, Keycode.L, Keycode.L,
                Keycode.O, Keycode.ENTER):
        tap(key)
    time.sleep(100)

led.value = False
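The hard-coded keystroke sequence could also be generated from the message text. A hypothetical pure-Python sketch (keystrokes is not part of the adafruit_hid library); on the Pico, each returned name would be looked up with getattr(Keycode, name) and passed to kbd.press:

```python
def keystrokes(message):
    # Hypothetical helper: map each character of a message to the name
    # of the adafruit_hid Keycode attribute that would be pressed for it.
    sequence = []
    for ch in message:
        if ch == '\n':
            sequence.append('ENTER')
        elif ch.isalpha():
            sequence.append(ch.upper())  # Keycode.A .. Keycode.Z
        else:
            raise ValueError(f'no mapping for {ch!r}')
    return sequence
```

This only handles letters and newlines; punctuation, digits and shifted characters would need a fuller mapping table.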

Here is a video of the Rubber Ducky in action:


DEVELOPING A RAYCASTING ‘3D’ ENGINE GAME IN PYTHON AND PYGAME – PART 5

The following additions and changes to the game engine will be covered:

  • Adding the ability for the player to look up and down. (As requested by Matthew Matkava)
  • Addition of basic spatial sound to the enemy.
  • Bugfix that relates to the player footstep sounds.

Player Looking Up and Down

First, we will look at adding the ability for the player to look up and down, or more accurately, add the illusion of looking up and down. As mentioned in the first post in this series, the game engine being developed is not actually 3D but rather a pseudo-3D rendering of a 2D game world. This means, in essence, that there is no Z-axis (up and down) in the game world. Thus the player looking up and down is simply an illusion and does not affect the game in any way.

All sprites and walls rendered in the game engine, as well as the point where the skybox ends and the floor starts, use a pre-defined horizon as a reference point to determine the position and height of the items drawn to the screen. Up to this point, the horizon used was half of the game window's vertical resolution, defined in the settings.py file as HALF_HEIGHT:

HALF_HEIGHT = resY // 2

To create the illusion of the player looking up and down, we will move this horizon up and down based on the player's input.

The first thing we need to do is rename HALF_HEIGHT in the settings.py file, as it will still be required but only to determine the center of the game window:

BASE_HALF_HEIGHT = resY // 2

Next, a new horizon value needs to be declared. This is done inside the Player class in the player.py file, as the horizon is now under the player's control; the following line was added to the Player __init__ method:

self.HALF_HEIGHT = BASE_HALF_HEIGHT

BASE_HALF_HEIGHT is set as the starting value, ensuring that the game starts with the player looking straight ahead.

The mouse movement function in the Player class was updated to move the newly defined horizon value based on mouse movement:

def mouse_control(self):
    if pygame.mouse.get_focused():
        difference = pygame.mouse.get_pos()
        difference_x = difference[0] - HALF_WIDTH
        difference_y = difference[1] - BASE_HALF_HEIGHT
        pygame.mouse.set_pos((HALF_WIDTH, BASE_HALF_HEIGHT))
        self.angle += difference_x * self.sensitivity
        if (resY - resY / 4) >= self.HALF_HEIGHT >= resY / 4:
            self.HALF_HEIGHT -= difference_y * self.look_sensitivity
        elif self.HALF_HEIGHT >= (resY - resY / 4):
            self.HALF_HEIGHT = resY - resY / 4
        elif self.HALF_HEIGHT <= resY / 4:
            self.HALF_HEIGHT = resY / 4

Top and bottom boundaries are set to prevent the player from looking too far up and down, which can result in issues in the game engine.
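The bounds can also be expressed as a single clamp. Note the in-game version above applies the delta first and only snaps back on the next mouse event, while this simplified standalone sketch (assuming resY = 1080) clamps immediately:

```python
RES_Y = 1080  # assumed vertical window resolution

def clamp_horizon(half_height, mouse_delta_y):
    # Move the horizon by the vertical mouse delta, then keep it within
    # the middle half of the screen: between resY/4 and resY - resY/4.
    upper = RES_Y - RES_Y / 4
    lower = RES_Y / 4
    half_height -= mouse_delta_y
    return max(lower, min(upper, half_height))
```

A small mouse movement shifts the horizon proportionally; a violent flick simply pins it at the top or bottom limit.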

All references to the original HALF_HEIGHT value defined in the settings.py file need to be changed to use the new Player.HALF_HEIGHT value. The locations of these references are as follows:

  • The background method in the Drawing class (drawing.py)
  • raycasting function (raycasting.py)
  • locate_sprite method in the SpriteBase class (sprite.py)

Basic Enemy Spatial Sound

Next, let us look at adding basic spatial sound to the enemy. The idea behind this implementation is that the sound the enemy makes gets louder as it gets closer to the player. This is implemented in the following method located in the Enemy class (enemy.py):

def play_sound(self, distance):
    if not pygame.mixer.Channel(4).get_busy():
        volume = (1 / distance) * 10
        pygame.mixer.Channel(4).set_volume(volume)
        pygame.mixer.Channel(4).play(pygame.mixer.Sound(self.sound))

Where the distance variable value is set to the distance between the enemy and the player.
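The volume curve is a simple inverse of distance. One subtlety: at distances under 10 the formula exceeds 1.0, which Pygame's set_volume effectively clamps to the 0.0–1.0 range; a sketch with that clamp made explicit:

```python
def enemy_volume(distance):
    # Inverse-distance falloff; Pygame limits channel volume to the
    # 0.0-1.0 range, which is mirrored here explicitly.
    return min(1.0, (1 / distance) * 10)
```

So an enemy within 10 units plays at full volume, while one 100 units away plays at a tenth of it.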
The play_sound method is then called from the move method of the Enemy class as per the code below:

def move(self, player, object_map, distance):
    new_x, new_y = player.x, player.y
    if self.activated:
        if player.x > self.x:
            new_x = self.x + ENEMY_SPEED
        elif player.x < self.x:
            new_x = self.x - ENEMY_SPEED

        if player.y > self.y:
            new_y = self.y + ENEMY_SPEED
        elif player.y < self.y:
            new_y = self.y - ENEMY_SPEED

        self.x, self.y = check_collision(self.x, self.y, new_x, new_y, object_map, ENEMY_MARGIN)
        if (self.x == new_x) or (self.y == new_y):
            self.moving = True
            self.play_sound(distance)
        else:
            self.moving = False

This will result in the enemy, while moving, making a sound with a loudness inversely proportional to the distance to the player.

Footstep Sound Bugfix

A bug in the previous version of the code resulted in the player only making footstep sounds when movement was blocked by a collision with an object. The code below fixes this by checking whether the player's x or y position changed; if either value changed, the footstep sound is played:

def keys_control(self, object_map):
    sin_a = math.sin(self.angle)
    cos_a = math.cos(self.angle)
    keys = pygame.key.get_pressed()
    if keys[pygame.K_ESCAPE]:
        exit()
    if keys[pygame.K_w]:
        nx = self.x + player_speed * cos_a
        ny = self.y + player_speed * sin_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
    if keys[pygame.K_s]:
        nx = self.x + -player_speed * cos_a
        ny = self.y + -player_speed * sin_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
    if keys[pygame.K_a]:
        nx = self.x + player_speed * sin_a
        ny = self.y + -player_speed * cos_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
    if keys[pygame.K_d]:
        nx = self.x + -player_speed * sin_a
        ny = self.y + player_speed * cos_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
    if keys[pygame.K_e]:
        self.interact = True
    if keys[pygame.K_LEFT]:
        self.angle -= 0.02
    if keys[pygame.K_RIGHT]:
        self.angle += 0.02

The source code for everything discussed in the post can be downloaded here and the executable here.


DEVELOPING A RAYCASTING ‘3D’ ENGINE GAME IN PYTHON AND PYGAME – PART 4

In this post, we will cover the following:

  1. Fixing a KeyError bug related to the door sprites.
  2. Refactoring the player collision detection algorithm so that enemies and other non-playable characters can also use it.
  3. Adding very basic enemy artificial intelligence and adding movement animation to the enemy, so the effect of a walking character is created.

Door Sprite KeyError Bug

Door sprites had 16 viewing angles, and the angle at which to change the sprite image was calculated with:

sprite_angle_delta = int(360 / len(self.sprite_object)) 

For 16 images, this results in 22.5 degrees, but the decimal 0.5 is dropped because int() is used, and all operations dependent on sprite_angle_delta use integer values.

This decimal loss results in a dead zone between 352 and 360 degrees that caused the KeyError.

To fix this, the number of sprite images was reduced to 8, as 16 was unnecessary for the purposes we require in this scenario.

Alternatively, the sprite_angle_delta could have been changed to a float variable, and all the dependent operations could have been modified accordingly to facilitate this. However, this would have added unnecessary complexity for the functionality required in the game.
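A minimal reproduction of the bug, using integers as stand-ins for the sprite surfaces (the bucket construction matches the SpriteBase code shown later in this post), shows why angles from 352 upward raised the KeyError:

```python
# Reproduce the angle-bucket construction with 16 sprite images.
num_images = 16
delta = int(360 / num_images)  # 22 -- the 0.5 is truncated
angle_sets = [frozenset(range(i, i + delta)) for i in range(0, 360, delta)]
images = list(range(num_images))  # integers standing in for sprite surfaces

# range(0, 360, 22) yields 17 starts, but zip() stops at the 16 images,
# so the last angle set never receives an image.
positions = {angles: image for angles, image in zip(angle_sets, images)}
unmapped = next(s for s in angle_sets if 355 in s)

# With 8 images, 360 / 8 = 45 divides evenly and every angle is covered.
delta8 = int(360 / 8)
sets8 = [frozenset(range(i, i + delta8)) for i in range(0, 360, delta8)]
```

Looking up positions[unmapped], which is what the angle-matching loop effectively does for an angle of 355 degrees, is exactly the KeyError.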

Refactoring of Collision Detection Algorithm to be More Generic and Reusable

Firstly, the check_collision function was moved out of the Player class and into the common.py file. Next, the function was refactored as per the code below so that it returns either the existing x and y values (before the move) if a collision occurred or the new x and y values (after the move) if no collision was detected:

def check_collision(x, y, new_x, new_y, map_to_check, margin):
    location = align_grid(new_x, new_y)
    if location in map_to_check:
        #  collision
        return x, y

    location = align_grid(new_x - margin, new_y - margin)
    if location in map_to_check:
        #  collision
        return x, y

    location = align_grid(new_x + margin, new_y - margin)
    if location in map_to_check:
        #  collision
        return x, y

    location = align_grid(new_x - margin, new_y + margin)
    if location in map_to_check:
        #  collision
        return x, y

    location = align_grid(new_x + margin, new_y + margin)
    if location in map_to_check:
        #  collision
        return x, y

    return new_x, new_y

The Player keys_control method was modified as per below to facilitate the new check_collision function:

def keys_control(self, object_map):
        sin_a = math.sin(self.angle)
        cos_a = math.cos(self.angle)
        keys = pygame.key.get_pressed()
        if keys[pygame.K_ESCAPE]:
            exit()
        if keys[pygame.K_w]:
            nx = self.x + player_speed * cos_a
            ny = self.y + player_speed * sin_a
            self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
            if nx != self.x and ny != self.y:
                self.play_sound(self.step_sound)
        if keys[pygame.K_s]:
            nx = self.x + -player_speed * cos_a
            ny = self.y + -player_speed * sin_a
            self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
            if nx != self.x and ny != self.y:
                self.play_sound(self.step_sound)
        if keys[pygame.K_a]:
            nx = self.x + player_speed * sin_a
            ny = self.y + -player_speed * cos_a
            self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
            if nx != self.x and ny != self.y:
                self.play_sound(self.step_sound)
        if keys[pygame.K_d]:
            nx = self.x + -player_speed * sin_a
            ny = self.y + player_speed * cos_a
            self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
            if nx != self.x and ny != self.y:
                self.play_sound(self.step_sound)
        if keys[pygame.K_e]:
            self.interact = True
        if keys[pygame.K_LEFT]:
            self.angle -= 0.02
        if keys[pygame.K_RIGHT]:
            self.angle += 0.02

Where object_map is passed in from the main.py file and is created as follows:

object_map = {**sprites.sprite_map, **world_map}

object_map is thus a new dictionary that contains the values of the sprite_map and world_map dictionaries combined.
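One detail worth noting about the unpacking: when the same grid location exists in both dictionaries, the value from world_map wins, because later entries in a {**a, **b} merge overwrite earlier ones:

```python
sprite_map = {(100, 100): 'sprite'}
world_map = {(0, 0): 'wall', (100, 100): 'wall'}

# {**a, **b} builds a new dict; duplicate keys take b's value.
object_map = {**sprite_map, **world_map}
```

For collision detection only key membership matters, so the overwrite is harmless here.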

The check_collision function can now be easily used by enemies as well.

Basic Enemy Artificial Intelligence and Enemy Walking Animation

The enemy will, for now, only have very basic behavior and will try to move towards the player except if an obstacle is in the way.

A new Enemy class was created to accommodate this and is located in a new file called enemy.py.
The contents of the enemy.py file:

from common import *


class Enemy:
    def __init__(self, x, y, subtype):
        self.x = x
        self.y = y
        self.subtype = subtype
        self.activated = False
        self.moving = False

    def move(self, player, object_map):
        new_x, new_y = player.x, player.y
        if self.activated:
            if player.x > self.x:
                new_x = self.x + ENEMY_SPEED
            elif player.x < self.x:
                new_x = self.x - ENEMY_SPEED

            if player.y > self.y:
                new_y = self.y + ENEMY_SPEED
            elif player.y < self.y:
                new_y = self.y - ENEMY_SPEED

            self.x, self.y = check_collision(self.x, self.y, new_x, new_y, object_map, ENEMY_MARGIN)
            if (self.x == new_x) or (self.y == new_y):
                self.moving = True
            else:
                self.moving = False

Sprites have also now been given types and subtypes to help assign appropriate behavior. Sprites are now configured as per this code:

self.list_of_sprites = {
      'barrel': {
        'sprite': pygame.image.load('assets/images/sprites/objects/barrel_fire/0.png').convert_alpha(),
        'viewing_angles': None,
        'shift': 0.8,
        'scale': (0.8, 0.8),
        'animation': deque(
          [pygame.image.load(f'assets/images/sprites/objects/barrel_fire/{i}.png').convert_alpha() for i in
           range(6)]),
        'animation_distance': 2000,
        'animation_speed': 10,
        'type': 'object',
        'subtype': 'barrel',
        'interactive': False,
        'interaction_sound': None,
      },
      'car': {
        'sprite': pygame.image.load(f'assets/images/sprites/objects/car.png').convert_alpha(),
        'viewing_angles': False,
        'shift': 0.3,
        'scale': (2.0, 2.0),
        'animation': [],
        'animation_distance': 0,
        'animation_speed': 0,
        'type': 'object',
        'subtype': 'car',
        'interactive': False,
        'interaction_sound': None,
      },
      'blank': {
        'sprite': [pygame.image.load(f'assets/images/sprites/enemy/blank/stand/{i}.png').convert_alpha() for i
               in
               range(8)],
        'viewing_angles': True,
        'shift': 0.1,
        'scale': (1.0, 1.0),
        'animation': deque(
          [pygame.image.load(f'assets/images/sprites/enemy/blank/walk/{i}.png').convert_alpha() for i in
           range(8)]),
        'animation_distance': 3000,
        'animation_speed': 6,
        'type': 'enemy',
        'subtype': 'blank',
        'interactive': False,
        'interaction_sound': None,
      },
      'sprite_door_y_axis': {
        'sprite': [pygame.image.load(f'assets/images/sprites/objects/door_v/{i}.png').convert_alpha() for i in
               range(8)],
        'viewing_angles': True,
        'shift': 0.01,
        'scale': (2.4, 1.4),
        'animation': [],
        'animation_distance': 0,
        'animation_speed': 0,
        'type': 'door',
        'subtype': 'door_y_axis',
        'interactive': True,
        'interaction_sound': pygame.mixer.Sound('assets/audio/door.wav'),
      },
      'sprite_door_x_axis': {
        'sprite': [pygame.image.load(f'assets/images/sprites/objects/door_h/{i}.png').convert_alpha() for i in
               range(8)],
        'viewing_angles': True,
        'shift': 0.01,
        'scale': (2.4, 1.4),
        'animation': [],
        'animation_distance': 0,
        'animation_speed': 0,
        'type': 'door',
        'subtype': 'door_x_axis',
        'interactive': True,
        'interaction_sound': pygame.mixer.Sound('assets/audio/door.wav'),
      },
    }

The update_sprite_map method has been modified to include enemy flags for where enemies are located. This will be used in the future when enemies can damage the player:

def update_sprite_map(self):
    self.sprite_map = {} # used for collision detection with sprites - this will need to move when sprites can move
    self.enemy_map = {}
    for sprite in self.list_of_sprites:
      if not sprite.delete and sprite.type != 'enemy':
        sprite_location = common.align_grid(sprite.x, sprite.y)
        self.sprite_map[sprite_location] = 'sprite'
      elif not sprite.delete and sprite.type == 'enemy':
        enemy_location = common.align_grid(sprite.x, sprite.y)
        self.enemy_map[enemy_location] = 'enemy'

The SpriteBase __init__ and locate_sprite methods had to be modified to implement the new Enemy class, and logic was added to determine whether the enemy is moving, so that the images loaded under the animation key can be used to create a walking animation.

Here is the code of the __init__ and locate_sprite methods:

def __init__(self, parameters, pos):
    self.sprite_object = parameters['sprite']
    self.shift = parameters['shift']
    self.scale = parameters['scale']
    self.animation = parameters['animation'].copy()
    self.animation_distance = parameters['animation_distance']
    self.animation_speed = parameters['animation_speed']
    self.type = parameters['type']
    self.subtype = parameters['subtype']
    self.viewing_angles = parameters['viewing_angles']
    self.animation_count = 0
    self.pos = self.x, self.y = pos[0] * GRID_BLOCK, pos[1] * GRID_BLOCK
    self.interact_trigger = False
    self.previous_position_y = self.y
    self.previous_position_x = self.x
    self.delete = False
    self.interactive = parameters['interactive']
    self.interaction_sound = parameters['interaction_sound']
    if self.type == 'enemy':
      self.object = Enemy(self.x, self.y, self.subtype)
    else:
      self.object = None

    if self.viewing_angles:
      sprite_angle_delta = int(360 / len(self.sprite_object)) # Used to determine at what degree angle to
      # change the sprite image- this is based on the number of images loaded for the item.
      self.sprite_angles = [frozenset(range(i, i + sprite_angle_delta)) for i in
                 range(0, 360, sprite_angle_delta)]
      self.sprite_positions = {angle: pos for angle, pos in zip(self.sprite_angles, self.sprite_object)}
      self.sprite_object = self.sprite_object[0] # set a default image until correct one is selected

  def locate_sprite(self, player, object_map):
    if self.object:
      self.object.move(player, object_map)
    dx, dy = self.x - player.x, self.y - player.y
    self.distance_to_sprite = math.sqrt(dx ** 2 + dy ** 2)

    theta = math.atan2(dy, dx)
    gamma = theta - player.angle

    if dx > 0 and 180 <= math.degrees(player.angle) <= 360 or dx < 0 and dy < 0:
      gamma += DOUBLE_PI

    delta_rays = int(gamma / DELTA_ANGLE)
    current_ray = CENTER_RAY + delta_rays
    self.distance_to_sprite *= math.cos(HALF_FOV - current_ray * DELTA_ANGLE)

    sprite_ray = current_ray + SPRITE_RAYS
    if 0 <= sprite_ray <= SPRITE_RAYS_RANGE and self.distance_to_sprite > 30:
      projected_height = min(int(WALL_HEIGHT / self.distance_to_sprite), resY * 2)
      sprite_width = int(projected_height * self.scale[0])
      sprite_height = int(projected_height * self.scale[1])
      half_sprite_width = sprite_width // 2
      half_sprite_height = sprite_height // 2
      shift = half_sprite_height * self.shift

      if self.interact_trigger:
        self.interact()
        if self.interaction_sound and not self.delete:
          if not pygame.mixer.Channel(3).get_busy():
            pygame.mixer.Channel(3).play(pygame.mixer.Sound(self.interaction_sound))

      if self.viewing_angles:
        if theta < 0:
          theta += DOUBLE_PI
        theta = 360 - int(math.degrees(theta))

        if self.type == "enemy":
          if self.object.activated:
            theta = 0

        for angles in self.sprite_angles:
          if theta in angles:
            self.sprite_object = self.sprite_positions[angles]
            break

      if self.animation and self.distance_to_sprite < self.animation_distance:
        if self.type == 'enemy':
          if self.object.moving:
            self.sprite_object = self.animation[0]
        else:
          self.sprite_object = self.animation[0]
        if self.animation_count < self.animation_speed:
          self.animation_count += 1
        else:
          self.animation.rotate()
          self.animation_count = 0

      sprite = pygame.transform.scale(self.sprite_object, (sprite_width, sprite_height))
      if not self.delete:
        if (self.type == 'enemy') and self.object:
          self.object.activated = True
          self.pos = self.x, self.y = self.object.x, self.object.y

        return {'image': sprite, 'x': (current_ray * SCALE - half_sprite_width),
            'y': (HALF_HEIGHT - half_sprite_height + shift), 'distance': self.distance_to_sprite}
      else:
        if (self.type == 'enemy') and self.object:
          self.object.activated = False
          self.pos = self.x, self.y = self.object.x, self.object.y
        return None
    else:
      return None

The source code for everything discussed in the post can be downloaded here and the executable here.


DEVELOPING A RAYCASTING ‘3D’ ENGINE GAME IN PYTHON AND PYGAME – PART 3

In this post, the following features added to the game engine will be covered:

  1. Adding music to the game.
  2. Adding animated sprites.
  3. Fixing the distortion of wall textures when the player stands too close to them.
  4. Changing sprite scaling to handle height and width independently.
  5. Adding interactive sprites (doors).

Music

To add music, the following lines of code were added to the main.py file:

pygame.mixer.music.set_volume(0.05)
pygame.mixer.music.load('assets/audio/music/Future Ramen_CPV1_Nexus Nights_Master_24_48k.mp3')
pygame.mixer.music.play(-1)

The ‘-1’ parameter passed to the play function makes the music loop: when the track finishes playing, it starts again from the beginning.

Animated Sprites and Scaling of Sprites

To accommodate the additional values required for animated sprites, as well as separate width and height scaling, the parameters of each sprite are now defined in a dictionary, as below:

self.list_of_sprites = {
            'barrel': {
                'sprite': pygame.image.load('assets/images/sprites/objects/barrel_fire/0.png').convert_alpha(),
                'viewing_angles': None,
                'shift': 0.8,
                'scale': (0.8, 0.8),
                'animation': deque(
                    [pygame.image.load(f'assets/images/sprites/objects/barrel_fire/{i}.png').convert_alpha() for i in
                     range(6)]),
                'animation_distance': 2000,
                'animation_speed': 10,
                'type': 'barrel',
                'interactive': False,
                'interaction_sound': None,
            },
            'zombie360': {
                'sprite': [pygame.image.load(f'assets/images/sprites/enemy/zombie/{i}.png').convert_alpha() for i in
                           range(4)],
                'viewing_angles': True,
                'shift': 0.6,
                'scale': (1.1, 1.1),
                'animation': [],
                'animation_distance': 0,
                'animation_speed': 0,
                'type': 'zombie',
                'interactive': False,
                'interaction_sound': None,
            },
            'car': {
                'sprite': pygame.image.load(f'assets/images/sprites/objects/car.png').convert_alpha(),
                'viewing_angles': False,
                'shift': 0.3,
                'scale': (2.0, 2.0),
                'animation': [],
                'animation_distance': 0,
                'animation_speed': 0,
                'type': 'car',
                'interactive': False,
                'interaction_sound': None,
            },
            'blank': {
                'sprite': [pygame.image.load(f'assets/images/sprites/enemy/blank/{i}.png').convert_alpha() for i in
                           range(8)],
                'viewing_angles': True,
                'shift': 0.6,
                'scale': (1.0, 1.4),
                'animation': [],
                'animation_distance': 0,
                'animation_speed': 0,
                'type': 'blank',
                'interactive': False,
                'interaction_sound': None,
            },
            'sprite_door_y_axis': {
                'sprite': [pygame.image.load(f'assets/images/sprites/objects/door_v/{i}.png').convert_alpha() for i in range(16)],
                'viewing_angles': True,
                'shift': 0.01,
                'scale': (2.4, 1.4),
                'animation': [],
                'animation_distance': 0,
                'animation_speed': 0,
                'type': 'door_y_axis',
                'interactive': True,
                'interaction_sound': pygame.mixer.Sound('assets/audio/door.wav'),
            },
            'sprite_door_x_axis': {
                'sprite': [pygame.image.load(f'assets/images/sprites/objects/door_h/{i}.png').convert_alpha() for i in range(16)],
                'viewing_angles': True,
                'shift': 0.01,
                'scale': (2.4, 1.4),
                'animation': [],
                'animation_distance': 0,
                'animation_speed': 0,
                'type': 'door_x_axis',
                'interactive': True,
                'interaction_sound': pygame.mixer.Sound('assets/audio/door.wav'),
            },
        }

scale is now a tuple containing separate scaling values for width and height.

Additionally, the following values were added, which are related to animating sprites:

animation – for an animated sprite, this contains the images used in rendering the animation. The images are loaded into a double-ended queue (deque), a queue structure where data can be added and removed at both ends.

animation_distance – the distance from the player at which the animation starts being rendered.

animation_speed – the speed at which the animation will be played.

type – used to determine the type of the sprite.

The next two values will be used later in this post when we discuss interactive sprites (doors):

interactive – a flag indicating whether a sprite can be interacted with.

interaction_sound – the sound that is played when interaction with the sprite is triggered.

The implementation of sprite scaling has been changed to scale the width and height of the sprite separately. This allows for more accurate scaling and fixes the distortion of sprites that have a non-symmetrical aspect ratio.

The below code has been added in the sprite.py file:

    sprite_width = int(projected_height * self.scale[0])
    sprite_height = int(projected_height * self.scale[1])
    half_sprite_width = sprite_width // 2
    half_sprite_height = sprite_height // 2
    shift = half_sprite_height * self.shift

And when the sprite is returned by the locate_sprite function, the x and y values are now determined as follows:

return {'image': sprite, 'x': (current_ray * SCALE - half_sprite_width),
        'y': (HALF_HEIGHT - half_sprite_height + shift), 'distance': self.distance_to_sprite}
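As a worked example of the separate width and height scaling (the values below are assumed for illustration and are not taken from the game's settings):

```python
# Assumed example values, not taken from the game's settings:
projected_height = 400   # projected height of the sprite in pixels
scale = (0.5, 1.5)       # width is halved, height is stretched
shift_factor = 0.75      # positive shift moves the sprite towards the floor

sprite_width = int(projected_height * scale[0])   # 200
sprite_height = int(projected_height * scale[1])  # 600
half_sprite_height = sprite_height // 2           # 300
shift = half_sprite_height * shift_factor         # 225.0

print(sprite_width, sprite_height, shift)
```

Because width and height are computed from the same projected height but with independent factors, a tall narrow sprite no longer gets stretched into a square.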

The following logic has been added to the locate_sprite function in the sprite.py file to play the animation:

if self.animation and self.distance_to_sprite < self.animation_dist:
    self.sprite_object = self.animation[0]
    if self.animation_count < self.animation_speed:
        self.animation_count += 1
    else:
        self.animation.rotate()
        self.animation_count = 0

In the logic above, the sprite object that will be rendered to the screen is set to the first item in the double-ended queue. Once the animation speed counter is exceeded, the queue is rotated, cycling which image sits at the front of the queue so that the next frame is shown.
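The deque rotation can be seen in isolation. Note that rotate() with no argument moves the last item to the front, while rotate(-1) moves the first item to the back; either direction cycles through every frame over successive calls:

```python
from collections import deque

frames = deque(['frame0', 'frame1', 'frame2'])
print(frames[0])  # 'frame0' is the frame currently rendered

frames.rotate()   # rotate() defaults to rotate(1): last frame moves to the front
print(frames[0])  # 'frame2'

frames.rotate(-1)  # rotate(-1) does the opposite: first frame moves to the back
print(frames[0])   # 'frame0' again
```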

Fix for the Distortion of Wall Textures

There was a distortion of wall textures that occurred when the player moved too close to a wall. The issue arose because the projected wall height became larger than the screen height at that point; it was rectified by modifying the raycasting function as per below:

            projected_height = int(WALL_HEIGHT / depth)

            if projected_height > resY:
                texture_height = TEXTURE_HEIGHT / (projected_height / resY)
                wall_column = textures[texture].subsurface(offset * TEXTURE_SCALE,
                                                           (TEXTURE_HEIGHT // 2) - texture_height // 2,
                                                           TEXTURE_SCALE, texture_height)
                wall_column = pygame.transform.scale(wall_column, (SCALE, resY))
                wall_position = (ray * SCALE, 0)

            else:
                wall_column = textures[texture].subsurface(offset * TEXTURE_SCALE, 0, TEXTURE_SCALE, TEXTURE_HEIGHT)
                wall_column = pygame.transform.scale(wall_column, (SCALE, projected_height))
                wall_position = (ray * SCALE, HALF_HEIGHT - projected_height // 2)

            x, y = wall_position
            walls.append(
                {'image': wall_column, 'x': x, 'y': y, 'distance': depth})
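To see why the crop works, here is a worked example with assumed values (a 1200-pixel-tall texture and a 1080-pixel-tall screen; the actual values in settings.py may differ):

```python
# Assumed values for illustration only:
TEXTURE_HEIGHT = 1200
resY = 1080
projected_height = 2160  # the wall projects to twice the screen height

# Only a proportional slice of the texture is actually visible on screen:
texture_height = TEXTURE_HEIGHT / (projected_height / resY)  # 600.0
# The slice is taken from the vertical centre of the texture:
crop_top = (TEXTURE_HEIGHT // 2) - texture_height // 2       # 300.0

print(texture_height, crop_top)
```

The middle 600 rows of the texture are then scaled to fill the full screen height, instead of scaling the whole texture beyond the screen bounds.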

Interactive Doors (with Sound)

To implement interactivity in the game world, a few changes had to be made.

Firstly, a new variable called interact needed to be added to the Player class. This is a Boolean value that will be set to true when the player presses the ‘e’ key. Here is the updated player.py file:

from common import *
from map import *


class Player:
    def __init__(self):
        player_pos = ((map_width / 2), (map_height / 2))
        self.x, self.y = player_pos
        self.angle = player_angle
        self.sensitivity = 0.001
        self.step_sound = pygame.mixer.Sound('assets/audio/footstep.wav')
        self.interact = False
        pygame.mixer.Channel(2).set_volume(0.2)

    @property
    def pos(self):
        return (self.x, self.y)

    def movement(self, sprite_map):
        self.keys_control(sprite_map)
        self.mouse_control()
        self.angle %= DOUBLE_PI  # Convert player angle to 0-360 degree values

    def check_collision(self, new_x, new_y, sprite_map):
        player_location = align_grid(new_x, new_y)
        if player_location in world_map or player_location in sprite_map:
            #  collision
            print("Center Collision" + str(new_x) + " " + str(new_y))
            return

        player_location = align_grid(new_x - HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            #  collision
            print("Top Left Corner Collision" + str(new_x) + " " + str(new_y))
            return

        player_location = align_grid(new_x + HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            #  collision
            print("Top Right Corner Collision" + str(new_x) + " " + str(new_y))
            return

        player_location = align_grid(new_x - HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            #  collision
            print("Bottom Left Corner Collision" + str(new_x) + " " + str(new_y))
            return

        player_location = align_grid(new_x + HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            #  collision
            print("Bottom Right Corner Collision" + str(new_x) + " " + str(new_y))
            return

        if not pygame.mixer.Channel(2).get_busy():
            pygame.mixer.Channel(2).play(self.step_sound)  # step_sound is already a Sound object
        self.x = new_x
        self.y = new_y

    def keys_control(self,sprite_map):
        sin_a = math.sin(self.angle)
        cos_a = math.cos(self.angle)
        keys = pygame.key.get_pressed()
        if keys[pygame.K_ESCAPE]:
            exit()
        if keys[pygame.K_w]:
            nx = self.x + player_speed * cos_a
            ny = self.y + player_speed * sin_a
            self.check_collision(nx, ny, sprite_map)
        if keys[pygame.K_s]:
            nx = self.x + -player_speed * cos_a
            ny = self.y + -player_speed * sin_a
            self.check_collision(nx, ny, sprite_map)
        if keys[pygame.K_a]:
            nx = self.x + player_speed * sin_a
            ny = self.y + -player_speed * cos_a
            self.check_collision(nx, ny, sprite_map)
        if keys[pygame.K_d]:
            nx = self.x + -player_speed * sin_a
            ny = self.y + player_speed * cos_a
            self.check_collision(nx, ny, sprite_map)
        if keys[pygame.K_e]:
            self.interact = True
        if keys[pygame.K_LEFT]:
            self.angle -= 0.02
        if keys[pygame.K_RIGHT]:
            self.angle += 0.02

    def mouse_control(self):
        if pygame.mouse.get_focused():
            difference = pygame.mouse.get_pos()[0] - HALF_WIDTH
            pygame.mouse.set_pos((HALF_WIDTH, HALF_HEIGHT))
            self.angle += difference * self.sensitivity

Next, we need to implement a new class called Interaction. This class is implemented in the interactions.py file.

In this class, a function called interaction_world_objects is defined. This function first checks whether the player has pressed the interact key (‘e’) and, if so, iterates through each sprite in the game world, checking whether the sprite’s distance from the player is within range. If the sprite is in range and is an interactive sprite, the sprite’s interact_trigger variable is set to true.

Here is the code contained in the interactions.py file:

from settings import *
from common import *


class Interactions:
    def __init__(self, player, sprites, drawing):
        self.player = player
        self.sprites = sprites
        self.drawing = drawing

    def interaction_world_objects(self):
        if self.player.interact:
            for obj in sorted(self.sprites.list_of_sprites, key=lambda obj: obj.distance_to_sprite):
                px, py = align_grid(self.player.x, self.player.y)
                sx, sy = align_grid(obj.x, obj.y)
                x_dist = px - sx
                y_dist = py - sy
                print('x distance : ' + str(x_dist))
                print('y distance : ' + str(y_dist))
                if obj.interactive:
                    if ((-INTERACTION_RANGE <= x_dist <= INTERACTION_RANGE) and (
                            -INTERACTION_RANGE <= y_dist <= INTERACTION_RANGE)) and not obj.interact_trigger:
                        obj.interact_trigger = True
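The range test reduces to a simple distance comparison on grid coordinates. Here is a minimal sketch of that check, assuming INTERACTION_RANGE covers one grid cell in each direction (the real value and the units returned by align_grid may differ):

```python
INTERACTION_RANGE = 1  # assumed value: one grid cell in each direction

def in_interaction_range(player_cell, sprite_cell):
    # player_cell and sprite_cell are (column, row) grid coordinates,
    # standing in for the values a function like align_grid would return.
    x_dist = player_cell[0] - sprite_cell[0]
    y_dist = player_cell[1] - sprite_cell[1]
    return (-INTERACTION_RANGE <= x_dist <= INTERACTION_RANGE
            and -INTERACTION_RANGE <= y_dist <= INTERACTION_RANGE)

print(in_interaction_range((5, 10), (5, 11)))  # True: adjacent cell
print(in_interaction_range((5, 10), (8, 10)))  # False: three cells away
```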

Lastly, the sprite.py file needs to be updated. First, a check must be done in the locate_sprite function to see if the sprite’s interact_trigger value has been set to true:

if self.interact_trigger:
    self.interact()
    if self.interaction_sound and not self.delete:
        if not pygame.mixer.Channel(3).get_busy():
            pygame.mixer.Channel(3).play(self.interaction_sound)  # already a Sound object

This calls the sprite’s interact function and plays the audio file associated with the sprite’s interaction.

The interact function, shown below, determines the type of the sprite and performs an action based on it:

    def interact(self):
        if self.type == 'door_y_axis':
            self.y -= 1
            if abs(self.y - self.previous_position_y) > GRID_BLOCK:
                self.delete = True
        elif self.type == 'door_x_axis':
            self.x -= 1
            if abs(self.x - self.previous_position_x) > GRID_BLOCK:
                self.delete = True

For the x-axis and y-axis doors, the function moves the sprite to the side, creating the effect of a door opening.
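The sliding behaviour can be simulated outside the engine. A minimal sketch, assuming GRID_BLOCK is 100 pixels and that interact is called once per frame while the trigger is set:

```python
GRID_BLOCK = 100  # assumed tile size in pixels

class Door:
    def __init__(self, y):
        self.y = y
        self.previous_position_y = y
        self.delete = False

    def interact(self):
        # Slide the door sprite one pixel per call; once it has moved a
        # full grid block, mark it for removal so the doorway is clear.
        self.y -= 1
        if abs(self.y - self.previous_position_y) > GRID_BLOCK:
            self.delete = True

door = Door(500)
while not door.delete:  # one call per rendered frame in the real game
    door.interact()
print(door.y)  # 399: the door slid just past one grid block before removal
```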

The source code for everything discussed in the post can be downloaded here and the executable here.

The next thing to be implemented is NPC characters that move around the game world. Check for future posts on this topic.


DEVELOPING A RAYCASTING ‘3D’ ENGINE GAME IN PYTHON AND PYGAME – PART 2

In this post, the addition of the following features to the game engine will be covered:

  1. General Enhancements (making the number of rays scale based on the resolution and splitting the raycasting and drawing functionality into separate functions).
  2. Adding sound (for now, only footsteps).
  3. Hiding the mouse cursor.
  4. Adding a fullscreen flag (set to run the game in fullscreen or windowed mode).
  5. Adding static flat sprites (same image from all directions).
  6. Converting the player angle to a 0-360 degree value (removing the potential for negative values).
  7. Adding collision detection with sprites.
  8. Implementing a Z-buffer.
  9. Adding multi-angle sprites (different images from different viewing angles).

General Enhancements

To scale the number of rays to the resolution the following logic is added in the settings.py file:

NUM_RAYS = int(resX / 4) # Would work with all standard resolutions
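As a quick check of the scaling at a few common horizontal resolutions:

```python
# resX stands in for the horizontal display resolution from settings.py.
for resX in (1280, 1600, 1920):
    NUM_RAYS = int(resX / 4)
    print(resX, NUM_RAYS)
# 1280 320
# 1600 400
# 1920 480
```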

All logic related to drawing images to the screen has now been removed from the raycasting function and moved to the drawing.py file. This is done for future extensibility and to facilitate the drawing of items other than walls.

Sound

In order to add the sound of footsteps, the sound clip needs to be loaded into a variable:

step_sound = pygame.mixer.Sound('assets/audio/footstep.wav')

And then, every time the player moves, the clip is played, but first, a check is done to ensure the sound is not already playing. This is to avoid the sound playing over itself, resulting in an audio mess:

if not pygame.mixer.Channel(2).get_busy():
     pygame.mixer.Channel(2).play(step_sound)  # step_sound is already a Sound object

Hide Mouse Cursor and Fullscreen

To hide the mouse cursor and add a fullscreen flag, the following code was added to the main.py file:

pygame.mouse.set_visible(False)
screen = pygame.display.set_mode((resX, resY), SET_FULLSCREEN)

With the SET_FULLSCREEN flag being defined and set in the settings.py file.

Here is the source code with the changes up to this point.

Static Sprites

The next major thing added was static sprites.

Let us now examine how sprites are rendered in the game engine.

Sprites are image files (png files with transparency) that are scaled and positioned to create the appearance of a tangible object in the pseudo-3D world.

The image below illustrates the values at play for determining the sprite positioning and scaling:

Thus

gamma (γ) = theta (θ) – player angle (a)

and

theta (θ) = atan2(dy, dx)

Where dx and dy are the differences between the sprite’s and the player’s x and y coordinates, and atan2 determines the arctangent of the point (y, x) in radians, with a potential value between -π and π.
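The behaviour of math.atan2 can be checked directly (note the (y, x) argument order):

```python
import math

print(math.atan2(1, 1))    # 0.785..., i.e. pi/4 (up and to the right)
print(math.atan2(-1, -1))  # -2.356..., i.e. -3*pi/4 (down and to the left)
print(math.atan2(0, -1))   # 3.141..., i.e. pi (directly to the left)
```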

The distance from the player to the sprite is calculated as follows:

Distance to Sprite (d) = sqrt(dx ** 2 + dy ** 2) * cos(HALF_FOV – current_ray * DELTA_ANGLE)

where

current_ray = CENTER_RAY + delta_rays

and

delta_rays = int(gamma / DELTA_ANGLE)

and

CENTER_RAY = NUM_RAYS // 2 – 1

Here is the code of how this is implemented:

    def locate_sprite(self, player):
        dx, dy = self.x - player.x, self.y - player.y
        distance_to_sprite = math.sqrt(dx ** 2 + dy ** 2)

        theta = math.atan2(dy, dx)
        gamma = theta - player.angle

        if dx > 0 and 180 <= math.degrees(player.angle) <= 360 or dx < 0 and dy < 0:
            gamma += DOUBLE_PI

        delta_rays = int(gamma / DELTA_ANGLE)
        current_ray = CENTER_RAY + delta_rays
        distance_to_sprite *= math.cos(HALF_FOV - current_ray * DELTA_ANGLE)

        sprite_ray = current_ray + SPRITE_RAYS
        if 0 <= sprite_ray <= SPRITE_RAYS_RANGE and distance_to_sprite > 30:
            projected_height = min(int(WALL_HEIGHT / distance_to_sprite * self.scale), resY*2)
            half_projected_height = projected_height // 2
            shift = half_projected_height * self.shift

            sprite = pygame.transform.scale(self.sprite_object, (projected_height, projected_height))
            return {'image': sprite, 'x': (current_ray * SCALE - half_projected_height), 'y': (HALF_HEIGHT - half_projected_height + shift), 'distance': distance_to_sprite}
        else:
            return None

This logic is implemented in the sprite.py file.

For the above logic to function, player.angle needs to stay within the range of 0 to 2π radians (0 to 360 degrees). This is done by adding the following line to the movement function in the Player class:

self.angle %= DOUBLE_PI 
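The modulo keeps the angle inside [0, 2π); for example, a small negative angle wraps around to just below 2π:

```python
import math

DOUBLE_PI = 2 * math.pi
angle = -0.5        # slightly anti-clockwise of angle 0
angle %= DOUBLE_PI  # Python's % returns a non-negative result for a
                    # positive modulus, so the angle wraps cleanly
print(angle)        # about 5.783, the same direction expressed in [0, 2*pi)
```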

Here is the source code with static sprite feature added.

Sprite Collision Detection

The next feature added was collision detection with sprites. This functions in the same way as collision detection with walls.

A new dictionary similar to world_map was created, called sprite_map. It is used to store the location of all sprites in the game world.

self.sprite_map = {} # used for collision detection with sprites
sprite_location = common.align_grid(sprite.x, sprite.y)
self.sprite_map[sprite_location] = 'sprite'

Next, the player collision detection function was updated as below:

    def check_collision(self, new_x, new_y, sprite_map):
        player_location = align_grid(new_x, new_y)
        if player_location in world_map or player_location in sprite_map:
            #  collision
            print("Center Collision" + str(new_x) + " " + str(new_y))
            return

        player_location = align_grid(new_x - HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            #  collision
            print("Top Left Corner Collision" + str(new_x) + " " + str(new_y))
            return

        player_location = align_grid(new_x + HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            #  collision
            print("Top Right Corner Collision" + str(new_x) + " " + str(new_y))
            return

        player_location = align_grid(new_x - HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            #  collision
            print("Bottom Left Corner Collision" + str(new_x) + " " + str(new_y))
            return

        player_location = align_grid(new_x + HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            #  collision
            print("Bottom Right Corner Collision" + str(new_x) + " " + str(new_y))
            return

        if not pygame.mixer.Channel(2).get_busy():
            pygame.mixer.Channel(2).play(self.step_sound)  # step_sound is already a Sound object
        self.x = new_x
        self.y = new_y

Here is the source code with sprite collision detection implemented.

Z-Buffer

A Z-buffer is implemented as a storage location for all items (walls and sprites) that have to be drawn to the screen. The content of the Z-buffer is sorted by depth, ensuring that items are rendered in the correct sequence, and items behind other items are thus not visible to the player.

The Z-buffer is implemented as a list of dictionaries, with the structure of the dictionary defined as follows:

{'image': value, 'x': value, 'y':value, 'distance': value}

All walls and sprites to be drawn to the screen are added to the Z-buffer and sorted by distance from the player, starting with the items with the largest distance.

In the drawing.py file the following method is then used to sort the Z-buffer and draw its contents to the screen:

    def world(self, zbuffer):
        zbuffer = sorted(zbuffer, key=lambda k: k['distance'], reverse=True)
        # Sort items by distance so they are drawn in the correct sequence:
        # the farthest items first, so closer items are drawn over them.
        for item in zbuffer:
            self.screen.blit(item['image'], (item['x'], item['y']))
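The draw order can be illustrated with placeholder entries standing in for real surfaces:

```python
zbuffer = [
    {'image': 'near sprite', 'x': 0, 'y': 0, 'distance': 150},
    {'image': 'far wall',    'x': 0, 'y': 0, 'distance': 900},
    {'image': 'mid wall',    'x': 0, 'y': 0, 'distance': 400},
]
# Sort farthest-first, so closer items overwrite distant ones when blitted:
draw_order = sorted(zbuffer, key=lambda k: k['distance'], reverse=True)
print([item['image'] for item in draw_order])
# ['far wall', 'mid wall', 'near sprite']
```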

Multi-Angle Sprites

Multi-angle sprites are sprites where the image rendered to the screen changes based on the player’s viewing angle. This gives the illusion of a 3D object with a front, sides, and a back.

Instead of loading a single image for the sprite, multiple images are loaded into a list as follows:

[pygame.image.load(f'assets/images/sprites/enemy/zombie/{i}.png').convert_alpha() for i in range(4)]

In the above code snippet, four images are loaded (front, left, right, and back), which will result in a choppy rotation effect. Ideally, at least eight images (i.e., angles) would be used for a smoother effect.

In the constructor of the SpriteBase class the following is added:

        if not static:
            sprite_angle_delta = int(360 / len(self.sprite_object))  # Used to determine at what degree angle to
            # change the sprite image- this is based on the number of images loaded for the item.
            self.sprite_angles = [frozenset(range(i, i + sprite_angle_delta)) for i in range(0, 360, sprite_angle_delta)]
            self.sprite_positions = {angle: pos for angle, pos in zip(self.sprite_angles, self.sprite_object)}
            self.sprite_object = sprite_object[0]  # set a default image until correct one is selected

This is used to set the angles at which the image should be changed based on the number of images present and also set the sprite position based on the different angles.
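For example, with four images each bucket covers a 90-degree arc, and a viewing angle of 135 degrees selects the second image:

```python
n_images = 4  # e.g. front, left, back, right views of the sprite
sprite_angle_delta = int(360 / n_images)  # 90 degrees per image
sprite_angles = [frozenset(range(i, i + sprite_angle_delta))
                 for i in range(0, 360, sprite_angle_delta)]

print(len(sprite_angles))       # 4 buckets: 0-89, 90-179, 180-269, 270-359
print(135 in sprite_angles[1])  # True: 135 degrees falls in the 90-179 bucket
```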

The only other code needed to make this function is adding the following to the locate_sprite function in the SpriteBase class :

            if not self.static:
                if theta < 0:
                    theta += DOUBLE_PI
                theta = 360 - int(math.degrees(theta))

                for angles in self.sprite_angles:
                    if theta in angles:
                        self.sprite_object = self.sprite_positions[angles]
                        break

The above code selects the correct image from the list based on the player’s viewing angle.

Here is the complete sprite.py file with all the above changes included:

import common
from settings import *


class Sprites:
    def __init__(self):
        self.sprite_types = {
            'clock': pygame.image.load('assets/images/sprites/objects/Clock.png').convert_alpha(),
            'zombie': pygame.image.load('assets/images/sprites/enemy/zombie.png').convert_alpha(),
            'zombie360': [pygame.image.load(f'assets/images/sprites/enemy/zombie/{i}.png').convert_alpha() for i in range(4)],
        }

        self.list_of_sprites = [
            SpriteBase(self.sprite_types['clock'], True, (5, 10), 0.6, 1.1),
            SpriteBase(self.sprite_types['zombie'], True, (5, 12), 0.6, 1.1),
            SpriteBase(self.sprite_types['zombie360'], False, (14, 10), 0.6, 1.1),
        ]

        self.update_sprite_map()

    def update_sprite_map(self):
        self.sprite_map = {}  # used for collision detection with sprites - this will need to move when sprites can move
        for sprite in self.list_of_sprites:
            sprite_location = common.align_grid(sprite.x, sprite.y)
            self.sprite_map[sprite_location] = 'sprite'


class SpriteBase:
    def __init__(self, sprite_object, static, pos, shift, scale):
        self.sprite_object = sprite_object
        self.static = static
        self.pos = self.x, self.y = pos[0] * GRID_BLOCK, pos[1] * GRID_BLOCK
        self.shift = shift
        self.scale = scale

        if not static:
            sprite_angle_delta = int(360 / len(self.sprite_object))  # Used to determine at what degree angle to
            # change the sprite image- this is based on the number of images loaded for the item.
            self.sprite_angles = [frozenset(range(i, i + sprite_angle_delta)) for i in range(0, 360, sprite_angle_delta)]
            self.sprite_positions = {angle: pos for angle, pos in zip(self.sprite_angles, self.sprite_object)}
            self.sprite_object = sprite_object[0]  # set a default image until correct one is selected

    def locate_sprite(self, player):
        dx, dy = self.x - player.x, self.y - player.y
        distance_to_sprite = math.sqrt(dx ** 2 + dy ** 2)

        theta = math.atan2(dy, dx)
        gamma = theta - player.angle

        if dx > 0 and 180 <= math.degrees(player.angle) <= 360 or dx < 0 and dy < 0:
            gamma += DOUBLE_PI

        delta_rays = int(gamma / DELTA_ANGLE)
        current_ray = CENTER_RAY + delta_rays
        distance_to_sprite *= math.cos(HALF_FOV - current_ray * DELTA_ANGLE)

        sprite_ray = current_ray + SPRITE_RAYS
        if 0 <= sprite_ray <= SPRITE_RAYS_RANGE and distance_to_sprite > 30:
            projected_height = min(int(WALL_HEIGHT / distance_to_sprite * self.scale), resY*2)
            half_projected_height = projected_height // 2
            shift = half_projected_height * self.shift

            if not self.static:
                if theta < 0:
                    theta += DOUBLE_PI
                theta = 360 - int(math.degrees(theta))

                for angles in self.sprite_angles:
                    if theta in angles:
                        self.sprite_object = self.sprite_positions[angles]
                        break

            sprite = pygame.transform.scale(self.sprite_object, (projected_height, projected_height))
            return {'image': sprite, 'x': (current_ray * SCALE - half_projected_height), 'y': (HALF_HEIGHT - half_projected_height + shift), 'distance': distance_to_sprite}
        else:
            return None

Here is the source code with the Z-buffer and multi-angle sprites implemented.

Load Maps From File

To make changing the map and loading different maps easier, the map layout is now defined in a text file and loaded when needed.

The map.py file has been modified as per below:

from settings import *

game_map = []
with open('map/map01.txt') as f:
    for line in f:
        game_map.append(line.strip())

# map size
map_height = len(game_map) * GRID_BLOCK
map_width = len(game_map[0]) * GRID_BLOCK

world_map = {}
for j, row in enumerate(game_map):
    for i, char in enumerate(row):
        if char != '0':
            if char == '1':
                world_map[(i * GRID_BLOCK, j * GRID_BLOCK)] = '1'
            elif char == '2':
                world_map[(i * GRID_BLOCK, j * GRID_BLOCK)] = '2'
            elif char == '3':
                world_map[(i * GRID_BLOCK, j * GRID_BLOCK)] = '3'

The line.strip() call removes the newline character (and any other surrounding whitespace) from each line.

The text file where the map is defined is shown in the image below:

Here is the source code where map loading from a file is implemented.

The next things I am going to be working on are adding moving sprites and adding interactive elements to the game world, including doors that open and close. Keep an eye out for future posts that will cover new features I have implemented.


Developing a Raycasting ‘3D’ Engine Game in Python and PyGame – PART 1

I have started developing a raycasting game in Python (using PyGame) as a learning exercise and to get a better understanding of the math and techniques involved.

Raycasting is a graphic technique used to render pseudo-3D graphics based on a 2D game world. The best-known example of a raycasting engine used in a computer game is probably Wolfenstein 3D, developed by id Software in 1992.

So firstly, here are some resources I used to upskill and get my head around the topic:

YouTube tutorial series by Standalone Coder. These videos are in Russian, but the YouTube subtitles do a good enough job to follow along.

YouTube tutorial series by Code Monkey King.

Lode’s Computer Graphics Tutorial.

Lastly, I recommend the book Game Engine Black Book: Wolfenstein 3D by Fabien Sanglard, it is not an easy read, but it gives excellent insight into the development of Wolfenstein 3D and a great deal of information into the intricate details of Raycasting and texture mapping.

The Basics of Raycasting

The first thing to understand is that Raycasting is not true 3D, but rather rendering a 2D world in pseudo 3D. Therefore, all movement and game positions consist of only x and y positions, with no height or z positions.

The entire game world consists of a grid, with some blocks in the grid being populated with walls and others being empty. An example of this is shown in the picture below:

In the current version of the game, the world map is implemented as a list of strings, where each character in the string represents a block in the grid. The ‘0’ character represents an empty block, and all other numbers represent a wall. The numbers ‘1’, ‘2’, and ‘3’ select different wall textures, something covered later in this post.

game_map = [
    '11111111111111111111',
    '10000000000003330001',
    '10011100000000000001',
    '10030000000000000001',
    '10020000000000300001',
    '10020001110000000001',
    '10000330000000000001',
    '10000330000000000001',
    '10000330000000000001',
    '10000330000000000001',
    '10000330000000000001',
    '10000330000000000001',
    '10020000000000300001',
    '10020001110000000001',
    '10000330000000000001',
    '10000330000000000001',
    '10020000000000300001',
    '10020001110000000001',
    '10000330000000000001',
    '11111111111111111111'
]

This is then converted into a dictionary as follows:

world_map = {}
for j, row in enumerate(game_map):
    for i, char in enumerate(row):
        if char != '0':
            # Key: the block's world coordinates; value: its texture index
            world_map[(i * GRID_BLOCK, j * GRID_BLOCK)] = char

The player is placed on this grid, with x and y coordinates determining the player’s position. Along with these coordinates, the player also has a viewing angle, i.e., a direction the player is facing.

Now that we have the foundation in place, we can get to the raycasting.

To understand this concept, imagine a line originating from the player and heading off in the direction the player is facing.

Now, this is not an endless line, but rather a line that is extended step by step from one world grid line to the next (this is done with a loop).

At every point where this ‘ray’ intersects a grid line of the game world, a check is done to determine whether the block at that grid line is a wall.

If it is a wall, the loop expanding the line is stopped, and the x and y coordinates where the wall was intersected will be noted. We will use this a bit later when drawing the pseudo-3D rendering of the world.

The above is the simplest form of raycasting. However, a single ray will not give us a usable amount of information to do the pseudo-3D render with. This is where a player’s FOV (field of view) and more rays come in.

The player FOV is an angle on the game world, originating at the player and extending outwards in a triangular form. This determines where the player’s visible range begins and ends. For this game, I will use an FOV of 60° (i.e., π/3).

To change the FOV, the following can be used as a guide:

Radians   Degrees
π / 6     30°
π / 4     45°
π / 3     60°
π / 2     90°
π         180°

Within this FOV, several rays will be generated, exactly as per the single one in the example discussed earlier.

In this game, a value of 480 rays has been defined, which will be generated within the FOV, so the process above for a single ray will be repeated 480 times, with each ray cast having its angle increased by a marginal amount from the previous ray.

The angle of the first ray will be determined as follows:

Starting angle = Player Angle – Half the FOV

Where Player Angle is defined as the center of the direction the player is facing.

For each subsequent ray, the angle will be increased by a delta angle calculated as follows:

Delta Angle = FOV/Number of Rays

This will allow for a sufficient set of information to draw a pseudo-3D rendering from.
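The two formulas above can be sketched in a few lines of Python (FOV, NUM_RAYS, and player_angle are illustrative names here, not necessarily those used in the game source):

```python
import math

FOV = math.pi / 3        # 60 degree field of view
NUM_RAYS = 480
DELTA_ANGLE = FOV / NUM_RAYS          # Delta Angle = FOV / Number of Rays

player_angle = math.pi / 2            # example: the player facing 'up'
start_angle = player_angle - FOV / 2  # Starting angle = Player Angle - Half the FOV

# One angle per ray, swept across the field of view
ray_angles = [start_angle + ray * DELTA_ANGLE for ray in range(NUM_RAYS)]
```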

To see how this is implemented, please look at lines 6 to 39 in the raycasting.py file.

Sine and cosine functions are used to determine the intersecting coordinates; if you need a refresher on these functions, I recommend this web article from mathisfun.com.

For calculating the y coordinate where the ray intersects with a wall, the following formula is used:

y = (player y) + depth * sin(ray angle)

And to calculate the x coordinate where the ray intersects with a wall, the following formula is used:

x = (player x) + depth * cos(ray angle)

For the depth value in the above formulas, a sequence of numbers would usually be looped through, starting at 0 and ending at some defined maximum depth.

The above formulas would then be executed at each new depth level to get the corresponding x and y coordinates.

This does provide the desired results, but it is not very optimized.
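As a sketch of this naive approach (the function and its parameters are illustrative names, stepping the depth one unit at a time):

```python
import math

def cast_ray_naive(px, py, ray_angle, world_map, grid_block, max_depth):
    """Step the depth forward one unit at a time until a wall tile is hit."""
    sin_a, cos_a = math.sin(ray_angle), math.cos(ray_angle)
    for depth in range(max_depth):
        x = px + depth * cos_a          # x = player x + depth * cos(ray angle)
        y = py + depth * sin_a          # y = player y + depth * sin(ray angle)
        tile = ((int(x) // grid_block) * grid_block,
                (int(y) // grid_block) * grid_block)
        if tile in world_map:
            return depth, x, y          # wall hit
    return None                         # no wall within max_depth

# Player at (50, 50) looking straight right; a wall tile starts at x = 200:
print(cast_ray_naive(50, 50, 0.0, {(200, 0): '1'}, 100, 1000))  # → (150, 200.0, 50.0)
```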

To improve the performance of this operation, the Digital Differential Analyzer (DDA) algorithm will be used. At a high level, the DDA algorithm functions by not checking every pixel of the 2D game world for an intersection of a ray and a wall but only checking on the grid lines of the 2D world (the only place where walls can occur).

To implement the DDA algorithm, we are going to need four extra variables in conjunction with the Player x and y coordinates, namely:

dx and dy – these two variables determine the step to the next grid line. Based on the direction of the ray, each has a value of either 1 or -1.

gx and gy – the x and y coordinates of the grid lines that will be iterated through, starting with the grid line closest to the player’s x and y position. The initial value is determined using the following function, located in the common.py file:

def align_grid(x, y):
    return (x // GRID_BLOCK) * GRID_BLOCK, (y // GRID_BLOCK) * GRID_BLOCK

This ensures that the returned x and y coordinates are located on the closest grid line (based on the game world tile size). For reference, the // operator in Python is floor division and rounds the result down to the nearest whole number.
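For example, with a GRID_BLOCK of 100 (an assumed value for illustration):

```python
GRID_BLOCK = 100  # assumed tile size for this example

def align_grid(x, y):
    # Snap a point to the top-left corner of the grid tile containing it
    return (x // GRID_BLOCK) * GRID_BLOCK, (y // GRID_BLOCK) * GRID_BLOCK

print(align_grid(130, 257))   # → (100, 200)
print(align_grid(99.9, 300))  # → (0.0, 300)
```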

To determine the depth to the next y-axis grid line, the following equation will be used:

Depth Y = (gx – player x) / cos (ray angle)

And to determine the depth of the next x-axis grid line, this equation is used:

Depth X = (gy – player y) / sin (ray angle)

The two code blocks below implement what was just described. The first block determines intersections with walls on the y-axis of the world map:

        # checks for walls on y axis
        gx, dx = (xm + GRID_BLOCK, 1) if cos_a >= 0 else (xm, -1)
        for count in range(0, MAX_DEPTH, GRID_BLOCK):
            depth_y = (gx - px) / cos_a
            y = py + depth_y * sin_a
            tile_y = align_grid(gx + dx, y)
            if tile_y in world_map:
                # Ray has intersection with wall
                texture_y = world_map[tile_y]
                ray_col_y = True
                break
            gx += dx * GRID_BLOCK

And the next block of code determines intersections with walls on the x-axis of the world map:

        # checks for walls on x axis
        gy, dy = (ym + GRID_BLOCK, 1) if sin_a >= 0 else (ym, -1)
        for count in range(0, MAX_DEPTH, GRID_BLOCK):
            depth_x = (gy - py) / sin_a
            x = px + depth_x * cos_a
            tile_x = align_grid(x, gy + dy)
            if tile_x in world_map:
                # Ray has intersection with wall
                texture_x = world_map[tile_x]
                ray_col_x = True
                break
            gy += dy * GRID_BLOCK

texture_x and texture_y are used to store the index of the texture to display on the wall. We will cover this later in this post.
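Putting the pieces together, here is a self-contained sketch of the whole DDA cast. GRID_BLOCK, MAX_DEPTH, and the world map below are stand-in values for this example, not the game's real settings; the nearer of the two candidate intersections is the one ultimately used for rendering:

```python
import math

GRID_BLOCK = 100
MAX_DEPTH = 2000

def align_grid(x, y):
    return (x // GRID_BLOCK) * GRID_BLOCK, (y // GRID_BLOCK) * GRID_BLOCK

def cast_ray(px, py, angle, world_map):
    sin_a = math.sin(angle) or 0.000001  # avoid division by zero
    cos_a = math.cos(angle) or 0.000001
    xm, ym = align_grid(px, py)

    # Walls met on vertical grid lines
    depth_y = float('inf')
    gx, dx = (xm + GRID_BLOCK, 1) if cos_a >= 0 else (xm, -1)
    for _ in range(0, MAX_DEPTH, GRID_BLOCK):
        depth_y = (gx - px) / cos_a
        y = py + depth_y * sin_a
        if align_grid(gx + dx, y) in world_map:
            break
        gx += dx * GRID_BLOCK

    # Walls met on horizontal grid lines
    depth_x = float('inf')
    gy, dy = (ym + GRID_BLOCK, 1) if sin_a >= 0 else (ym, -1)
    for _ in range(0, MAX_DEPTH, GRID_BLOCK):
        depth_x = (gy - py) / sin_a
        x = px + depth_x * cos_a
        if align_grid(x, gy + dy) in world_map:
            break
        gy += dy * GRID_BLOCK

    return min(depth_x, depth_y)

# Player at (150, 150) looking straight right; a wall tile starts at x = 400:
print(cast_ray(150, 150, 0.0, {(400, 100): '1'}))  # → 250.0
```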


Now that we have the raycasting portion covered, which is the most complex, we can focus on simply rendering the pseudo-3D graphics to the screen.

At a very high level, the pseudo-3D graphics are created by drawing a rectangle for every ray that has intersected a wall. The x position of the rectangle is based on the angle of the ray, and the y position is determined by the distance of the wall from the player. The width of the rectangle equals the distance between the rays (calculated as window resolution width / number of rays), and the height is user-defined.

This creates a very basic pseudo-3D effect, but it looks much nicer with textured walls.

To implement textured walls the concept remains the same, but instead of just drawing rectangles, we will copy a small strip from a texture image and draw that to the screen instead.

In the code blocks above, there were two variables, texture_x and texture_y. Where a wall intersection occurred, these variables contain a value of ‘1’, ‘2’, or ‘3’ based on the value in the world map. These correspond to different textures, loaded into a dictionary as follows:

textures = {
    '1': pygame.image.load('images/textures/1.png').convert(),
    '2': pygame.image.load('images/textures/2.png').convert(),
    '3': pygame.image.load('images/textures/3.png').convert(),
    'S': pygame.image.load('images/textures/sky.png').convert()
}

First, the correct section of the texture needs to be selected based on the ray’s position on the wall. This is done as follows:

wall_column = textures[texture].subsurface(offset * TEXTURE_SCALE, 0, TEXTURE_SCALE, TEXTURE_HEIGHT)

Depending on whether it is an x-axis or a y-axis wall, the values are as follows:

For an x-axis wall:

texture = texture_x

offset = int(x) % GRID_BLOCK

Where x is the x coordinate of the wall intersection.

And for a y-axis wall:

texture = texture_y

offset = int(y) % GRID_BLOCK

Where y is the y coordinate of the wall intersection.

Next, the section of the texture needs to be resized correctly based on its distance from the player as follows:

wall_column = pygame.transform.scale(wall_column, (SCALE, projected_height))

Where the values are determined as below:

projected_height = min(int(WALL_HEIGHT / depth), 2 * resY)

resY = Window Resolution Height

For an x-axis wall:

depth = max(depth_x * math.cos(player_angle - cur_angle), 0.00001)


For a y-axis wall:

depth = max(depth_y * math.cos(player_angle - cur_angle), 0.00001)
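A numeric sketch of these two steps (WALL_HEIGHT and resY are illustrative values here; the cosine factor is the standard fisheye correction, converting the ray's Euclidean distance to a perpendicular distance so flat walls do not bow outward at the screen edges):

```python
import math

WALL_HEIGHT = 64 * 480  # illustrative constant, not the game's actual value
resY = 1080             # window resolution height

def wall_projected_height(raw_depth, player_angle, cur_angle):
    # Fisheye correction, clamped away from zero to avoid division errors
    depth = max(raw_depth * math.cos(player_angle - cur_angle), 0.00001)
    # Nearer walls project taller, capped at twice the window height
    return min(int(WALL_HEIGHT / depth), 2 * resY)

print(wall_projected_height(100, 0.0, 0.0))          # center ray → 307
print(wall_projected_height(100, 0.0, math.pi / 6))  # edge ray, same raw depth
```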

The last thing to do then is to draw the resized texture portion to the screen:

sc.blit(wall_column, (ray * SCALE, HALF_HEIGHT - projected_height // 2))

The above operations (copying a section of a texture, resizing it, and drawing it to the screen) are performed for every ray that intersects a wall.

The last thing to do, and by far the least complex, is to draw the skybox and the floor. The skybox is simply an image, loaded in the texture dictionary under the ‘S’ key, which is drawn to the screen in three blocks:

        sky_offset = -5 * math.degrees(angle) % resX
        self.screen.blit(self.textures['S'], (sky_offset, 0))
        self.screen.blit(self.textures['S'], (sky_offset - resX, 0))
        self.screen.blit(self.textures['S'], (sky_offset + resX, 0))

This ensures that no gap appears as the player turns and creates the impression of an endless sky.
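The wrap-around works because Python's % operator always returns a value in [0, resX); a small sketch with an assumed window width:

```python
import math

resX = 1920  # assumed window width

def sky_offset(angle):
    # Always lands in [0, resX), so the three blits at offset, offset - resX,
    # and offset + resX together always cover the full window width
    return -5 * math.degrees(angle) % resX

for angle in (0.0, 1.5, -2.0, 7.0):
    assert 0 <= sky_offset(angle) < resX
```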

Lastly, for the floor, a solid color rectangle is drawn as below:

pygame.draw.rect(self.screen, GREY, (0, HALF_HEIGHT, resX, HALF_HEIGHT)) 

For reference, the following PyGame functions are used in the game up to this point:

pygame.init
Used to initialize pygame modules and get them ready to use.

pygame.display.set_mode
Used to initialize a window to display the game.

pygame.image.load
Used to load an image file from the supplied path into a variable to be used when needed.

pygame.Surface.subsurface
Used to get a copy of a section of an image (surface) based on the supplied x position, y position, width, and height values.

pygame.transform.scale
Used to resize an image (surface) to the supplied width and height.

pygame.Surface.blit
Used to draw images to the screen.

pygame.display.flip
Used to update the full display Surface to the screen.

pygame.Surface.fill
Used to fill the display surface with a background color.

pygame.draw.rect
Used to draw a rectangle to the screen (used for the floor).

The pygame.key.get_pressed, pygame.event.get, and pygame.mouse methods are also used for user input.

Collision Detection

Because the game plays out in a 2D world, collision detection is rather straightforward.

The player has a square hitbox, and every time the player inputs a movement, the check_collision function is called with the new x and y positions the player wants to move to. The function then uses the new x and y positions to determine the player hitbox and check if it is in contact with any walls; if so, the move is not allowed. Otherwise, the player x and y positions are updated to the new positions.

Here is the check_collision function that forms part of the Player class:

def check_collision(self, new_x, new_y):
        player_location = mapping(new_x, new_y)
        if player_location in world_map:
            #  collision
            print("Center Collision " + str(new_x) + " " + str(new_y))
            return

        player_location = mapping(new_x - HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
        if player_location in world_map:
            #  collision
            print("Top Left Corner Collision " + str(new_x) + " " + str(new_y))
            return

        player_location = mapping(new_x + HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
        if player_location in world_map:
            #  collision
            print("Top Right Corner Collision " + str(new_x) + " " + str(new_y))
            return

        player_location = mapping(new_x - HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
        if player_location in world_map:
            #  collision
            print("Bottom Left Corner Collision " + str(new_x) + " " + str(new_y))
            return

        player_location = mapping(new_x + HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
        if player_location in world_map:
            #  collision
            print("Bottom Right Corner Collision " + str(new_x) + " " + str(new_y))
            return

        self.x = new_x
        self.y = new_y
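The five corner checks could equally be written as a loop over offsets. A standalone sketch of that idea (the constants, the mapping() helper, and the world map below are stand-ins for the game's own):

```python
GRID_BLOCK = 100
HALF_PLAYER_MARGIN = 12          # assumed hitbox half-size
world_map = {(100, 0): '1'}      # a single wall tile for the example

def mapping(x, y):
    # Snap a point to its containing grid tile, like align_grid
    return (int(x) // GRID_BLOCK * GRID_BLOCK, int(y) // GRID_BLOCK * GRID_BLOCK)

def check_collision(x, y, new_x, new_y):
    # Check the hitbox center plus its four corners in one loop
    m = HALF_PLAYER_MARGIN
    for dx, dy in [(0, 0), (-m, -m), (m, -m), (-m, m), (m, m)]:
        if mapping(new_x + dx, new_y + dy) in world_map:
            return x, y          # collision: keep the old position
    return new_x, new_y          # free: accept the move

print(check_collision(50, 50, 95, 50))  # a corner enters the wall tile → (50, 50)
print(check_collision(50, 50, 80, 50))  # still clear of the wall → (80, 50)
```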

Here is a video of the current version of the game in action:

The current version of this game is still a work in progress, but if you are interested, the source code can be downloaded here and the executable here.

Some of the next things on the to-do list are loading levels from the file, adding sprites to the game world, and adding some interactive world items, such as doors that open and close.

I will keep creating posts on this topic as I progress with this project.

Developing a Raycasting ‘3D’ Engine Game in Python and PyGame – PART 1

REVIEW – Audio-Technica ATH-M40x

A few months ago, I had to replace my daily-driver headphones when my Samson Z55s broke after nearly four years of everyday use (the bracket connecting one of the ear cups snapped off). After doing some research and being unable to source another Samson Z55, I decided on getting the Audio-Technica ATH-M40x.

The ATH-M40x are closed-back dynamic headphones with 40mm rare earth magnet drivers, with an impedance of 35 ohms, making them very easy to power.

The headphones have a frequency response of 15 – 24,000 Hz and are tuned flat for incredibly accurate sound monitoring across the entire frequency range, thus making them excellent studio reference headphones.

The headphones have a mainly plastic construction with a folding design, making them easy to pack away in a travel bag.

As with most decent headphones, the ATH-M40x has a detachable cable. The one thing to note is that the cable connects to the headphones via a 2.5mm jack instead of the 3.5mm jack found on many other headphones.

The ATH-M40x headphones have a very comfortable fit, except for the included ear pads, which I found too small and caused unpleasant pressure on my ears, a common problem I have found with most earpads included with headphones. I resolved this issue by replacing the earpads with the Brainwavz Hybrid Memory Foam Ear Pads, available on Amazon for around $25.

I enjoy the sound quality and tuning of the ATH-M40x, and after a few months of usage, I am impressed by the quality they offer, especially at the $99 price point. Although the ATH-M40x will not be to everyone’s taste, especially for people who prefer heavier bass, I can highly recommend them to anyone looking for comfortable, neutral headphones.

The Audio-Technica ATH-M40x is available on Amazon for $99.


3D PRINTING REVIEW – FILLAMENTUM PLA EXTRAFILL

Fillamentum is a Czech Republic-based company specializing in the manufacture of high-quality 3D printing filaments. Their PLA filament, which they call PLA Extrafill, is made of natural ingredients and can be biodegraded by industrial composting. PLA Extrafill is also safe for food contact applications.

Fillamentum PLA Extrafill is more expensive than many other companies’ PLA filaments, costing approximately $26 (USD) for 750 grams compared to approximately $28 (USD) for 1 kg of CCTREE filament.

Extrafill is available in diameters of 1.75 mm and 2.85 mm (with a diameter tolerance of ±0.05 mm) and in a wide variety of colors; I used “Traffic Black” for this review.

As with all PLA-based filaments, it has a recommended printing temperature of 190-210°C.

I experienced a great deal of difficulty successfully printing this PLA, far more than any other PLA I have used in the past. The PLA Extrafill kept clogging the 3D printer hot end with every single print. I tried various setting profiles in Cura; however, the result was always a clogged hot end. This was the case until I reduced the default retraction distance in Cura by a third, which rectified the clogging issue and allowed me to complete a few successful prints. However, reducing the retraction distance did result in a great deal of stringing, more than any other PLA I have ever used. I managed to reduce this by changing the travel and retraction speeds and reducing the print temperature to 180°C.

Here are some photos of my attempts to print the 3DBenchy model. They illustrate nicely the difficulties encountered.

As I kept refining the settings, I managed to get better results and eliminated more of the print issues I experienced.

Here are some pictures of a Judge Dredd bust with only slight drooping issues around the helmet.

I also printed a Desk organizer to store my 3D print finishing tools.

I finally managed to refine my setting to the point where I could print miniatures with a great level of detail.

The above picture shows the miniatures next to an AA battery for scale.

If anyone is interested in the Cura settings used to print these miniatures, you can download my Cura settings profile here. This was configured on Cura 4.8.0.

Fillamentum PLA Extrafill is capable of producing excellent results if you put in the work. However, given the difficulties I experienced and results no better than those of less expensive filaments, for example, eSun PLA+, I find Fillamentum PLA Extrafill very difficult to recommend.


2021 PROJECTS

In this post, I will cover some projects I have worked on over the last few months and some projects I have planned for the future.

Bipedal Robot


I am currently building a bipedal robot based on this Instructables post by K.Biagini. I used his design as a foundation and added additional components and functionality (such as arms and a piezo for sound).

I had to modify his 3D models to achieve what I wanted. Here are links to download my modified 3D models:
– Body Extension (to fit in the extra components) – Link
– Modified Head – Link
– Arms – Link

Here is a list of all the electronic components used:
– 1x Arduino Nano
– 6x micro servos
– 2x push buttons
– 1x mini toggle switch
– 1x 9V battery
– 1x ultrasonic sensor (HC-SR04)
– 1x RGB LED
– 1x Piezo

These components are connected as follows:

Pinout configuration of Arduino Nano:

Pin Number | Connected Hardware
2          | Ultrasonic Sensor Echo Pin
3          | RGB LED Red Pin
4          | Push Button 1
5          | RGB LED Green Pin
6          | RGB LED Blue Pin
7          | Push Button 2
8          | Servo Signal Pin (Right Hip)
9          | Servo Signal Pin (Right Ankle)
10         | Servo Signal Pin (Left Hip)
11         | Piezo
12         | Servo Signal Pin (Left Ankle)
13         | Ultrasonic Sensor Trigger Pin
14 (A0)    | Servo Signal Pin (Left Arm)
15 (A1)    | Servo Signal Pin (Right Arm)

This is still an in-progress project, especially from a coding perspective on the Arduino, but once I have completed it, I will create a post containing the complete source code.

Rotary Control

I needed a rotary control for another project discussed below, so I decided to build one as per this post on the Prusa Printers blog. It is based on an Arduino Pro Micro and uses a rotary encoder module.

I modified the code available on the Prusa blog to mimic keyboard WASD inputs. Turning the dial left and right inputs A and D, respectively. Pressing the dial’s push button switches to up and down inputs, so turning the dial left and right then inputs W and S.
Here is the modified code (based on the Prusa Printers blog post code):

#include <ClickEncoder.h>
#include <TimerOne.h>
#include <HID-Project.h>

#define ENCODER_CLK A0 
#define ENCODER_DT A1
#define ENCODER_SW A2

ClickEncoder *encoder; // variable representing the rotary encoder
int16_t last, value; // variables for current and last rotation value
bool upDown = false;
void timerIsr() {
  encoder->service();
}

void setup() {
  Serial.begin(9600); // Opens the serial connection
  Keyboard.begin();
  encoder = new ClickEncoder(ENCODER_DT, ENCODER_CLK, ENCODER_SW); 

  Timer1.initialize(1000); // Initializes the timer
  Timer1.attachInterrupt(timerIsr); 
  last = -1;
} 

void loop() {  
  value += encoder->getValue();

  if (value != last) { 
    if (upDown)
    {
      if (last < value) // Detecting the direction of rotation
        Keyboard.write('s');
      else
        Keyboard.write('w');
    }
    else
    {
      if (last < value) // Detecting the direction of rotation
        Keyboard.write('d');
      else
        Keyboard.write('a');
    }
    last = value; 
    Serial.print("Encoder Value: "); 
    Serial.println(value);
  }

  // This next part handles the rotary encoder BUTTON
  ClickEncoder::Button b = encoder->getButton(); 
  if (b != ClickEncoder::Open) {
    switch (b) {
      case ClickEncoder::Clicked: 
        upDown = !upDown;
      break;      
      
      case ClickEncoder::DoubleClicked: 
        
      break;      
    }
  }

  delay(10); 
}

I use the rotary control with a Raspberry Pi to control a camera pan-tilt mechanism. Here is a video showing it in action:

I will cover the purpose of the camera as well as the configuration and coding related to the pan-tilt mechanism later in this post.

Raspberry Pi Projects

Raspberry Pi and TensorFlow lite

TensorFlow is a deep learning library developed by Google that allows for the easy creation and implementation of machine learning models. There are many articles available online on creating such models, so I will not cover that process here.

At a high level, I created a basic object identification model on my Windows PC and then converted it to a TensorFlow Lite model that can run on a Raspberry Pi 4. When the TensorFlow Lite model runs on the Raspberry Pi, a video feed from the attached Raspberry Pi camera is shown, with green boxes around items the model has identified, a text label of what the model believes each object is, and a numerical percentage indicating the model’s confidence in the identification.

I have attached a 3-inch LCD screen (in a 3D printed housing) to the Raspberry Pi to show the video feed and object identification in real time.

The Raspberry Pi camera is mounted on a pan-tilt bracket driven by two micro servos and, as mentioned earlier, controlled via the rotary dial. The servos are driven by an Arduino Uno R3 connected to the Raspberry Pi 4 via USB. I initially connected the servos straight to the Raspberry Pi GPIO pins; however, this resulted in servo jitter. After numerous modifications and attempted fixes, I was not happy with the results, so I decided to use an Arduino Uno R3 to drive the servos instead and connect it to the Raspberry Pi via USB. I have always found hardware interfacing significantly easier with Arduino, and the results more consistent.

Here is a diagram of how the servos are connected to the Arduino Uno R3:

Below is the Arduino source code I wrote to control the servos. Instructions are sent to the Arduino through serial communication via USB, and the servos are adjusted accordingly.

#include <Servo.h>
#define SERVO1_PIN A2
#define SERVO2_PIN A3

Servo servo1;
Servo servo2;
String direction;
String key;
int servo1Pos = 0;
int servo2Pos = 0;

void setup()
{
  servo1Pos = 90;
  servo2Pos = 90;
  Serial.begin(9600);
  servo1.attach(SERVO1_PIN);
  servo2.attach(SERVO2_PIN);

  servo1.write(30);
  delay(500);
  servo1.write(180);
  delay(500);
  servo1.write(servo1Pos);
  delay(500);
  servo2.write(30);
  delay(500);
  servo2.write(150);
  delay(500);
  servo2.write(servo2Pos);
  delay(500);
  Serial.println("Started");
  servo1.detach();
  servo2.detach();
}

String readSerialPort()
{
  String msg = "";
  if (Serial.available()) {
    delay(10);
    // Serial.read() returns the received byte as an int, so assigning it
    // to a String yields its decimal value, e.g. "97" for 'a'
    msg = Serial.read();
    Serial.flush();
    msg.trim();
    Serial.println(msg);
  }
  return msg;
}

void loop()
{
  direction = "";
  direction = readSerialPort();
  //Serial.print("direction : " + direction);
  key = "";

  if (direction != "")
  {
    direction.trim();
    key = direction;

    servo1.attach(SERVO1_PIN);
    servo2.attach(SERVO2_PIN);

    if (key == "97")
    {
      if (servo2Pos > 30)
      {
        servo2Pos -= 10;
      }
      servo2.write(servo2Pos);
      delay(500);
      Serial.print("A");
    }

    else if (key == "115")
    {
      if (servo1Pos < 180)
      {
        servo1Pos += 10;
      }
      servo1.write(servo1Pos);
      delay(500);
      Serial.print("S");
    }

    else if (key == "119")
    {
      if (servo1Pos > 30)
      {
        servo1Pos -= 10;
      }
      servo1.write(servo1Pos);
      delay(500);
      Serial.print("W");
    }

    else if (key == "100")
    {
      if (servo2Pos < 150)
      {
        servo2Pos += 10;
      }
      servo2.write(servo2Pos);
      delay(500);
      Serial.print("D");
    }

    delay(100);
    servo1.detach();
    servo2.detach();
  }

}
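The comparisons against "97", "115", "119", and "100" in the sketch above work because Serial.read() returns the received byte as an int; those values are simply the ASCII codes of the characters the Raspberry Pi sends:

```python
# ASCII codes of the keys sent over serial, matching the Arduino comparisons
for key in 'aswd':
    print(key, '->', ord(key))  # a -> 97, s -> 115, w -> 119, d -> 100
```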

On the Raspberry Pi, the following Python script is used to transfer the rotary control input via serial communication to the Arduino:

# Import libraries
import serial
import time
import pygame

pygame.init()
screen = pygame.display.set_mode((1, 1))

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
    time.sleep(0.1)
    if arduino.isOpen():
        done = False
        while not done:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    done = True
                elif event.type == pygame.KEYDOWN:
                    if event.key == pygame.K_s:
                        arduino.write('s'.encode())

                    if event.key == pygame.K_w:
                        arduino.write('w'.encode())

                    if event.key == pygame.K_a:
                        arduino.write('a'.encode())

                    if event.key == pygame.K_d:
                        arduino.write('d'.encode())
            time.sleep(0.5)

print("Goodbye")

The next thing I want to implement on this project is face tracking using TensorFlow lite with automated camera movement.

Raspberry Pi Zero W Mini PC

I built a tiny PC using a Raspberry Pi Zero W combined with a RII RT-MWK01 V3 wireless mini keyboard and a 5-inch LCD display for Raspberry Pi with a 3D printed screen stand.


It is possible to run Quake 1 on the Raspberry Pi Zero by following the instructions in this GitHub repository, and it runs great.

Raspberry Pi Mini Server Rack

I have 3D printed a mini server rack and configured a four-Raspberry-Pi cluster consisting of three Raspberry Pi 3s and one Raspberry Pi 2, all networked via a basic five-port switch.

I am currently busy with a few different projects using the Pi cluster and will have some posts in the future going into some more details on these projects.

I developed a little Python application to monitor my different Raspberry Pis and show which ones are online (shown in green) and offline (shown in red).

The application pings each endpoint every 5 seconds, and it is also possible to click on an individual endpoint to ping it immediately. The list of endpoints is read from a CSV file, and it is easy to add additional endpoints. The UI is automatically updated on program startup with the endpoints listed in the CSV file.
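The post does not show the CSV layout, but based on how the application's loader skips the header row and reads row[0] and row[1], a file like this (with example names and addresses) would work:

```
name,address
pi3-node1,192.168.0.101
pi3-node2,192.168.0.102
pi3-node3,192.168.0.103
pi2-node1,192.168.0.104
```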

Here is the Python source code of the application:

import PySimpleGUI as sg
import csv
import time
import os
from apscheduler.schedulers.background import BackgroundScheduler


def ping(address):
    # Note: -n is the Windows ping count flag; use -c 1 on Linux/macOS
    response = os.system("ping -n 1 " + address)
    return response


def update_element(server):
    global window
    global layout
    response = ping(server.address)
    if response == 0:
        server.status = 1
        window.Element(server.name).Update(button_color=('white', 'green'))
        window.refresh()
    else:
        server.status = 0
        window.Element(server.name).Update(button_color=('white', 'red'))
        window.refresh()


def update_window():
    global serverlist
    for server in serverlist:
        update_element(server)


class server:
    def __init__(self, name, address, status):
        self.name = name
        self.address = address
        self.status = status


serverlist = []

with open('servers.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    line_count = 0
    for row in csv_reader:
        if line_count == 0:
            line_count += 1
        else:
            serverlist.append(server(row[0], row[1], 0))
            line_count += 1

layout = [
    [sg.Text("Server List:")],
]

for server in serverlist:
    layout.append([sg.Button('%s' % server.name, 
                    button_color=('white', 'orange'), 
                    key='%s' % server.name)])

window = sg.Window(title="KillerRobotics Server Monitor", 
                    layout=layout, margins=(100, 30))
window.finalize()
scheduler = BackgroundScheduler()
scheduler.start()

scheduler.add_job(update_window, 'interval', seconds=5, id='server_check_job')

while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        scheduler.remove_all_jobs()
        scheduler.shutdown()
        window.close()
        break
    elif event in [server.name for server in serverlist]:
        scheduler.pause()
        update_element([server for server in 
                         serverlist if server.name == event][0])
        scheduler.resume()

Raspberry Pi Pico

I ordered a few Raspberry Pi Picos on its release, and thus far, I am very impressed with this small and inexpensive microcontroller.

The Raspberry Pi Pico sells for $4 (USD) and has the following specifications:
– RP2040 microcontroller chip designed by Raspberry Pi
– Dual-core Arm Cortex-M0+ processor, flexible clock running up to 133 MHz
– 264KB on-chip SRAM
– 2MB on-board QSPI Flash
– 26 multifunction GPIO pins, including 3 analogue inputs
– 2 × UART, 2 × SPI controllers, 2 × I2C controllers, 16 × PWM channels
– 1 × USB 1.1 controller and PHY, with host and device support
– 8 × Programmable I/O (PIO) state machines for custom peripheral support
– Low-power sleep and dormant modes
– Accurate on-chip clock
– Temperature sensor
– Accelerated integer and floating-point libraries on-chip

It is a versatile little microcontroller that nicely fills the gap between Arduino and similar microcontrollers and the more traditional Raspberry Pis or similar single-board computers.

I have only scratched the surface of using the Pico on some really basic projects, but I have quite a few ideas for using it on more interesting projects in the future.

3D Printing

I ran into some problems with my 3D printer (Wanhao i3 Mini) over the last few months. The first problem was that half of the printer’s LCD display died, which was an annoyance, but the printer was still usable. The next issue, which was significantly more severe, was that the printer could no longer heat the hot end.

My first course of action was to replace both the heating cartridge and the thermistor to rule out either component; unfortunately, neither was to blame. After some diagnostics with a multimeter on the printer’s motherboard, I determined that no power was passing through to the heating cartridge connectors on the motherboard.

I ordered a replacement motherboard and installed it, and the 3D printer is working as good as new again. When I have some more time, I will try to diagnose the exact problem on the old motherboard and repair it.

Here are photos of the old motherboard I removed from the printer:

Below are some photos of a few things I have 3D printed the last few months:
