The following changes will be covered in this post:
Improved skybox.
A fix for a wall-rendering bug introduced with the functionality to look up and down.
A red transparent screen effect that acts as a damage indicator when an enemy touches the player.
Skybox Improvement
Due to the pattern and size of the image used for the skybox, an issue would occur where the image would suddenly switch to a different position. Although not game-breaking, it was somewhat jarring. To improve this, two changes needed to be made to the image used for the skybox:
The image's horizontal (X) resolution needed to be changed to match the display window resolution (in this case, 1920 pixels).
The image needed to be replaced with a seamless image, i.e., the two sides of the image aligned to create an infinitely repeating pattern of clouds.
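The post doesn't show the skybox drawing code, but the relationship between the two fixes can be sketched with a small (hypothetical) helper: because the sky image width now matches the display width, one full turn scrolls exactly one image width, so the wrap-around lands on the seamless edges.

```python
# Hypothetical sketch: map the player's angle to a horizontal offset into a
# seamless sky image whose width matches the display (1920 px).
import math

SKY_WIDTH = 1920  # must equal the display width, as described above

def sky_offset(player_angle):
    """Return the x offset (in pixels) at which to blit the sky texture."""
    # 360 degrees scrolls exactly one image width, so the wrap at 0/360
    # falls on the image's matching seamless edges - no visible jump.
    return -int(math.degrees(player_angle) * SKY_WIDTH / 360) % SKY_WIDTH
```

The modulo keeps the offset inside the image, so the texture repeats indefinitely as the player turns.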
Wall Rendering Bugfix
A bug was introduced with the functionality for the player to look up and down that caused the rendering of walls to become misaligned if the player's point of view was not vertically centered and the player was close to the wall in question. The image below shows an example of how the bug manifests:
This results from the game engine's limitations and the lack of a z-axis for proper spatial positioning of items. To get around this, I added automatic vertical centering of the player's field of view every time the player moves. This will not completely fix the issue, but it will make it occur far less frequently.
To implement this change, I added the following method in the Player class (in the player.py file):
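The method body itself is not reproduced in this excerpt; below is a minimal sketch of what level_out_view could look like, assuming the vertical look offset is stored in a hypothetical view_shift attribute that drifts back toward zero each call (the attribute name and step size are assumptions, not the post's actual code).

```python
RECENTER_STEP = 5  # hypothetical recentring speed, in pixels per call

class Player:
    # Stripped-down stub of the blog's Player class, for illustration only.
    def __init__(self):
        self.view_shift = 0  # vertical view offset; 0 means centered

    def level_out_view(self):
        # Nudge the vertical view offset back toward center a little each call.
        if self.view_shift > 0:
            self.view_shift = max(0, self.view_shift - RECENTER_STEP)
        elif self.view_shift < 0:
            self.view_shift = min(0, self.view_shift + RECENTER_STEP)
```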
And updated the keys_control method in the Player class as follows:
def keys_control(self, object_map, enemy_map):
    sin_a = math.sin(self.angle)
    cos_a = math.cos(self.angle)
    keys = pygame.key.get_pressed()
    if keys[pygame.K_ESCAPE]:
        exit()
    if keys[pygame.K_w]:
        nx = self.x + player_speed * cos_a
        ny = self.y + player_speed * sin_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
        self.level_out_view()
    if keys[pygame.K_s]:
        nx = self.x + -player_speed * cos_a
        ny = self.y + -player_speed * sin_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
        self.level_out_view()
    if keys[pygame.K_a]:
        nx = self.x + player_speed * sin_a
        ny = self.y + -player_speed * cos_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
        self.level_out_view()
    if keys[pygame.K_d]:
        nx = self.x + -player_speed * sin_a
        ny = self.y + player_speed * cos_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
        self.level_out_view()
    if keys[pygame.K_e]:
        self.interact = True
        self.level_out_view()
    if keys[pygame.K_LEFT]:
        self.angle -= 0.02
        self.level_out_view()
    if keys[pygame.K_RIGHT]:
        self.angle += 0.02
        self.level_out_view()
Player Visual Damage Indicator
A visual damage indicator is a way to let the player know they are taking damage. This will become more relevant at a later stage when the concept of health points is implemented, but for now, it provides a way of showing when an enemy is within touching range of the player. The number of enemies has also been increased to three to increase the chances of a damage event.
The Visual Damage Indicator is implemented by drawing a semi-transparent red rectangle over the screen whenever a collision between the player and the enemy is detected.
To check for these collisions, a new function was added to the common.py file:
def check_collision_enemy(x, y, map_to_check, margin):
    # Check the center and all four corners of the player's bounding box.
    location = align_grid(x, y)
    if location in map_to_check:
        # collision
        return True
    location = align_grid(x - margin, y - margin)
    if location in map_to_check:
        # collision
        return True
    location = align_grid(x + margin, y - margin)
    if location in map_to_check:
        # collision
        return True
    location = align_grid(x - margin, y + margin)
    if location in map_to_check:
        # collision
        return True
    location = align_grid(x + margin, y + margin)
    if location in map_to_check:
        # collision
        return True
    return False
This function is called from the keys_control method in the Player class:
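The call site itself is not reproduced in this excerpt; here is a self-contained sketch of how the function is used, with a simplified stand-in for the real align_grid helper from common.py (the 100-unit grid size is an assumption for illustration).

```python
GRID = 100  # hypothetical grid cell size

def align_grid(x, y):
    # Simplified stand-in for the real common.py helper: snap to a grid cell.
    return int(x // GRID) * GRID, int(y // GRID) * GRID

def check_collision_enemy(x, y, map_to_check, margin):
    # Same check as above, condensed: the center plus the four corners
    # of the player's bounding box.
    corners = [(0, 0), (-margin, -margin), (margin, -margin),
               (-margin, margin), (margin, margin)]
    return any(align_grid(x + cx, y + cy) in map_to_check for cx, cy in corners)

enemy_map = {(200, 300): 'enemy'}
player_hit = check_collision_enemy(250, 350, enemy_map, 10)  # player inside the enemy's cell
```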
RED_HIGHLIGHT is a tuple with four values. The first three represent the RGB color code, and the last indicates the transparency level, with 0 being completely transparent and 255 completely opaque. The convert_alpha method tells Pygame to draw the rectangle to the screen with the transparency effect applied.
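The drawing code itself is not shown in this excerpt; the overlay described above could be sketched as follows (the function name and alpha value are assumptions, not the post's actual code):

```python
# Hedged sketch of the damage overlay: a per-pixel-alpha surface filled with a
# translucent red and blitted over the whole frame.
import pygame

RED_HIGHLIGHT = (255, 0, 0, 80)  # R, G, B, alpha (0 = transparent, 255 = opaque)

def draw_damage_overlay(screen, width, height):
    # SRCALPHA gives the surface a per-pixel alpha channel, so the red wash
    # blends with the already-rendered scene underneath it.
    overlay = pygame.Surface((width, height), pygame.SRCALPHA)
    overlay.fill(RED_HIGHLIGHT)
    screen.blit(overlay, (0, 0))
```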
Here is a video of the effect in action:
The source code for everything discussed in the post can be downloaded here and the executable here.
I am taking a slight detour from the Raycasting series of posts (don’t worry, the next post in the series is coming soon) to cover another small project I have been working on: creating a Rubber Ducky using a Raspberry Pi Pico and CircuitPython.
A Rubber Ducky is a keystroke injection tool that is often disguised as a USB flash drive to trick an unsuspecting victim into plugging it into their computer. The computer recognizes the Rubber Ducky as a USB keyboard (and mouse, if required), and when it is plugged in, it executes a sequence of pre-programmed keystrokes against the target computer as if the user had typed them. The attack thus exploits the security roles and permissions assigned to the user logged in at the time. This is a good time to note that using a Rubber Ducky for dubious intents is illegal and a terrible idea, and I take no responsibility for the consequences if anyone chooses to use what they learn here to commit such acts.
To create the Rubber Ducky described in this post, you will need four things:
1. A Raspberry Pi Pico
2. A micro USB cable
3. CircuitPython
4. The Adafruit HID library for CircuitPython
First, you will need to install CircuitPython on your Raspberry Pi Pico. This link provides all the instructions and downloads you will require. Next, you will need to install the Adafruit HID library. Instructions on how to do this can be found here.
Now that all the prerequisites are installed and configured, the source code below can be deployed using the process described in the first link. The source code executes a sequence of keystrokes that opens Notepad on the target computer and types out a message. Note that the keystrokes are slowed down significantly to make what is happening visible to the user; typically, this would not be done with a real Rubber Ducky.
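The payload deployed in the post is not reproduced in this excerpt; below is a hedged sketch of what such a script could look like, using the Adafruit HID library's Keyboard, KeyboardLayoutUS, and Keycode APIs. The message text and delays are placeholders, and the script only runs on the Pico itself under CircuitPython.

```python
# Hedged sketch of a keystroke-injection payload: open Notepad via Win+R and
# slowly type a message. Runs only on a Pico with CircuitPython and the
# Adafruit HID library installed.
import time

import usb_hid
from adafruit_hid.keyboard import Keyboard
from adafruit_hid.keyboard_layout_us import KeyboardLayoutUS
from adafruit_hid.keycode import Keycode

keyboard = Keyboard(usb_hid.devices)
layout = KeyboardLayoutUS(keyboard)

time.sleep(2)  # give the host a moment to enumerate the "keyboard"

# Win+R opens the Run dialog on Windows.
keyboard.send(Keycode.GUI, Keycode.R)
time.sleep(1)

layout.write("notepad\n")
time.sleep(2)

# Type the message one character at a time so the injection is visible.
for ch in "Hello from the Rubber Ducky!":  # placeholder message
    layout.write(ch)
    time.sleep(0.25)
```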
So for 16 images, this would result in 22.5 degrees. The 0.5 is dropped because we use int(), and all operations dependent on sprite_angle_delta use an integer value.
This truncation results in a dead zone between 352 and 360 degrees, which caused the KeyError.
To fix this, the number of sprite images was reduced to 8, as 16 was unnecessary for the purposes we require in this scenario.
Alternatively, the sprite_angle_delta could have been changed to a float variable, and all the dependent operations could have been modified accordingly to facilitate this. However, this would have added unnecessary complexity for the functionality required in the game.
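The mechanism can be reproduced in isolation. The names below mirror the SpriteBase constructor shown later in the post, with placeholder strings standing in for the sprite images: range(0, 360, 22) produces a 17th bucket starting at 352, but zip() stops at the 16 images, so that last bucket never gets an entry in the lookup dict.

```python
images = [f'frame_{i}' for i in range(16)]   # stand-ins for the 16 sprite images
delta = int(360 / len(images))               # 22 - the 0.5 is truncated away
angle_buckets = [frozenset(range(i, i + delta)) for i in range(0, 360, delta)]
positions = {angles: image for angles, image in zip(angle_buckets, images)}
# 16 * 22 = 352, so a 17th bucket covering 352+ exists in angle_buckets but
# not in positions - looking it up for an angle of 352-359 raises a KeyError.
```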
Refactoring of Collision Detection Algorithm to be More Generic and Reusable
Firstly, the check_collision function was moved out of the Player class and into the common.py file. Next, the function was refactored as per the code below so that it returns either the existing x and y values (before the move) if a collision occurred or the new x and y values (after the move) if no collision was detected:
def check_collision(x, y, new_x, new_y, map_to_check, margin):
    location = align_grid(new_x, new_y)
    if location in map_to_check:
        # collision
        return x, y
    location = align_grid(new_x - margin, new_y - margin)
    if location in map_to_check:
        # collision
        return x, y
    location = align_grid(new_x + margin, new_y - margin)
    if location in map_to_check:
        # collision
        return x, y
    location = align_grid(new_x - margin, new_y + margin)
    if location in map_to_check:
        # collision
        return x, y
    location = align_grid(new_x + margin, new_y + margin)
    if location in map_to_check:
        # collision
        return x, y
    return new_x, new_y
The Player keys_control method was modified as per below to facilitate the new check_collision function:
def keys_control(self, object_map):
    sin_a = math.sin(self.angle)
    cos_a = math.cos(self.angle)
    keys = pygame.key.get_pressed()
    if keys[pygame.K_ESCAPE]:
        exit()
    if keys[pygame.K_w]:
        nx = self.x + player_speed * cos_a
        ny = self.y + player_speed * sin_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            # the move succeeded (no collision), so play the footstep sound
            self.play_sound(self.step_sound)
    if keys[pygame.K_s]:
        nx = self.x + -player_speed * cos_a
        ny = self.y + -player_speed * sin_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
    if keys[pygame.K_a]:
        nx = self.x + player_speed * sin_a
        ny = self.y + -player_speed * cos_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
    if keys[pygame.K_d]:
        nx = self.x + -player_speed * sin_a
        ny = self.y + player_speed * cos_a
        self.x, self.y = check_collision(self.x, self.y, nx, ny, object_map, HALF_PLAYER_MARGIN)
        if nx == self.x or ny == self.y:
            self.play_sound(self.step_sound)
    if keys[pygame.K_e]:
        self.interact = True
    if keys[pygame.K_LEFT]:
        self.angle -= 0.02
    if keys[pygame.K_RIGHT]:
        self.angle += 0.02
Where object_map is passed in from the main.py file and is created as follows:
object_map = {**sprites.sprite_map, **world_map}
object_map is thus a new dictionary that contains the combined entries of the sprite_map and world_map dictionaries.
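As a quick illustration of the {**a, **b} merge with toy values:

```python
sprite_map = {(3, 4): 'sprite'}
world_map = {(0, 0): 'wall', (3, 4): 'wall'}
object_map = {**sprite_map, **world_map}
# Later unpackings win on duplicate keys, so a grid cell present in both
# maps ends up with the world_map value.
```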
The check_collision function can now be easily used by enemies as well.
Basic Enemy Artificial Intelligence and Enemy Walking Animation
The enemy will, for now, only have very basic behavior and will try to move towards the player except if an obstacle is in the way.
A new Enemy class was created to accommodate this and is located in a new file called enemy.py. The contents of the enemy.py file:
from common import *


class Enemy:
    def __init__(self, x, y, subtype):
        self.x = x
        self.y = y
        self.subtype = subtype
        self.activated = False
        self.moving = False

    def move(self, player, object_map):
        new_x, new_y = player.x, player.y
        if self.activated:
            # Step one ENEMY_SPEED unit along each axis towards the player.
            if player.x > self.x:
                new_x = self.x + ENEMY_SPEED
            elif player.x < self.x:
                new_x = self.x - ENEMY_SPEED
            if player.y > self.y:
                new_y = self.y + ENEMY_SPEED
            elif player.y < self.y:
                new_y = self.y - ENEMY_SPEED
            self.x, self.y = check_collision(self.x, self.y, new_x, new_y, object_map, ENEMY_MARGIN)
            if (self.x == new_x) or (self.y == new_y):
                self.moving = True
            else:
                self.moving = False
Sprites have also now been given types and subtypes to help assign appropriate behavior. Sprites are now configured as per this code:
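The configuration block itself is not reproduced in this excerpt; a hedged sketch of its shape is below, using string placeholders where the real code loads pygame images and sounds. The keys follow the parameters read by the SpriteBase constructor shown later, but the specific values here are illustrative assumptions.

```python
from collections import deque

# Hedged sketch of one sprite's parameter dictionary; in the real code the
# 'sprite' and 'animation' entries hold pygame Surfaces, not file names.
zombie_parameters = {
    'sprite': [f'zombie_{i}.png' for i in range(8)],   # one image per viewing angle
    'shift': 0.6,                                      # vertical draw offset
    'scale': (0.6, 1.1),                               # (width, height) scaling
    'animation': deque(['walk_0.png', 'walk_1.png']),  # walking animation frames
    'animation_distance': 800,                         # start animating within this range
    'animation_speed': 10,                             # ticks per animation frame
    'type': 'enemy',
    'subtype': 'zombie',
    'viewing_angles': True,                            # multi-angle sprite
    'interactive': False,
    'interaction_sound': None,
}
```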
The update_sprite_map method has been modified to include enemy flags for where enemies are located. This will be used in the future when enemies can damage the player:
def update_sprite_map(self):
    self.sprite_map = {}  # used for collision detection with sprites - this will need to move when sprites can move
    self.enemy_map = {}
    for sprite in self.list_of_sprites:
        if not sprite.delete and sprite.type != 'enemy':
            sprite_location = common.align_grid(sprite.x, sprite.y)
            self.sprite_map[sprite_location] = 'sprite'
        elif not sprite.delete and sprite.type == 'enemy':
            enemy_location = common.align_grid(sprite.x, sprite.y)
            self.enemy_map[enemy_location] = 'enemy'
The SpriteBase __init__ and locate_sprite methods had to be modified to implement the new Enemy class, and to add logic that determines whether the enemy is moving so that the images loaded under the animation variable can be used to create a walking animation.
Here is the code of the __init__ and locate_sprite methods:
def __init__(self, parameters, pos):
    self.sprite_object = parameters['sprite']
    self.shift = parameters['shift']
    self.scale = parameters['scale']
    self.animation = parameters['animation'].copy()
    self.animation_distance = parameters['animation_distance']
    self.animation_speed = parameters['animation_speed']
    self.type = parameters['type']
    self.subtype = parameters['subtype']
    self.viewing_angles = parameters['viewing_angles']
    self.animation_count = 0
    self.pos = self.x, self.y = pos[0] * GRID_BLOCK, pos[1] * GRID_BLOCK
    self.interact_trigger = False
    self.previous_position_y = self.y
    self.previous_position_x = self.x
    self.delete = False
    self.interactive = parameters['interactive']
    self.interaction_sound = parameters['interaction_sound']
    if self.type == 'enemy':
        self.object = Enemy(self.x, self.y, self.subtype)
    else:
        self.object = None
    if self.viewing_angles:
        sprite_angle_delta = int(360 / len(self.sprite_object))  # Used to determine at what degree angle to
        # change the sprite image - this is based on the number of images loaded for the item.
        self.sprite_angles = [frozenset(range(i, i + sprite_angle_delta)) for i in
                              range(0, 360, sprite_angle_delta)]
        self.sprite_positions = {angle: pos for angle, pos in zip(self.sprite_angles, self.sprite_object)}
        self.sprite_object = self.sprite_object[0]  # set a default image until correct one is selected

def locate_sprite(self, player, object_map):
    if self.object:
        self.object.move(player, object_map)
    dx, dy = self.x - player.x, self.y - player.y
    self.distance_to_sprite = math.sqrt(dx ** 2 + dy ** 2)
    theta = math.atan2(dy, dx)
    gamma = theta - player.angle
    if dx > 0 and 180 <= math.degrees(player.angle) <= 360 or dx < 0 and dy < 0:
        gamma += DOUBLE_PI
    delta_rays = int(gamma / DELTA_ANGLE)
    current_ray = CENTER_RAY + delta_rays
    self.distance_to_sprite *= math.cos(HALF_FOV - current_ray * DELTA_ANGLE)
    sprite_ray = current_ray + SPRITE_RAYS
    if 0 <= sprite_ray <= SPRITE_RAYS_RANGE and self.distance_to_sprite > 30:
        projected_height = min(int(WALL_HEIGHT / self.distance_to_sprite), resY * 2)
        sprite_width = int(projected_height * self.scale[0])
        sprite_height = int(projected_height * self.scale[1])
        half_sprite_width = sprite_width // 2
        half_sprite_height = sprite_height // 2
        shift = half_sprite_height * self.shift
        if self.interact_trigger:
            self.interact()
            if self.interaction_sound and not self.delete:
                if not pygame.mixer.Channel(3).get_busy():
                    pygame.mixer.Channel(3).play(pygame.mixer.Sound(self.interaction_sound))
        if self.viewing_angles:
            if theta < 0:
                theta += DOUBLE_PI
            theta = 360 - int(math.degrees(theta))
            if self.type == "enemy":
                if self.object.activated:
                    theta = 0
            for angles in self.sprite_angles:
                if theta in angles:
                    self.sprite_object = self.sprite_positions[angles]
                    break
        if self.animation and self.distance_to_sprite < self.animation_distance:
            if self.type == 'enemy':
                if self.object.moving:
                    self.sprite_object = self.animation[0]
            else:
                self.sprite_object = self.animation[0]
            if self.animation_count < self.animation_speed:
                self.animation_count += 1
            else:
                self.animation.rotate()
                self.animation_count = 0
        sprite = pygame.transform.scale(self.sprite_object, (sprite_width, sprite_height))
        if not self.delete:
            if (self.type == 'enemy') and self.object:
                self.object.activated = True
                self.pos = self.x, self.y = self.object.x, self.object.y
            return {'image': sprite, 'x': (current_ray * SCALE - half_sprite_width),
                    'y': (HALF_HEIGHT - half_sprite_height + shift), 'distance': self.distance_to_sprite}
        else:
            if (self.type == 'enemy') and self.object:
                self.object.activated = False
                self.pos = self.x, self.y = self.object.x, self.object.y
            return None
    else:
        return None
The source code for everything discussed in the post can be downloaded here and the executable here.
The ‘-1’ parameter in the play function sets the music to loop, so when the track has completed playing, it will start playing from the beginning again.
Animated Sprites and Scaling of Sprites
To facilitate the additional values required to implement animated sprites as well as separate width and height scaling, the definition of the parameters of each sprite is now handled in a dictionary as below:
scale is now a tuple containing a value for width and height scaling values separately.
Additionally, the following values were added, which relate to the animation of sprites:
animation – if the sprite is an animated sprite, this will contain a list of images used in rendering the animation. The images used for the animation are loaded into a double-ended queue. This is a queue structure where data can be added and removed from the queue at both ends.
animation_distance – at which distance from the player the animation will start being rendered.
animation_speed – the speed at which the animation will be played.
type – used to determine the type of the sprite.
The next two variables will be used later in this post when we discuss interactive sprites (doors), they are:
interactive – which is set for whether a sprite can be interacted with or not.
interaction_sound – stores an audio file that will be played when interaction with the sprite is triggered.
The implementation of how sprites are scaled has been changed to scale the width and height of the sprite separately. This allows for more accurate scaling and fixes the distortion of sprites that have a non-symmetrical aspect ratio.
The below code has been added in the sprite.py file:
The following logic has been added to the locate_sprite function in the sprite.py file to play the animation:
if self.animation and self.distance_to_sprite < self.animation_dist:
    self.sprite_object = self.animation[0]
    if self.animation_count < self.animation_speed:
        self.animation_count += 1
    else:
        self.animation.rotate()
        self.animation_count = 0
In the logic above, the sprite object that will be rendered to the screen is set to the first item in the double-ended queue, and once the animation speed counter is exceeded, the queue is rotated. Note that rotate() with its default argument shifts every item one position to the right, moving the last item to the front of the queue; either direction cycles through all the frames continuously.
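The rotation is easy to see with a small deque on its own:

```python
from collections import deque

frames = deque(['f0', 'f1', 'f2'])
first_drawn = frames[0]  # 'f0' is the frame rendered this tick
frames.rotate()          # default n=1 shifts right: the last item moves to the front
# frames is now deque(['f2', 'f0', 'f1']); rotate(-1) would instead move the
# first item to the back.
```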
Fix for the Distortion of Wall Textures
There was a distortion of wall textures that occurred if the player moved too close to the walls. The issue arose because the projected wall height became larger than the screen height at that point, and it was rectified by modifying the raycasting function as per below:
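The modified raycasting code is not reproduced in this excerpt; a hedged sketch of the usual fix follows, assuming textures TEXTURE_HEIGHT pixels tall and a screen resY pixels tall (both names and values are illustrative). Instead of scaling the whole texture past the screen height, only the visible central band of the texture is sampled.

```python
TEXTURE_HEIGHT = 1200  # hypothetical texture height in pixels
resY = 1080            # hypothetical screen height in pixels

def visible_texture_strip(projected_height):
    """Return (y_offset, strip_height) of the texture portion to sample."""
    if projected_height <= resY:
        return 0, TEXTURE_HEIGHT  # the whole wall column fits on screen
    # Only resY / projected_height of the wall is visible, so sample the
    # matching central band of the texture and scale that to the screen.
    visible = TEXTURE_HEIGHT * resY / projected_height
    offset = (TEXTURE_HEIGHT - visible) / 2
    return int(offset), int(visible)
```

In Pygame, the returned band would typically be cut out with Surface.subsurface before scaling.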
To implement interactivity in the game world, a few changes have to be implemented.
Firstly, a new variable called interact needed to be added to the Player class. This is a Boolean value that is set to true when the player presses the ‘e’ key. Here is the updated player.py file:
from common import *
from map import *


class Player:
    def __init__(self):
        player_pos = ((map_width / 2), (map_height / 2))
        self.x, self.y = player_pos
        self.angle = player_angle
        self.sensitivity = 0.001
        self.step_sound = pygame.mixer.Sound('assets/audio/footstep.wav')
        self.interact = False
        pygame.mixer.Channel(2).set_volume(0.2)

    @property
    def pos(self):
        return (self.x, self.y)

    def movement(self, sprite_map):
        self.keys_control(sprite_map)
        self.mouse_control()
        self.angle %= DOUBLE_PI  # Convert player angle to 0-360 degree values

    def check_collision(self, new_x, new_y, sprite_map):
        player_location = align_grid(new_x, new_y)
        if player_location in world_map or player_location in sprite_map:
            # collision
            print("Center Collision" + str(new_x) + " " + str(new_y))
            return
        player_location = align_grid(new_x - HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            # collision
            print("Top Left Corner Collision" + str(new_x) + " " + str(new_y))
            return
        player_location = align_grid(new_x + HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            # collision
            print("Top Right Corner Collision" + str(new_x) + " " + str(new_y))
            return
        player_location = align_grid(new_x - HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            # collision
            print("Bottom Left Corner Collision" + str(new_x) + " " + str(new_y))
            return
        player_location = align_grid(new_x + HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
        if player_location in world_map or player_location in sprite_map:
            # collision
            print("Bottom Right Corner Collision" + str(new_x) + " " + str(new_y))
            return
        if not pygame.mixer.Channel(2).get_busy():
            pygame.mixer.Channel(2).play(pygame.mixer.Sound(self.step_sound))
        self.x = new_x
        self.y = new_y

    def keys_control(self, sprite_map):
        sin_a = math.sin(self.angle)
        cos_a = math.cos(self.angle)
        keys = pygame.key.get_pressed()
        if keys[pygame.K_ESCAPE]:
            exit()
        if keys[pygame.K_w]:
            nx = self.x + player_speed * cos_a
            ny = self.y + player_speed * sin_a
            self.check_collision(nx, ny, sprite_map)
        if keys[pygame.K_s]:
            nx = self.x + -player_speed * cos_a
            ny = self.y + -player_speed * sin_a
            self.check_collision(nx, ny, sprite_map)
        if keys[pygame.K_a]:
            nx = self.x + player_speed * sin_a
            ny = self.y + -player_speed * cos_a
            self.check_collision(nx, ny, sprite_map)
        if keys[pygame.K_d]:
            nx = self.x + -player_speed * sin_a
            ny = self.y + player_speed * cos_a
            self.check_collision(nx, ny, sprite_map)
        if keys[pygame.K_e]:
            self.interact = True
        if keys[pygame.K_LEFT]:
            self.angle -= 0.02
        if keys[pygame.K_RIGHT]:
            self.angle += 0.02

    def mouse_control(self):
        if pygame.mouse.get_focused():
            difference = pygame.mouse.get_pos()[0] - HALF_WIDTH
            pygame.mouse.set_pos((HALF_WIDTH, HALF_HEIGHT))
            self.angle += difference * self.sensitivity
Next, we need to implement a new class called Interaction. This class is implemented in the interactions.py file.
In this class, a function called interaction_world_objects is defined. This function first checks if the player has pressed the interact button (‘e’) and, if so, iterates through each sprite in the game world, checking that the sprite’s distance from the player is within range. If the sprite is in range and it is an interactive sprite, the sprite’s interact_trigger variable will be set to true.
Here is the code contained in the interactions.py file:
from settings import *
from common import *


class Interactions:
    def __init__(self, player, sprites, drawing):
        self.player = player
        self.sprites = sprites
        self.drawing = drawing

    def interaction_world_objects(self):
        if self.player.interact:
            for obj in sorted(self.sprites.list_of_sprites, key=lambda obj: obj.distance_to_sprite):
                px, py = align_grid(self.player.x, self.player.y)
                sx, sy = align_grid(obj.x, obj.y)
                x_dist = px - sx
                y_dist = py - sy
                print('x distance : ' + str(x_dist))
                print('y distance : ' + str(y_dist))
                if obj.interactive:
                    if ((-INTERACTION_RANGE <= x_dist <= INTERACTION_RANGE) and (
                            -INTERACTION_RANGE <= y_dist <= INTERACTION_RANGE)) and not obj.interact_trigger:
                        obj.interact_trigger = True
Lastly, the sprite.py file needs to be updated. First, a check must be done in the locate_sprite function to see if the sprite’s interact_trigger value has been set to true:
if self.interact_trigger:
    self.interact()
    if self.interaction_sound and not self.delete:
        if not pygame.mixer.Channel(3).get_busy():
            pygame.mixer.Channel(3).play(pygame.mixer.Sound(self.interaction_sound))
This calls the sprite’s interact function and plays the audio file associated with the sprite’s interaction.
The interact function, as shown below, determines the type of the sprite and performs an action based on it:
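The interact method itself is not reproduced in this excerpt; a minimal sketch of the described behavior follows, assuming a door subtype that is removed from the world (i.e. "opened") when triggered. The stub class and the exact actions are assumptions, not the post's actual code.

```python
class SpriteStub:
    # Stripped-down stand-in for SpriteBase, for illustration only.
    def __init__(self, subtype):
        self.subtype = subtype
        self.delete = False
        self.interact_trigger = False

    def interact(self):
        if self.subtype == 'door':
            # Deleting the sprite removes it from rendering and from the
            # collision map, which reads as the door opening.
            self.delete = True
        self.interact_trigger = False
```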
In this post, the addition of the following features to the game engine will be covered:
General Enhancements (Making the numbers of rays scale based on the resolution and splitting the raycasting and drawing functionality into separate functions).
Addition of Sound (for now, only footsteps).
Hiding the mouse cursor.
Add a fullscreen flag (set to run game in fullscreen or window mode).
Adding static flat sprites (same image from all directions).
Convert player angle to 0-360 degree angle (removing the potential for negative values).
Add collision detection with sprites.
Implement a Z-buffer.
Add multi-angle sprites (different images from different viewing angles).
General Enhancements
To scale the number of rays to the resolution the following logic is added in the settings.py file:
NUM_RAYS = int(resX / 4) # Would work with all standard resolutions
All logic related to drawing images to the screen has now been removed from the raycasting function and moved to the drawing.py file. This is done for future extensibility and to facilitate the drawing of items other than walls.
Sound
In order to add the sound of footsteps, the sound clip needs to be loaded into a variable:
Then, every time the player moves, the clip is played. First, however, a check is done to ensure the sound is not already playing; this avoids the sound playing over itself and resulting in an audio mess:
if not pygame.mixer.Channel(2).get_busy():
    pygame.mixer.Channel(2).play(pygame.mixer.Sound(step_sound))
Hide Mouse Cursor and Fullscreen
To hide the mouse cursor and add a fullscreen flag, the following code was added to the main.py file:
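The exact main.py snippet is not shown in this excerpt; a hedged sketch of the usual Pygame approach follows, with SET_FULLSCREEN normally imported from settings.py and the resolution values as placeholders.

```python
import pygame

SET_FULLSCREEN = True   # normally defined in settings.py
resX, resY = 1920, 1080

pygame.init()
flags = pygame.FULLSCREEN if SET_FULLSCREEN else 0
screen = pygame.display.set_mode((resX, resY), flags)
pygame.mouse.set_visible(False)  # hide the OS cursor over the game window
```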
With the SET_FULLSCREEN flag being defined and set in the settings.py file.
Here is the source code with the changes up to this point.
Static Sprites
The next major thing added was static sprites.
Let us now examine how sprites are rendered in the game engine.
Sprites are image files (png files with transparency) that are scaled and positioned to create the appearance of a tangible object in the pseudo-3D world.
The image below illustrates the values at play for determining the sprite positioning and scaling:
Thus
gamma (γ) = theta (θ) – player angle(a)
and
theta (θ) = atan2(dy, dx)
where dx and dy are the x and y distances from the player to the sprite, and atan2 returns the arctangent of the point (dy, dx) in radians, with a value between -π and π.
The distance from the player to the sprite is calculated as follows:
Distance to Sprite (d) = sqrt(dx ** 2 + dy ** 2) * cos(HALF_FOV – current_ray * DELTA_ANGLE)
The logic for this is implemented in the sprite.py file.
For the above logic to function, the player.angle needs to have a value of 0 to 360. This is done by adding the following line to the movement function in the Player class:
self.angle %= DOUBLE_PI
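The effect of the modulo is easy to see in isolation: Python's % always returns a non-negative result for a positive divisor, so a negative heading wraps back into the 0 to 2π range.

```python
import math

DOUBLE_PI = 2 * math.pi
angle = -0.5          # heading went slightly negative after turning left
angle %= DOUBLE_PI    # wraps to DOUBLE_PI - 0.5, inside [0, DOUBLE_PI)
```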
Here is the source code with static sprite feature added.
Sprite Collision Detection
The next feature added was collision detection with sprites. This functions in the same way as collision detection with walls.
A new dictionary similar to world_map, called sprite_map, was created; it is used to store the location of all sprites in the game world.
self.sprite_map = {} # used for collision detection with sprites
sprite_location = common.align_grid(sprite.x, sprite.y)
self.sprite_map[sprite_location] = 'sprite'
Next, the player collision detection function was updated as below:
def check_collision(self, new_x, new_y, sprite_map):
    player_location = align_grid(new_x, new_y)
    if player_location in world_map or player_location in sprite_map:
        # collision
        print("Center Collision" + str(new_x) + " " + str(new_y))
        return
    player_location = align_grid(new_x - HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
    if player_location in world_map or player_location in sprite_map:
        # collision
        print("Top Left Corner Collision" + str(new_x) + " " + str(new_y))
        return
    player_location = align_grid(new_x + HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
    if player_location in world_map or player_location in sprite_map:
        # collision
        print("Top Right Corner Collision" + str(new_x) + " " + str(new_y))
        return
    player_location = align_grid(new_x - HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
    if player_location in world_map or player_location in sprite_map:
        # collision
        print("Bottom Left Corner Collision" + str(new_x) + " " + str(new_y))
        return
    player_location = align_grid(new_x + HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
    if player_location in world_map or player_location in sprite_map:
        # collision
        print("Bottom Right Corner Collision" + str(new_x) + " " + str(new_y))
        return
    if not pygame.mixer.Channel(2).get_busy():
        pygame.mixer.Channel(2).play(pygame.mixer.Sound(self.step_sound))
    self.x = new_x
    self.y = new_y
Here is the source code with sprite collision detection implemented.
Z-Buffer
A Z-buffer is implemented as a storage location for all items (walls and sprites) that have to be drawn to the screen. The content of the Z-buffer is sorted by depth, ensuring that items are rendered in the correct sequence, and items behind other items are thus not visible to the player.
The Z-buffer is implemented as a list of dictionaries, with the structure of the dictionary defined as follows:
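The dictionary layout is not reproduced at this exact point in the excerpt; from the drawing code that consumes it, each Z-buffer entry has this shape (the values here are placeholders):

```python
zbuffer_item = {
    'image': None,      # the pygame Surface to blit (placeholder here)
    'x': 120,           # screen x coordinate to draw at
    'y': 260,           # screen y coordinate to draw at
    'distance': 412.5,  # distance from the player, used for depth sorting
}
```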
All walls and sprites to be drawn to the screen are added to the Z-buffer and sorted by distance from the player, starting with the items with the largest distance.
In the drawing.py file the following method is then used to sort the Z-buffer and draw its contents to the screen:
def world(self, zbuffer):
    zbuffer = sorted(zbuffer, key=lambda k: k['distance'], reverse=True)
    # Sort items by distance so they are drawn in the correct sequence,
    # i.e. a closer item is drawn over (in front of) a farther one.
    for item in zbuffer:
        self.screen.blit(item['image'], (item['x'], item['y']))
Multi-Angle Sprites
Multi-angle sprites are sprites where the image rendered to the screen changes based on the player’s viewing angle. This gives the illusion of a 3D object with a front, sides, and a back.
Instead of loading a single image for the sprite, multiple images are loaded into a list as follows:
[pygame.image.load(f'assets/images/sprites/enemy/zombie/{i}.png').convert_alpha() for i in range(4)]
In the above code snippet, four images are loaded (front, left, right, and back), which will result in a choppy rotation effect. Ideally, at least eight images (i.e., angles) would be used for a smoother effect.
In the constructor of the SpriteBase class the following is added:
if not static:
    sprite_angle_delta = int(360 / len(self.sprite_object))  # Used to determine at what degree angle to
    # change the sprite image - this is based on the number of images loaded for the item.
    self.sprite_angles = [frozenset(range(i, i + sprite_angle_delta)) for i in range(0, 360, sprite_angle_delta)]
    self.sprite_positions = {angle: pos for angle, pos in zip(self.sprite_angles, self.sprite_object)}
    self.sprite_object = sprite_object[0]  # set a default image until correct one is selected
This is used to set the angles at which the image should be changed based on the number of images present and also set the sprite position based on the different angles.
The only other code needed to make this function is adding the following to the locate_sprite function in the SpriteBase class:
if not self.static:
    if theta < 0:
        theta += DOUBLE_PI
    theta = 360 - int(math.degrees(theta))
    for angles in self.sprite_angles:
        if theta in angles:
            self.sprite_object = self.sprite_positions[angles]
            break
The above code selects the correct image from the list based on the player’s viewing angle.
Here is the complete sprite.py file with all the above changes included:
import common
from settings import *


class Sprites:
    def __init__(self):
        self.sprite_types = {
            'clock': pygame.image.load('assets/images/sprites/objects/Clock.png').convert_alpha(),
            'zombie': pygame.image.load('assets/images/sprites/enemy/zombie.png').convert_alpha(),
            'zombie360': [pygame.image.load(f'assets/images/sprites/enemy/zombie/{i}.png').convert_alpha() for i in range(4)],
        }
        self.list_of_sprites = [
            SpriteBase(self.sprite_types['clock'], True, (5, 10), 0.6, 1.1),
            SpriteBase(self.sprite_types['zombie'], True, (5, 12), 0.6, 1.1),
            SpriteBase(self.sprite_types['zombie360'], False, (14, 10), 0.6, 1.1),
        ]
        self.update_sprite_map()

    def update_sprite_map(self):
        self.sprite_map = {}  # used for collision detection with sprites - this will need to move when sprites can move
        for sprite in self.list_of_sprites:
            sprite_location = common.align_grid(sprite.x, sprite.y)
            self.sprite_map[sprite_location] = 'sprite'


class SpriteBase:
    def __init__(self, sprite_object, static, pos, shift, scale):
        self.sprite_object = sprite_object
        self.static = static
        self.pos = self.x, self.y = pos[0] * GRID_BLOCK, pos[1] * GRID_BLOCK
        self.shift = shift
        self.scale = scale
        if not static:
            sprite_angle_delta = int(360 / len(self.sprite_object))  # Degree interval at which to switch
            # the sprite image - based on the number of images loaded for the item.
            self.sprite_angles = [frozenset(range(i, i + sprite_angle_delta)) for i in range(0, 360, sprite_angle_delta)]
            self.sprite_positions = {angle: pos for angle, pos in zip(self.sprite_angles, self.sprite_object)}
            self.sprite_object = sprite_object[0]  # set a default image until the correct one is selected

    def locate_sprite(self, player):
        dx, dy = self.x - player.x, self.y - player.y
        distance_to_sprite = math.sqrt(dx ** 2 + dy ** 2)
        theta = math.atan2(dy, dx)
        gamma = theta - player.angle
        if dx > 0 and 180 <= math.degrees(player.angle) <= 360 or dx < 0 and dy < 0:
            gamma += DOUBLE_PI
        delta_rays = int(gamma / DELTA_ANGLE)
        current_ray = CENTER_RAY + delta_rays
        distance_to_sprite *= math.cos(HALF_FOV - current_ray * DELTA_ANGLE)
        sprite_ray = current_ray + SPRITE_RAYS
        if 0 <= sprite_ray <= SPRITE_RAYS_RANGE and distance_to_sprite > 30:
            projected_height = min(int(WALL_HEIGHT / distance_to_sprite * self.scale), resY * 2)
            half_projected_height = projected_height // 2
            shift = half_projected_height * self.shift
            if not self.static:
                if theta < 0:
                    theta += DOUBLE_PI
                theta = 360 - int(math.degrees(theta))
                for angles in self.sprite_angles:
                    if theta in angles:
                        self.sprite_object = self.sprite_positions[angles]
                        break
            sprite = pygame.transform.scale(self.sprite_object, (projected_height, projected_height))
            return {'image': sprite,
                    'x': current_ray * SCALE - half_projected_height,
                    'y': HALF_HEIGHT - half_projected_height + shift,
                    'distance': distance_to_sprite}
        else:
            return None
Here is the source code with the Z-buffer and multi-angle sprites implemented.
Load Maps From File
To make changing the map and loading different maps easier, the map layout is now defined in a text file and loaded when needed.
The map.py file has been modified as per below:
from settings import *

game_map = []
with open('map/map01.txt') as f:
    for line in f:
        game_map.append(line.strip())

# map size
map_height = len(game_map) * GRID_BLOCK
map_width = len(game_map[0]) * GRID_BLOCK

world_map = {}
for j, row in enumerate(game_map):
    for i, char in enumerate(row):
        if char != '0':
            if char == '1':
                world_map[(i * GRID_BLOCK, j * GRID_BLOCK)] = '1'
            elif char == '2':
                world_map[(i * GRID_BLOCK, j * GRID_BLOCK)] = '2'
            elif char == '3':
                world_map[(i * GRID_BLOCK, j * GRID_BLOCK)] = '3'
The line.strip() call removes the trailing newline character from each line.
The text file where the map is defined is shown in the image below:
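If the image does not render for you, an illustrative map01.txt in the same format (layout hypothetical; '0' is an empty block, '1' to '3' are wall textures) might look like:

```text
111111111111
100000000001
103300002001
100000000001
111111111111
```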
Here is the source code where map loading from a file is implemented.
The next things I am going to be working on are adding moving sprites and adding interactive elements to the game world, including doors that open and close. Keep an eye out for future posts that will cover new features I have implemented.
I have started developing a raycasting game in Python (using PyGame) as a learning exercise and to get a better understanding of the math and techniques involved.
Raycasting is a graphic technique used to render pseudo-3D graphics based on a 2D game world. The best-known example of a raycasting engine used in a computer game is probably Wolfenstein 3D, developed by id Software in 1992.
So firstly, here are some resources I used to upskill and get my head around the topic:
YouTube tutorial series by Standalone Coder. These videos are in Russian, but the YouTube subtitles do a good enough job to follow along.
Lastly, I recommend the book Game Engine Black Book: Wolfenstein 3D by Fabien Sanglard, it is not an easy read, but it gives excellent insight into the development of Wolfenstein 3D and a great deal of information into the intricate details of Raycasting and texture mapping.
The Basics of Raycasting
The first thing to understand is that Raycasting is not true 3D, but rather rendering a 2D world in pseudo 3D. Therefore, all movement and game positions consist of only x and y positions, with no height or z positions.
The entire game world consists of a grid, with some blocks in the grid being populated with walls and others being empty. An example of this is shown in the picture below:
In the current version of the game, the world map is implemented as a list of strings, where each character in the string represents a block in the grid. The ‘0’ character represents an empty block, and all other numbers represent a wall. The numbers ‘1’, ‘2’, and ‘3’ are used to show different wall textures according to the different numbers, something covered later in this post.
This is then converted into a dictionary as follows:
world_map = {}
for j, row in enumerate(game_map):
    for i, char in enumerate(row):
        if char != '0':
            if char == '1':
                world_map[(i * GRID_BLOCK, j * GRID_BLOCK)] = '1'
            elif char == '2':
                world_map[(i * GRID_BLOCK, j * GRID_BLOCK)] = '2'
            elif char == '3':
                world_map[(i * GRID_BLOCK, j * GRID_BLOCK)] = '3'
The player is placed on this grid, with x and y coordinates determining the player's position. Along with the x and y coordinates, the player also has a viewing angle, i.e., a direction the player is facing.
Now that we have the foundation in place, we can get to the raycasting.
To understand this concept, imagine a line originating from the player and heading off in the direction the player is facing.
Now, this is not an endless line, but rather a line that keeps expanding from one world grid line to the next. (this is done with a for loop).
At every point where this ‘ray’ intersects a grid line on the game world, a check is done to determine if the grid line in question is a wall or not.
If it is a wall, the loop expanding the line is stopped, and the x and y coordinates where the wall was intersected will be noted. We will use this a bit later when drawing the pseudo-3D rendering of the world.
The above is the simplest form of raycasting. However, a single ray will not give us a usable amount of information to do the pseudo-3D render with. This is where a player’s FOV (field of view) and more rays come in.
The player FOV is an angle on the game world originating at the player and extending out in a triangular form; it determines where the player's visible range begins and ends. For this game, I will use a FOV of 60° (i.e., π/3).
To change the FOV, the following can be used as a guide:
Radians | Degrees
π / 6 | 30°
π / 4 | 45°
π / 3 | 60°
π / 2 | 90°
π | 180°
Within this FOV, several rays will be generated, exactly as per the single one in the example discussed earlier.
In this game, a value of 480 rays has been defined, which will be generated within the FOV, so the process above for a single ray will be repeated 480 times, with each ray cast having its angle increased by a marginal amount from the previous ray.
The angle of the first ray will be determined as follows:
Starting angle = Player Angle – Half the FOV
Where Player Angle is the direction the player is facing (the center of the FOV).
For each subsequent ray, the angle is increased by a delta angle calculated as follows:
Delta Angle = FOV / Number of Rays
This will allow for a sufficient set of information to draw a pseudo-3D rendering from.
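The two formulas above can be sketched as follows (the constants mirror those described in the text; the player angle is an arbitrary example):

```python
import math

FOV = math.pi / 3            # 60 degree field of view
NUM_RAYS = 480
DELTA_ANGLE = FOV / NUM_RAYS

player_angle = math.pi / 2   # example viewing direction

# The first ray starts half the FOV to one side of the viewing direction;
# each subsequent ray is rotated by the delta angle.
start_angle = player_angle - FOV / 2
ray_angles = [start_angle + ray * DELTA_ANGLE for ray in range(NUM_RAYS)]
```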
To see how this is implemented, please look at lines 6 to 39 in the raycasting.py file.
Sine and Cosine functions are used to determine the intersecting coordinates, and if you require a refresher on these functions, I recommend this web article from mathisfun.com.
For calculating the y coordinate where the ray intersects with a wall, the following formula is used:
y = (player y) + depth * sin(ray angle)
And to calculate the x coordinate where the ray intersects with a wall, the following formula is used:
x = (player x) + depth * cos(ray angle)
For the depth value in the above formulas, a sequence of numbers is looped through, starting at 0 and ending at a defined maximum depth.
The above formulas would then be executed at each new depth level to get the corresponding x and y coordinates.
This does provide the desired results, but it is not very optimized.
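For reference, the naive version just described can be sketched like this (the constants and the one-wall world are made up for the example):

```python
import math

GRID_BLOCK = 100              # assumed tile size
MAX_DEPTH = 800               # assumed maximum ray length
world_map = {(300, 0): '1'}   # hypothetical world with a single wall tile

def naive_cast(px, py, ray_angle):
    sin_a, cos_a = math.sin(ray_angle), math.cos(ray_angle)
    for depth in range(MAX_DEPTH):
        # The two formulas above, evaluated at every depth step.
        x = px + depth * cos_a
        y = py + depth * sin_a
        # Snap the point to its grid cell and check for a wall there.
        tile = (int(x) // GRID_BLOCK * GRID_BLOCK, int(y) // GRID_BLOCK * GRID_BLOCK)
        if tile in world_map:
            return depth, x, y
    return None
```

Note how every single depth step is tested, even though walls can only sit on grid cells; that is the waste the DDA algorithm below removes.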
To improve the performance of this operation, the Digital Differential Analyzer (DDA) algorithm will be used. At a high level, the DDA algorithm functions by not checking every pixel of the 2D game world for an intersection of a ray and a wall but only checking on the grid lines of the 2D world (the only place where walls can occur).
To implement the DDA algorithm, we are going to need four extra variables in conjunction with the Player x and y coordinates, namely:
dx and dy – these two variables will determine the step size to the next grid line. Based on the direction of the angle, these either have the value of 1 or -1.
gx and gy – This will be the x and y coordinates of the grid lines that will be iterated through, starting with the grid line the closest to the player x and y position. The initial value is determined using the following function, located in the common.py file:
This ensures that the returned x and y coordinates lie on the closest grid line (based on the game world tile size). For reference, the // operator in Python is floor division, which rounds the result down to the nearest whole number.
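The original listing of this function did not survive here; a sketch consistent with the description (snapping a coordinate pair to the grid via floor division, with the tile size assumed) would be:

```python
GRID_BLOCK = 100  # assumed tile size from settings.py

def align_grid(x, y):
    # Floor-divide each coordinate by the tile size, then scale back up,
    # snapping the point to the grid cell that contains it.
    return (int(x) // GRID_BLOCK) * GRID_BLOCK, (int(y) // GRID_BLOCK) * GRID_BLOCK
```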
To determine the depth to the next y-axis grid line, the following equation will be used:
Depth Y = (gx – player x) / cos (ray angle)
And to determine the depth of the next x-axis grid line, this equation is used:
Depth X = (gy – player y) / sin (ray angle)
The two code blocks below implement what was just described. The first block determines intersections with walls on the y axis of the world map:
# checks for walls on y axis
gx, dx = (xm + GRID_BLOCK, 1) if cos_a >= 0 else (xm, -1)
for count in range(0, MAX_DEPTH, GRID_BLOCK):
    depth_y = (gx - px) / cos_a
    y = py + depth_y * sin_a
    tile_y = align_grid(gx + dx, y)
    if tile_y in world_map:
        # Ray has intersection with wall
        texture_y = world_map[tile_y]
        ray_col_y = True
        break
    gx += dx * GRID_BLOCK
And the next block of code is to determine intersections with walls on the x axis of the world map:
# checks for walls on x axis
gy, dy = (ym + GRID_BLOCK, 1) if sin_a >= 0 else (ym, -1)
for count in range(0, MAX_DEPTH, GRID_BLOCK):
    depth_x = (gy - py) / sin_a
    x = px + depth_x * cos_a
    tile_x = align_grid(x, gy + dy)
    if tile_x in world_map:
        # Ray has intersection with wall
        texture_x = world_map[tile_x]
        ray_col_x = True
        break
    gy += dy * GRID_BLOCK
texture_x and texture_y are used to store the index of the texture to display on the wall. We will cover this later in this post.
Now that we have the raycasting portion covered, which is the most complex, we can focus on simply rendering the pseudo-3D graphics to the screen.
At a very high level, the pseudo-3D graphics are created by drawing a rectangle for every ray that has intersected a wall. The x position of the rectangle is based on the angle of the ray, and the y position is determined by the distance of the wall from the player. The width of the rectangle is equal to the distance between the rays (calculated as window resolution width / number of rays), and the height is user-defined.
This will create a very basic pseudo-3D effect, and it would be much nicer using textured walls.
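A sketch of that rectangle calculation (all constants here are assumptions for illustration; the real values live in settings.py):

```python
WIDTH, HEIGHT = 1920, 1080
NUM_RAYS = 480
SCALE = WIDTH // NUM_RAYS   # width of one wall strip in pixels
WALL_HEIGHT = 21000         # projection coefficient (assumed)

def wall_column(ray, distance):
    # Closer walls project taller rectangles, centered vertically.
    projected_height = int(WALL_HEIGHT / max(distance, 1e-4))
    x = ray * SCALE
    y = HEIGHT // 2 - projected_height // 2
    return (x, y, SCALE, projected_height)
```

Drawing one such rectangle per ray (e.g. via pygame.draw.rect) produces the untextured pseudo-3D view.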
To implement textured walls, the concept remains the same, but instead of just drawing rectangles, we copy a thin strip from a texture image and draw that to the screen instead.
In the code blocks above, there were two variables, texture_x and texture_y. Where a wall intersection occurred, these variables contain a value of '1', '2' or '3' taken from the world map, and these correspond to different textures that are loaded in a dictionary as follows:
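The dictionary listing is not shown here; based on the description, it would look something along these lines (the file paths are assumptions):

```python
textures = {
    '1': pygame.image.load('assets/images/walls/wall1.png').convert(),
    '2': pygame.image.load('assets/images/walls/wall2.png').convert(),
    '3': pygame.image.load('assets/images/walls/wall3.png').convert(),
    'S': pygame.image.load('assets/images/sky.png').convert(),  # skybox, used later
}
```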
The above operations of copying a section of a texture, resizing it, and drawing it to the screen are performed for every ray that intersects a wall.
The last thing to do, and by far the least complex, is to draw the skybox and the floor. The skybox is simply an image, loaded in the texture dictionary under the 'S' key, which is drawn to the screen. The skybox is drawn in three blocks:
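The three-block drawing exists so the (seamless) image can wrap around as the player turns. A pygame-free sketch of the offset arithmetic, where the exact angle-to-offset mapping is my assumption:

```python
import math

WIDTH = 1920  # skybox image resized to the window width, as described earlier

def sky_blit_positions(player_angle):
    # Horizontal offset proportional to the viewing angle, wrapped at the image width.
    offset = -int(math.degrees(player_angle) * WIDTH / 360) % WIDTH
    # Three copies of the image guarantee the window is covered at any wrap position.
    return [offset - WIDTH, offset, offset + WIDTH]
```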
For reference, the following PyGame functions are used in the game up to this point:
pygame.init - Used to initialize pygame modules and get them ready to use.
pygame.display.set_mode - Used to initialize a window to display the game.
pygame.image.load - Used to load an image file from the supplied path into a variable to be used when needed.
pygame.Surface.subsurface - Used to get a copy of a section of an image (surface) based on the supplied x position, y position, width, and height values.
pygame.transform.scale - Used to resize an image (surface) to the supplied width and height.
pygame.Surface.blit - Used to draw images to the screen.
pygame.display.flip - Used to update the full display Surface to the screen.
pygame.Surface.fill - Used to fill the display surface with a background color.
pygame.draw.rect - Used to draw a rectangle to the screen (used for the floor).
The pygame.key.get_pressed, pygame.event.get, and pygame.mouse methods are also used for user input.
Collision Detection
Because the game plays out in a 2D world, collision detection is rather straightforward.
The player has a square hitbox, and every time the player inputs a movement, the check_collision function is called with the new x and y positions the player wants to move to. The function then uses the new x and y positions to determine the player hitbox and check if it is in contact with any walls; if so, the move is not allowed. Otherwise, the player x and y positions are updated to the new positions.
Here is the check_collision function that forms part of the Player class:
def check_collision(self, new_x, new_y):
    player_location = mapping(new_x, new_y)
    if player_location in world_map:
        # collision
        print("Center Collision " + str(new_x) + " " + str(new_y))
        return
    player_location = mapping(new_x - HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
    if player_location in world_map:
        # collision
        print("Top Left Corner Collision " + str(new_x) + " " + str(new_y))
        return
    player_location = mapping(new_x + HALF_PLAYER_MARGIN, new_y - HALF_PLAYER_MARGIN)
    if player_location in world_map:
        # collision
        print("Top Right Corner Collision " + str(new_x) + " " + str(new_y))
        return
    player_location = mapping(new_x - HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
    if player_location in world_map:
        # collision
        print("Bottom Left Corner Collision " + str(new_x) + " " + str(new_y))
        return
    player_location = mapping(new_x + HALF_PLAYER_MARGIN, new_y + HALF_PLAYER_MARGIN)
    if player_location in world_map:
        # collision
        print("Bottom Right Corner Collision " + str(new_x) + " " + str(new_y))
        return
    self.x = new_x
    self.y = new_y
Here is a video of the current version of the game in action:
The current version of this game is still a work in progress, but if you are interested, the source code can be downloaded here and the executable here.
Some of the next things on the to-do list are loading levels from a file, adding sprites to the game world, and adding some interactive world items, such as doors that open and close.
I will keep creating posts on this topic as I progress with this project.
In this post, I will cover some projects I have worked on over the last few months and some projects I have planned for the future.
Bipedal Robot
I am currently busy building a bipedal robot based on this Instructables post by K.Biagini. I used his design as a foundation and added additional components and functionality (such as arms and a Piezo for sound).
I had to modify his 3D models to achieve what I wanted. Here are links to download my modified 3D models:
Body Extension (to fit in the extra components) - Link
Modified Head - Link
Arms - Link
Here is a list of all the electronic components used:
1x Arduino Nano
6x micro servos
2x push buttons
1x mini toggle switch
1x 9V battery
1x ultrasonic sensor (HC-SR04)
1x RGB LED
1x Piezo
These components are connected as follows:
Pinout configuration of Arduino Nano:
Pin Number - Connected Hardware
2 - Ultrasonic Sensor Echo Pin
3 - RGB LED Red Pin
4 - Push Button 1
5 - RGB LED Green Pin
6 - RGB LED Blue Pin
7 - Push Button 2
8 - Servo Signal Pin (Right Hip)
9 - Servo Signal Pin (Right Ankle)
10 - Servo Signal Pin (Left Hip)
11 - Piezo
12 - Servo Signal Pin (Left Ankle)
13 - Ultrasonic Sensor Trigger Pin
14 (A0) - Servo Signal Pin (Left Arm)
15 (A1) - Servo Signal Pin (Right Arm)
This project is still in progress, especially from a coding perspective on the Arduino, but once I have completed it, I will create a post containing the complete source code.
Rotary Control
I needed a rotary control for another project discussed below, so I decided to build one as per this post on the Prusa Printers blog. It is based on an Arduino Pro Micro and uses a rotary encoder module.
I modified the code available on the Prusa blog to mimic keyboard WASD inputs. Turning the dial left and right will input A and D, respectively. Pressing in the dial control push button will switch to up and down inputs, thus turning the dial left and right will input W and S. Here is the modified code (Based on Prusa Printers blog post code):
#include <ClickEncoder.h>
#include <TimerOne.h>
#include <HID-Project.h>

#define ENCODER_CLK A0
#define ENCODER_DT A1
#define ENCODER_SW A2

ClickEncoder *encoder;  // variable representing the rotary encoder
int16_t last, value;    // variables for current and last rotation value
bool upDown = false;

void timerIsr() {
  encoder->service();
}

void setup() {
  Serial.begin(9600);  // Opens the serial connection
  Keyboard.begin();
  encoder = new ClickEncoder(ENCODER_DT, ENCODER_CLK, ENCODER_SW);
  Timer1.initialize(1000);  // Initializes the timer
  Timer1.attachInterrupt(timerIsr);
  last = -1;
}

void loop() {
  value += encoder->getValue();
  if (value != last) {
    if (upDown) {
      if (last < value)  // Detecting the direction of rotation
        Keyboard.write('s');
      else
        Keyboard.write('w');
    } else {
      if (last < value)  // Detecting the direction of rotation
        Keyboard.write('d');
      else
        Keyboard.write('a');
    }
    last = value;
    Serial.print("Encoder Value: ");
    Serial.println(value);
  }

  // This next part handles the rotary encoder BUTTON
  ClickEncoder::Button b = encoder->getButton();
  if (b != ClickEncoder::Open) {
    switch (b) {
      case ClickEncoder::Clicked:
        upDown = !upDown;
        break;
      case ClickEncoder::DoubleClicked:
        break;
    }
  }
  delay(10);
}
I use the rotary control with a Raspberry Pi to control a camera pan-tilt mechanism. Here is a video showing it in action:
I will cover the purpose of the camera as well as the configuration and coding related to the pan-tilt mechanism later in this post.
Raspberry Pi Projects
Raspberry Pi and TensorFlow lite
TensorFlow is a deep learning library developed by Google that allows for the easy creation and implementation of machine learning models. There are many articles available online on creating such models, so I will not focus on that here.
At a high level, I created a basic object identification model on my Windows PC and then converted it to a TensorFlow Lite model that can run on a Raspberry Pi 4. When the TensorFlow Lite model runs on the Raspberry Pi, a video feed from the attached Raspberry Pi camera is shown, with green boxes around the items the model has identified, a text label of what the model believes each object is, and a numerical percentage indicating the model's confidence in the identification.
I have attached a 3-inch LCD screen (in a 3D-printed housing) to the Raspberry Pi to show the video feed and object identification in real time.
The Raspberry Pi camera is mounted on a pan-tilt bracket driven by two micro servos, and the pan-tilt mechanism is controlled via the rotary dial discussed earlier. I initially connected the servos straight to the Raspberry Pi GPIO pins; however, this resulted in servo jitter. After numerous modifications and attempted fixes, I was still not happy with the results, so I decided to drive the servos from an Arduino Uno R3 connected to the Raspberry Pi 4 via USB instead. I have always found hardware interfacing significantly easier with Arduino, and the results more consistent.
Here is a diagram of how the servos are connected to the Arduino Uno R3:
Below is the Arduino source code I wrote to control the servos. Instructions are sent to the Arduino through serial communication via USB, and the servos are adjusted accordingly.
On the Raspberry Pi, the following Python script is used to transfer the rotary control input via serial communication to the Arduino:
# Import libraries
import serial
import time
import pygame

pygame.init()
screen = pygame.display.set_mode((1, 1))

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
    time.sleep(0.1)
    if arduino.isOpen():
        done = False
        while not done:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    done = True
                elif event.type == pygame.KEYDOWN:
                    if event.key == pygame.K_s:
                        arduino.write('s'.encode())
                    if event.key == pygame.K_w:
                        arduino.write('w'.encode())
                    if event.key == pygame.K_a:
                        arduino.write('a'.encode())
                    if event.key == pygame.K_d:
                        arduino.write('d'.encode())
            time.sleep(0.5)
print("Goodbye")
The next thing I want to implement on this project is face tracking using TensorFlow lite with automated camera movement.
It is possible to run Quake 1 on the Raspberry Pi Zero by following the instructions in this GitHub repository, and it runs great.
Raspberry Pi Mini Server Rack
I have 3D printed a mini server rack and configured a four-node Raspberry Pi cluster consisting of three Raspberry Pi 3s and one Raspberry Pi 2. They are all networked via a basic five-port switch.
I am currently busy with a few different projects using the Pi cluster and will have some posts in the future going into some more details on these projects.
I developed a little Python application to monitor my different Raspberry Pis and show which ones are online (shown in green) and offline (shown in red).
The application pings each endpoint every 5 seconds, and it is also possible to click on an individual endpoint to ping it immediately. The list of endpoints is read from a CSV file, and it is easy to add additional endpoints. The UI is automatically updated on program startup with the endpoints listed in the CSV file.
Here is the Python source code of the application:
import PySimpleGUI as sg
import csv
import os
from apscheduler.schedulers.background import BackgroundScheduler


def ping(address):
    # -n is the Windows ping flag for packet count; use -c on Linux.
    response = os.system("ping -n 1 " + address)
    return response


def update_element(server):
    global window
    response = ping(server.address)
    if response == 0:
        server.status = 1
        window.Element(server.name).Update(button_color=('white', 'green'))
    else:
        server.status = 0
        window.Element(server.name).Update(button_color=('white', 'red'))
    window.refresh()


def update_window():
    global serverlist
    for server in serverlist:
        update_element(server)


class server:
    def __init__(self, name, address, status):
        self.name = name
        self.address = address
        self.status = status


serverlist = []
with open('servers.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    line_count = 0
    for row in csv_reader:
        if line_count == 0:
            line_count += 1  # skip the header row
        else:
            serverlist.append(server(row[0], row[1], 0))
            line_count += 1

layout = [
    [sg.Text("Server List:")],
]
for server in serverlist:
    layout.append([sg.Button('%s' % server.name,
                             button_color=('white', 'orange'),
                             key='%s' % server.name)])

window = sg.Window(title="KillerRobotics Server Monitor",
                   layout=layout, margins=(100, 30))
window.finalize()

scheduler = BackgroundScheduler()
scheduler.start()
scheduler.add_job(update_window, 'interval', seconds=5, id='server_check_job')

while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        scheduler.remove_all_jobs()
        scheduler.shutdown()
        window.close()
        break
    elif event in [server.name for server in serverlist]:
        scheduler.pause()
        update_element([server for server in
                        serverlist if server.name == event][0])
        scheduler.resume()
Raspberry Pi Pico
I ordered a few Raspberry Pi Picos on its release, and thus far, I am very impressed with this small and inexpensive microcontroller.
The Raspberry Pi Pico sells for $4 (USD) and has the following specifications:
RP2040 microcontroller chip designed by Raspberry Pi
Dual-core Arm Cortex-M0+ processor, flexible clock running up to 133 MHz
264KB on-chip SRAM
2MB on-board QSPI Flash
26 multifunction GPIO pins, including 3 analogue inputs
2 × UART, 2 × SPI controllers, 2 × I2C controllers, 16 × PWM channels
1 × USB 1.1 controller and PHY, with host and device support
8 × Programmable I/O (PIO) state machines for custom peripheral support
Low-power sleep and dormant modes
Accurate on-chip clock
Temperature sensor
Accelerated integer and floating-point libraries on-chip
It is a versatile little microcontroller that nicely fills the gap between Arduino and similar microcontrollers and the more traditional Raspberry Pis or similar single-board computers. I have only scratched the surface of the Pico with some really basic projects, but I have quite a few ideas for using it on more interesting projects in the future.
3D Printing
I ran into some problems with my 3D printer (Wanhao i3 Mini) over the last few months. The first problem was that half of the printer's LCD display died, which was an annoyance, but the printer was still usable. The next issue, which was significantly more severe, was that the printer was unable to heat up the hot end.
My first course of action was to replace both the heating cartridge and the thermistor to ensure that neither of those components were to blame, and unfortunately, they were not. After some diagnostics with a multimeter on the printer’s motherboard, I determined that no power was passing through to the heating cartridge connectors on the motherboard.
I ordered a replacement motherboard and installed it, and the 3D printer is working as good as new again. When I have some more time, I will try and diagnose the exact problem on the old motherboard and repair it. Here are photos of the old motherboard I removed from the printer:
Below are some photos of a few things I have 3D printed the last few months:
As I mentioned in my Surviving Lockdown post, I started upskilling on Python, and when learning a new programming language, I usually do a project to build on and reinforce what I am learning.
For my Python-based project, I decided to use PyGame to develop a small game. One piece of advice I can offer when developing a game is that it is better to develop a small and basic game that you finish than a large and ambitious game you never complete. I believe everyone who has tried some form of game development has at least one over-ambitious project they never completed, so it is better to start small.
The game I developed is called “Space Octopus Invasion” and here is a video of the game in action:
The tools and resources I used in the development process are as follows:
Trello
I used Trello for task tracking and planning.
PyCharm
PyCharm is my favorite Python IDE developed by JetBrains, and it offers a free community edition.
PyInstaller
A great utility to package a python application into an executable file.
InstallForge
A free installer maker that allows you to create a professional-looking setup wizard to install your game.
GameDevMarket.net
I am not an artistically inclined person, and I typically use ready-made art, sound, and music assets when developing a game. I recommend GameDevMarket.net, as they have a great selection of assets available.
The Installer for the game can be downloaded here: Installer.
And the source code can be downloaded here: Source Code.
Here is a post I wrote for my company's blog, originally posted here.
DevOps has become a hot topic in organisations over the past year or so. However, there seems to be a lot of confusion regarding what DevOps actually entails. So, what is DevOps?
If you asked a more sales-inclined individual, you may get a response along the lines of: “DevOps digitally transforms an organisation’s development department by bridging the gap between development and operations, resulting in higher quality solutions, fewer bugs, quicker delivery times, shorter recovery times, and controlling scope creep.”
This sounds amazing! However, it does not answer the question as to what DevOps really is. So, I will be taking a different approach to delve into what DevOps entails.
DevOps is based on the principle of continuous improvement in the software development lifecycle. It consists of principles, practices and tools that allow an organisation's development department to deliver projects at high velocity, while maintaining quality and continuously improving the delivery process. This is where Azure DevOps comes in: Azure DevOps is a selection of tools that facilitate the implementation of DevOps within an organisation.
DevOps consists of five main pillars (which are supported by processes, practices and tools), namely:
1. Plan and Track
This involves planning what development work needs to be completed and tracking progress against that. The tool Azure DevOps offers here is Azure Boards.
2. Develop
This is where your software developers write code and store that code. In the Azure DevOps ecosystem, the tools used here are Visual Studio, Visual Studio Code, and Azure Repos as a source code repository.
3. Build and Test
Automated builds and testing are a very important part of DevOps, as this automation frees up valuable resource time to focus on more important tasks. Automated builds can be set up to trigger new builds (compiling source code into executable programs) based on certain criteria (for example, once a day), and automated tests can then be run to verify that everything is working as expected without the intervention of a person. Azure Pipelines and Azure Test Plans are the tools utilized here.
4. Deploy
The next step is automated deployment, first to a UAT/Test environment and eventually to production. Deploying in this manner prevents unwanted changes from being accidentally deployed from a developer's machine and introduces additional controls so that only what is intended gets deployed, limiting the introduction of problems. Automating deployments also drastically reduces deployment times and thus system downtime. Azure Release Management is the Azure DevOps tool used to automate deployments.
5. Monitor and Operate
After a system has been deployed, it needs to be monitored, and operational activities need to be performed to ensure it is up and running optimally. Azure Monitor and Application Insights are the tools available in the Azure DevOps tool belt for this.
With the tools provided by Microsoft Azure DevOps, as well as industry tried and tested principles, the above five pillars can dramatically improve the operations and output of a development department while driving down operational costs.
Now that we understand what DevOps is and how it works, what outcomes can we expect from mastering the 5 pillars?
In this example, a list of Book objects is filtered to return only the objects where the price is less than 10, and the newly created list of Book objects is sorted alphabetically by the Title field.
In this example, the cheapBooks list is projected to return only the titles of the books therein, and these titles are inserted into a new list of strings.
LINQ query operators tend to be slightly more verbose, and the above example can be implemented with query operators as follows:
List<string> cheapBooksTitles = (from b in books
                                 where b.Price < 10
                                 orderby b.Title
                                 select b.Title).ToList();
Some common extension methods are:
Single
var book = books.Single(b => b.Title == "Building Robots");
Returns the single object that matches the defined criteria. Note, however, that if zero or more than one book matches the specified criteria, an exception is thrown.
SingleOrDefault
var book = books.SingleOrDefault(b => b.Title == "Building Robots");
Returns the single object that matches the defined criteria. If more than one book matches, an exception is thrown; if no books match, the default value for the type (null for a reference type such as Book) is returned.
First
Returns the first object that matches the criteria; if no match is found, an exception is thrown.
FirstOrDefault
Returns the first object that matches the criteria; if no match is found, the default value is returned.
Last
Returns the last object that matches the criteria; if no match is found, an exception is thrown.
LastOrDefault
Returns the last object that matches the criteria; if no match is found, the default value is returned.
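The four methods above can be illustrated together in one short sketch (Book class and sample data are assumptions, as before):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Book
{
    public string Title { get; set; }
    public decimal Price { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var books = new List<Book>
        {
            new Book { Title = "Zen of LINQ", Price = 8m },
            new Book { Title = "Building Robots", Price = 25m },
            new Book { Title = "Azure Basics", Price = 6m }
        };

        // First match in list order; throws if nothing matches.
        var firstCheap = books.First(b => b.Price < 10);

        // Last match in list order; throws if nothing matches.
        var lastCheap = books.Last(b => b.Price < 10);

        // No book costs more than 100, so these return null instead of throwing.
        var firstPricey = books.FirstOrDefault(b => b.Price > 100);
        var lastPricey = books.LastOrDefault(b => b.Price > 100);

        Console.WriteLine(firstCheap.Title);
        Console.WriteLine(lastCheap.Title);
        Console.WriteLine(firstPricey == null && lastPricey == null);
    }
}
```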
Max
var maxValue = books.Max(b=>b.Price);
Max is used with numeric values and returns the highest value contained in the Price field across the Book objects.
Min
var minValue = books.Min(b=>b.Price);
Min is another operator used with numeric values and returns the smallest value contained in the Price field across the Book objects.
Sum
var totalPrice = books.Sum(b=>b.Price);
Sum is used to add up all the values in a numeric field.
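The three aggregate operators can be verified together in a minimal runnable sketch (Book class and sample prices are, again, assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Book
{
    public string Title { get; set; }
    public decimal Price { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var books = new List<Book>
        {
            new Book { Title = "Zen of LINQ", Price = 8m },
            new Book { Title = "Building Robots", Price = 25m },
            new Book { Title = "Azure Basics", Price = 6m }
        };

        var maxValue = books.Max(b => b.Price);   // highest Price in the list
        var minValue = books.Min(b => b.Price);   // lowest Price in the list
        var totalPrice = books.Sum(b => b.Price); // sum of all Price values

        Console.WriteLine($"{maxValue} {minValue} {totalPrice}");
    }
}
```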
There are many other methods available to filter and manipulate data in LINQ, and the possibilities for its utilisation are nearly endless. For example:
var bookSelection = books.Skip(2).Take(3);
The above example will skip the first two books in the books list and take the next three, placing them into the newly created bookSelection sequence.
The best way to gain better insight into what is possible with LINQ is to give it a try.