• Dynamically Macro A Class To A Laravel Macroable Class

    Laravel Macros

    I was recently working on a project where I wanted to make a class's methods available inside the built-in Laravel Str:: helper. Laravel macros are the answer to this problem. However, I wanted adding additional methods to be an easy experience, and writing and maintaining a separate macro with argument definitions for each function was clunky.

    get_class_methods

    To solve this issue we can take advantage of a built-in PHP function, get_class_methods. This function returns an array of all the methods defined in a class. Perfect! Except macros also need to receive whatever arguments are passed to them.

    func_get_args

    Enter func_get_args. This built-in PHP function returns an array of the arguments passed to the current function. This is the final piece needed to dynamically macro our class.

    Final Code

    The final code looks like this:

    foreach (get_class_methods(AdvStr::class) as $methodName) {
        Str::macro($methodName, function () use ($methodName) {
            // Capture whatever arguments the macro was called with.
            $args = func_get_args();

            return (new AdvStr())->$methodName(...$args);
        });
    }

    After registering this in a service provider, you can call your custom methods directly:

    Str::myCustomFunction(myArgument: 'example');
  • Certbot HTTP Verification Debugging Details

    Ran across a fun issue with Let’s Encrypt and certbot today.

    The issue at hand seemed straightforward at first: we were attempting to create a new SSL certificate via Let’s Encrypt but kept hitting a wall with the HTTP validation process. For those unfamiliar, Let’s Encrypt uses a simple challenge-response mechanism to verify domain ownership. Essentially, it sends a set of HTTP requests to the domain and expects specific responses to confirm control over the domain.

    After much head-scratching and log-digging, I observed three successful HTTP requests from Let’s Encrypt’s API hitting the server. Yet the validation process refused to complete, failing with the error “Secondary Validation Failed”. The puzzle was why, when the logs seemed to indicate success.

    The “Secondary validation” portion of the error did make me think that there must be a “primary validation” that was working. After much googling and forum-post reading, I finally discovered that Let’s Encrypt actually makes not three but four HTTP requests during its validation process.

    The discrepancy led me down a rabbit hole that ended with a realization about the customer’s server configuration: geoblocking. In an attempt to bolster security, the server was configured to block non-US IP addresses. As it turns out, one of Let’s Encrypt’s validation requests originated from outside the US, so it was blocked, causing the entire validation process to fail.

    This incident underscores a crucial lesson that often goes overlooked in the documentation and troubleshooting guides: the devil is in the details. It took a deep dive into forum discussions and a considerable amount of googling to piece together the puzzle. Documenting the exact HTTP process that Let’s Encrypt goes through for verification would have been helpful.

    Edit: on further digging I found more details on the process: https://letsencrypt.org/2020/02/19/multi-perspective-validation.html

  • Quick Tip: Locate Tags in Git with a Specific Commit

    Need to find which tags in your project contain a specific commit ID? I recently needed to do this to find out when a specific bug fix (commit ID: abc123) was released in a project.

    Simply enter:

    git tag --contains abc123

    Git will then list all the tags that include the bug fix, helping you identify which releases contain it. This method is a time-saver for finding when issues were fixed in a release cycle.
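    The workflow can be sketched end-to-end in a throwaway repository (the tag names and commits below are hypothetical stand-ins; in a real project you would pass your actual commit ID, such as abc123):

```shell
# Throwaway repo; all tag names and commit messages are made up.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=demo@example.com -c user.name=Demo commit -q --allow-empty -m "initial"
git tag v0.9                              # released before the fix
git -c user.email=demo@example.com -c user.name=Demo commit -q --allow-empty -m "fix the bug"
fix=$(git rev-parse HEAD)                 # stands in for abc123
git tag v1.0                              # first release containing the fix
git -c user.email=demo@example.com -c user.name=Demo commit -q --allow-empty -m "more work"
git tag v1.1                              # later release, also contains the fix
tags=$(git tag --contains "$fix")
echo "$tags"                              # lists v1.0 and v1.1, but not v0.9
```

    Any tag whose commit is the fix itself, or a descendant of it, shows up in the list, which is exactly the "which release shipped this?" question.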

  • Quick Tutorial: How to Manually Test an AWS CloudWatch Alarm Without Triggering It

    At times, you may have set up a CloudWatch alarm whose conditions you don’t want to actually trigger, but you still need to ensure that the actions tied to it function correctly. This tutorial will guide you through a simple command to test your CloudWatch alarm manually without the underlying metric ever crossing its threshold.


    Step-by-Step:

    1. Access the CloudWatch Dashboard: Begin by navigating to your CloudWatch dashboard. For this example, let’s consider you have an S3 size alarm that triggers when a specific bucket reaches a large size. Instead of populating the bucket with a massive amount of files to trigger the alarm, there’s a more straightforward way to test this alarm.
    2. Use the AWS Command Line Tool: Head over to your AWS command line tool.
    3. Enter the Set Alarm State Command: The AWS command line tool has a set-alarm-state command. This command requires three parameters:
      • An alarm name

      • A state value (set this to “alarm” to trigger the alarm without any actual event)

      • A state reason
    4. The command will look something like this:
    aws cloudwatch set-alarm-state --alarm-name "YOUR_ALARM_NAME" --state-value ALARM --state-reason "Testing the alarm manually"
    5. Observe the Alarm State: After running the command, you’ll notice that your CloudWatch alarm enters the “ALARM” state. This allows you to observe any actions or other functionalities you’ve linked to this alarm.
    6. Return to the OK State: Don’t worry about the alarm staying in the “ALARM” state. It will revert to the “OK” state after the next scheduled check of that CloudWatch alarm.
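    To double-check the result of step 5 from the same terminal, the CLI can read the alarm state back. This is a sketch that assumes configured AWS credentials; substitute your real alarm name for the placeholder:

```shell
# Read back the current state of the (hypothetical) alarm;
# immediately after set-alarm-state this should print ALARM.
aws cloudwatch describe-alarms \
  --alarm-names "YOUR_ALARM_NAME" \
  --query 'MetricAlarms[0].StateValue' \
  --output text
```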

  • When “rm *” doesn’t work

    When “rm *” doesn’t work

    We’ve all deleted files in a folder using rm *. But have you ever had it not work? You might have come across an error like this:

    sudo rm *
    -bash: /usr/bin/sudo: Argument list too long

    What’s happening here? Well, this is a consequence of the Linux kernel’s limit on the total size of the arguments that can be passed to a command via the exec system call. In my particular environment this limit was 2097152 bytes (2 MB), as shown by this command:

    getconf ARG_MAX

    The Solution

    Thankfully a handy little tool called xargs can help us here. It reads items from standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the given command using those items as arguments.

    ls | xargs sudo rm

    In this way rm is executed multiple times, avoiding the exec argument limit. You might assume that the command will be run once per file, but xargs is smarter than that; running once per file would be poor for performance in most cases. Instead, the command line is built up until it reaches a system-defined limit (unless the -n or -L options are used), and the specified command is invoked as many times as necessary to use up the list of input items. You can see these limits using the --show-limits option.

    xargs --show-limits
    Your environment variables take up 2190 bytes
    POSIX upper limit on argument length (this system): 2092914
    POSIX smallest allowable upper limit on argument length (all systems): 4096
    Maximum length of command we could actually use: 2090724
    Size of command buffer we are actually using: 131072
    Maximum parallelism (--max-procs must be no greater): 2147483647
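    The batching behavior is easy to see in a scratch directory. This sketch uses made-up filenames and caps each batch at 4 arguments with -n just to make the batching visible; without -n, xargs packs as many names into each rm invocation as the argument-length limit allows:

```shell
# Scratch directory with made-up filenames.
tmp=$(mktemp -d)
cd "$tmp"
for i in $(seq 1 10); do touch "file_$i"; done
before=$(ls | wc -l)

# -n 4 caps each rm invocation at 4 arguments, so rm runs three times here.
ls | xargs -n 4 rm

after=$(ls | wc -l)
echo "$before -> $after"
```

    One caveat worth knowing: piping ls into xargs breaks on filenames containing spaces or newlines. When that is a risk, find . -maxdepth 1 -type f -print0 | xargs -0 sudo rm passes NUL-delimited names safely.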

  • Initial Thoughts On Arc Browser

    I’m coming from Firefox as my default browser.

    What I like:

    • Spaces work well and segment extensions along with session info. This is great if you run separate personal and work password managers.
    • Split view is sweet and a feature that other browsers need to implement.

    What I don’t like:

    • The sidebar tabs. They feel wrong, and I feel like I’m looking off to the side on a large monitor to navigate tabs.
    • Full URLs are hidden in the tiny URL bar. URLs are important. Any browser that tries to hide them in any way will have a hard time winning me over.
  • Optimizing Permission Rewriting in Ansible

    I was recently working on speeding up an Ansible playbook and found this task that was running slowly.

    - name: Web Root File Permissions
      command: find /var/www/somedir -type f -exec chmod ug=rw,o=r {} \;
      args:
        chdir: /var/www/somedir
    

    This task runs the find command for all files in a specific directory and executes the chmod command on each of them. The issue is that with a directory of thousands of files, we end up executing thousands of chmod commands, which is a lengthy process. Thankfully I found a better way at https://bash-prompt.net/guides/bash-find-exec-speedup/

    - name: Web Root File Permissions
      command: find /var/www/somedir -type f -exec chmod ug=rw,o=r {} +
      args:
        chdir: /var/www/somedir
    

    Using “+” instead of “\;” as the -exec terminator concatenates all the found files onto a single chmod command (or as few invocations as the argument-length limit allows). This is much faster.
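    The difference is easy to verify in a scratch directory (the path and filenames below are made up). After the batched chmod, every file should carry mode 664 (rw-rw-r--):

```shell
# Scratch web root with made-up filenames.
tmp=$(mktemp -d)
mkdir "$tmp/webroot"
for i in $(seq 1 5); do touch "$tmp/webroot/page_$i.html"; done

# "+" appends as many paths as fit onto each chmod invocation,
# instead of forking one chmod per file as "\;" does.
find "$tmp/webroot" -type f -exec chmod ug=rw,o=r {} +

# Count files that now have exactly mode 664.
matching=$(find "$tmp/webroot" -type f -perm 664 | wc -l)
echo "$matching"
```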

  • A Dark Sky Replacement – MerrySky

    With Apple killing off Dark Sky, I was looking for a replacement and happened across MerrySky. It seems like a pretty darn good replacement for Dark Sky, with an almost clone-like interface. The app is powered by the Pirate Weather API, so the data source is a bit different from Dark Sky’s homegrown weather prediction model. There isn’t a mobile app, but it does render very nicely in a web browser. If you want to check it out, just head over to https://merrysky.net/.

  • Adding a Hotkey Pulldown Terminal On My Mac

    I’ve had issues effectively managing a large number of terminal windows on my Mac in the past and I’ve finally arrived at an effective solution. The solution does require switching to iTerm2.

    Once I had iTerm2 set up, here’s how I configured things for myself. Under Preferences > Keys > Hotkey I created a new Hotkey Window.

    The most important item in the setup of the window for it to behave as a pulldown terminal is to set the Window > Style in the profile to “Full Width Top of Screen”. I’ve included the json for my profile as I currently have it here. https://mattstenson.com/wp-content/uploads/2023/01/Hotkey-Window.json_.zip

    I set up the hotkey to be a double tap of the Option key on my Mac. I like the double tap of a single key instead of a two-key shortcut. I experimented with a double tap of the Command key, but I found that rapid Command-key combinations, such as copy and paste, would also trigger the terminal. The Option key is comparatively unused.

    The end result is incredibly pleasing, fast switching to the terminal.

    I experimented with other settings such as transparency and different color schemes but arrived at a simple solid black style.

  • Jack’s kinda right?

    Jack Dorsey just wrote up a little piece on where he sees the future of content on the web going. I saw a few hot takes that seemed like they might be taking things out of context, which I think they kind of were. So, go read it for yourself.

    This is the part that stuck out to me.

    But instead of a company or government building and controlling these solely, people should be able to build and choose from algorithms that best match their criteria, or not have to use any at all. A “follow” action should always deliver every bit of content from the corresponding account, and the algorithms should be able to comb through everything else through a relevance lens that an individual determines.

    Jack

    What some seem to have missed is Jack making an argument basically for the fediverse.

    Take my Mastodon account as an example. The first “algorithm” I use is the one provided by https://mstdn.social/@stux, the admin of the instance I joined. I get a set of content moderation rules and a server block list that blocks a bunch of instances that have been shown not to line up with mstdn.social’s rules. This list is transparent and I can review it.

    The next level down, I have my own block lists as a user. I’ve blocked one server and, I believe, one user so far. Then, because of how Mastodon works, I get a chronological feed of all the posts from all the individuals I follow. (Chronological feeds are algorithmic, by the way; it’s just an algorithm we all understand.)

    This isn’t some kind of endorsement of everything Jack is saying. But I do think he gets it right that we don’t want our online communities curated by black boxes at large corporations with misaligned incentives. Instead, making the internet “smaller” and more diverse is better in the long run.