60 comments

  • Sytten · 2 days ago

    And again, the impact would not be so bad if GitHub finally pushed their immutable actions [1]. I sound like a broken record since I keep repeating that this would close 70%+ of the scope of attacks on GHA today. You would think that the weekly disasters they have would finally make them launch it.

    [1] https://github.com/features/preview/immutable-actions

    thund · 2 days ago

    They probably have good reasons if it's still in preview: serious bugs, security gaps, potential breaking changes that would cause more harm than good if rushed, etc.

    intelVISA · 2 days ago

    Too much stakeholder alignment?

    tanepiper · 1 day ago

    More like last year they laid off a whole bunch of people. We've been waiting for several open tickets on GitHub to be picked up; some were, but now seem abandoned, and others were just ignored.

    1oooqooq · 1 day ago

    The only reason any company does or doesn't do anything: it's not required for sales.

    In 2019 I saw a Fortune 500 tech company put in place their own internal vulnerability-scanner application, which included this feature for our enterprise GitHub repos. The tool was built and deployed to an old Linux Docker image that was never updated, so it was itself a target of the very attack it was preventing... they never vetted the random version they started with either. I guess one can still use a zip bomb, or even the xz backdoor for extra irony points, when attacking that system.

    Anyway, the people signing the GitHub checks also get promoted by pretending to implement that feature internally.

  • nyrikki · 2 days ago

    No mention of why this temp token had rights to do things like create new deployments and generate artifact attestations?

    For their fix, they disabled debug logs... but didn't answer whether they changed the temp token's permissions to something more appropriate for a code analysis engine.

    declan_roberts · 2 days ago

    I think we all know this old story. The engineer building it was getting permission denied, so they gave it all the permissions and never came back to right-size them.

    setr · 2 days ago

    Does any RBAC system actually tell you the missing permissions required to access the object in question? It’s like they’re designed to create this behavior

    Normal_gaussian · 2 days ago

    Yes. Most auth systems do, to the developer: GCP and AWS IAM give particularly detailed errors, and nearly every feature/permission system I have implemented did. However, it wouldn't be unusual for the full error to be wrapped or swallowed by some lazy error handling. It's a bit of a PITA, but well worth it to translate it into a safe and informative user-facing error.

    As a nit: RBAC is applied on top of an object-based permissions system rather than being one. Put simply, RBAC is a simplification of permission management in whatever auth system lies underneath.

    8note · 2 days ago

    I've never seen AWS give a useful error where I could tell which resources need a handshake of permissions, which one of the two needs the permission granted, or which permission needs to be granted.

    donavanm · 2 days ago

    This is intentional. You, the caller, get a generic HTTP 400 “resource does not exist or are not authorized” response and message. Providing additional information about resource existence or permissions opens up an entire category of information disclosure, resource discovery, attribute enumeration, and policy enumeration problems.

    The IAM admin persona is the one who gets a bunch of additional information. That's accessible through the AWS IAM policy builder, access logs, etc.

    And no, it's not feasible to determine if the initial caller is an appropriate IAM admin persona and vary the initial response.
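    The two-persona split described above can be sketched in a few lines: a deliberately vague caller-facing denial, plus a detailed admin-facing audit record. This is an illustrative toy, not AWS's actual implementation; all names and messages are hypothetical.

```python
# Illustrative sketch of the two-persona denial pattern: the caller gets a
# generic error that leaks neither existence nor policy, while the exact
# missing permission lands in a log only an admin persona can read.

GRANTS = {"alice": {"bucket:read"}}   # principal -> granted permissions
AUDIT_LOG = []                        # admin-only visibility

def check(principal: str, permission: str, resource: str) -> None:
    if permission not in GRANTS.get(principal, set()):
        # Full detail goes to the audit log, never to the caller
        AUDIT_LOG.append(f"{principal} denied {permission} on {resource}")
        # Caller-facing message is intentionally generic
        raise PermissionError("resource does not exist or you are not authorized")

check("alice", "bucket:read", "reports")   # allowed, returns silently
try:
    check("alice", "bucket:write", "reports")
except PermissionError as exc:
    print(exc)          # resource does not exist or you are not authorized
print(AUDIT_LOG[-1])    # alice denied bucket:write on reports
```

    The trade-off donavanm describes is visible here: the caller cannot distinguish "no such bucket" from "no permission", which is exactly the point.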

    the8472 · 2 days ago

    Even AWS itself does better than this, but only on some services. They send an encrypted error which you can then decrypt with admin permissions to get those details.

    Atotalnoob · 18 hours ago

    Just add this to the end of the error message: “If this resource exists, you will need to add permission X.”

    milch · 2 days ago

    AWS throws errors that look like `arn:aws:iam:... is not authorized to call "$SERVICE_NAME:$API_NAME" on resource arn:aws:$SERVICE_NAME:...`. I think it's more complicated when you go cross-account and the receiving account doesn't have permissions set up (if the calling account doesn't have it set up, you get the same error). In any case, you would still find that information in the CloudTrail logs of the receiving account.

    hobs · 2 days ago

    Right, you can go to CloudTrail and probably get it, but I have definitely run into things like the service saying you don't have access to a resource or it doesn't exist, where randomly granting the account some other tangentially related permission magically fixes it. I've found that trying the UI and the API will sometimes give different errors to help, and neither is particularly more useful than the other.

    donavanm · 2 days ago

    Look into the AWS IAM “service description files”, aka SDF. That's exposed via the console Policy Builder or Policy Evaluator logic. The SDF _should_ encode all the context (e.g. resource attributes, principal metadata) that goes into the authz decision. The most common opaque issue you'll see is where one action has other required resources/actions. E.g. a single action attaching an EBS volume requires permission on both the instance and the volume, and _maybe_ a KMS key, with permissions spanning those services.

    winwang · 2 days ago

    Slightly disagree at least for GCP. It will error with a detailed permission, but you're not just going to add that -- you're going to add a role (standard, unless you have custom roles), which you technically have to map back to the permission you need. But also, those (standard) roles have many permissions in one, so you likely overprovision (though presumably by just a bit).

    ...If only we could do something like: dry run and surface all the required permissions, then grant them in one fell (granular) sweep.

    valenterry · 2 days ago

    > you're going to add a role (standard, unless you have custom roles), which you technically have to map back to the permission you need

    Which is terrible, btw. You don't "technically" have to do that; you really cannot add roles to custom roles, you can only add permissions to custom roles. Which makes it really hard to maintain the correctness of custom roles, since their permissions can and do change.

    > ...If only we could do something like: dry run and surface all the required permissions, then grant them in one fell (granular) sweep.

    GCP even has something like that, but I honestly think that standard roles are usually fine. Sometimes making things too fine grained is not good either. Semantics matter.

    da_chicken · 2 days ago

    > ...If only we could do something like: dry run and surface all the required permissions, then grant them in one fell (granular) sweep.

    The problem with that is that it can be difficult to know what you need, and it may be impossible to simulate in any practical sense. Like, sure, I can stand up a pair of test systems and fabricate every scenario I can possibly imagine, but my employer does want me to do other things this month. And what happens when one of the systems involves a third party?

    Really, the need is to be able to provision access after the relationship is established. It's weird that you need a completely new secret to change access. Imagine if this were Linux, and in order to access a directory you had to provision a new user. How narrow do you really think user security access would be in practical terms then?

    winwang · 1 day ago

    > the need is to be able to provision access after the relationship is established

    Could you go into more detail? At a base level interpretation, this is how it works already (you need a principal to provision access for...), but you presumably mean something more interesting?

    da_chicken · 19 hours ago

    With token-based access, you typically assign the role when the token is created. The access level the token has is typically locked at that point. If you're generating an API access token, you might specify the token is read-only. If you later decide that read/write access is needed, you need to generate a new token with the new access level and replace the token id and value in the client system.

    It's not difficult, but it's a much bigger pain in the ass than just changing access or changing role on a user.

    raverbashing · 2 days ago

    But obviously then the security people will raise a ruckus about any attempt to tell you what is wrong.

    (Which, OK, for an external-facing system is fine.)

    I can bet the huge prevalence of "system says no, and nothing tells you why" helps a lot with creating vulnerable systems.

    Systems need a "let person X do action Y" instead of having people wade through 10 options like SystemAdminActionAllow that don't mean anything to an end user.

    Uvix · 1 day ago

    Azure’s RBAC system usually tells you this, at least when accessing the Azure management APIs. (Other APIs using RBAC, like the Azure Storage or Key Vault ones, usually aren’t so accommodating. At least by their nature there’s usually only a handful of possible permissions to choose from.)

    levkk · 2 days ago

    Not usually; that's considered a potential attack vector, I believe. You're looking to minimize information leakage.

    UltraSane · 2 days ago

    AWS has a neat feature to analyze cloudtrail logs to determine needed permissions.

    azemetre · 2 days ago

    What's the over/under that said engineer could solve two medium leetcodes in under an hour?

    Pathogen-David · 1 day ago

    If the GitHub Actions temporary token does not have a workflow-defined permissions scope, it defaults to either a permissive or a restricted scope based on the repository's setting [0]. This setting can also be configured at the organization level to restrict all repos owned by the org.

    Historically the only choice was permissive by default, so this is unfortunately the setting used by older organizations and repos.

    When a new repo is created, the default is inherited from the parent organization, so this insecure default tends to stick around if nobody bothers to change it. (There is no user-wide setting, so new repos owned by a user will use the restricted default. I believe newly created orgs use the better default.)

    [0]: https://docs.github.com/en/actions/security-for-github-actio...
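    For repos stuck on the permissive default, individual workflows can also opt themselves down. A minimal sketch of a workflow-level `permissions` block (the workflow and job names here are made up for illustration):

```yaml
# Sketch: pin the GITHUB_TOKEN to least privilege at the workflow level,
# overriding whatever permissive default the repo or org inherited.
name: ci
on: [push]

permissions:
  contents: read   # every scope not listed here is implicitly 'none'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

    A top-level `permissions` key like this applies to every job unless a job overrides it, which makes it a cheap blanket mitigation even when the org-level default can't be changed.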

    beaugunderson · 1 day ago

    Temporary action tokens have full write by default; you have to explicitly opt for a read-only version.

        > Read and write permissions
        > Workflows have read and write permissions in the repository for all scopes.
    
    If you read this line of the documentation (https://docs.github.com/en/actions/security-for-github-actio...) you might think otherwise:

        > If the default permissions for the GITHUB_TOKEN are restrictive, you may have to elevate the permissions to allow some actions and commands to run successfully.
    
    But I can confirm that in our GitHub organization "Read and write permissions" was the default, and thus that line of documentation makes no sense.

    Elucalidavah · 2 days ago

    > For their fix, they disabled debug logs

    For their quick fix, hopefully not for their final fix.

    stogot · 11 hours ago

    The 2023 Microsoft hack (for which CISA thoroughly called them out over poor security) was also similar to this. Their blog post trying to explain what happened left so many unanswered questions.

    arccy · 1 day ago

    Just goes to show how lax Microsoft is about their security. Nobody should trust them.

  • ashishb · 2 days ago

    I am getting more and more convinced that CI and CD should be completely separate environments. Compromise of CI should not lead to token leaks related to CD.

    mdaniel · 2 days ago

    This area is near and dear to my heart, and I would offer that the solution isn't to decouple CD into its own special little thing, but rather to make CD "multi factor", in that it must present "sub":"repo:octo-org/octo-repo:environment:prod" [1], and feel free to sprinkle in any other [fun claims][] you'd like to harden that system.

    1: https://docs.github.com/en/actions/security-for-github-actio...

    fun claims: https://github.com/github/actions-oidc-debugger#readme
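    On the cloud side, that `sub` claim check typically lives in the trust policy of the role the workflow assumes. A hedged sketch of what this can look like for an AWS IAM role trusting GitHub's OIDC provider (the account ID and repo name are placeholders, not from the article):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::111122223333:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
        "token.actions.githubusercontent.com:sub": "repo:octo-org/octo-repo:environment:prod"
      }
    }
  }]
}
```

    With a condition like this, even a stolen workflow token from a different repo or environment fails the `sub` match and cannot assume the deploy role, which is the "multi factor" effect being described.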

    ashishb · 2 days ago

    Doable, but I would prefer complete isolation for simplicity.

    thund · 2 days ago

    There are ways to isolate code, CI, and CD from each other; it's just not as easy as setting up the classic single repo. One can use multiple repos, for example, or run CI and CD with different products.

    nrvn · 1 day ago

    This is essentially what separation of duties (and concerns) looks like, and it is how some of the good example projects work. Specific techniques, tooling, and boundaries between CI and CD vary depending on the nature of the end product, but conceptually you are absolutely right.

  • junto · 2 days ago

    They weren’t kidding on the response time. Very impressive from GitHub.

    belter · 2 days ago

    Not very impressive to have an exposed public token with full write credentials...

    toomuchtodo · 2 days ago

    Perfect security does not exist. Their security system (people, tech) operated as expected with an impressive response time. Room for improvement, certainly, but there always is.

    Edit: Success is not the absence of vulnerability, but introduction, detection, and response trends.

    (Github enterprise comes out of my budget and I am responsible for appsec training and code IR, thoughts and opinions always my own)

    timewizard · 2 days ago

    > Perfect security does not exist.

    Having your CI/CD pipeline and your git repository service be so tightly bound creates security implications that do not need to exist.

    Further, half the point of physical security is tamper evidence, something entirely lost here.

    Aeolun · 2 days ago

    I find that this is always easy to say from the perspective of the security team. Sure, it would be more secure to develop like that, but also tons more painful for both dev and user.

    timewizard · 2 days ago

    I don't code anymore. I like making devs suffer. And this is all good for the user. ;)

    koolba · 2 days ago

    > Success is not the absence of vulnerability, but introduction, detection, and response trends.

    Don’t forget limitation of blast radius.

    When shit hits the proverbial fan, it’s helpful to limit the size of the room.

    toomuchtodo · 2 days ago

    Yeah, I agree compartmentalization, least privilege, and sound architecture decisions are a component of reducing the pain when you get popped. It’s never if, just when.

    belter · 2 days ago

    > Their security system (people, tech) operated as expected

    You mean not finding the vulnerability in the first place?

    This would allow:

    - Compromise intellectual property by exfiltrating the source code of all private repositories using CodeQL.

    - Steal credentials within GitHub Actions secrets of any workflow job using CodeQL, and leverage those secrets to execute further supply chain attacks.

    - Execute code on internal infrastructure running CodeQL workflows.

    - Compromise GitHub Actions secrets of any workflow using the GitHub Actions Cache within a repo that uses CodeQL.

    >> Success is not the absence of vulnerability, but introduction, detection, and response trends.

    This isn’t a philosophy, it’s PR spin to reframe failure as progress...

    toomuchtodo · 2 days ago

    This is not great based on the potential exposure, but also not the end of the world. You’re free to your opinion of course wrt severity and impact, but folks aren’t going to leave GitHub over this in any material fashion imho. They had a failure, they will recover from it and move on. It’s certainly not PR from me, I don’t work for nor have any financial interest in GH or MS. I am a security person though, these are my opinions based on doing this for ~10 years (I am consistently exposed to security gore in my work), and we likely have an expectations disconnect.

    As a customer, I’m not going to lose sleep over it. I’m going to document for any audits or other governance processes and carry on. I operate within "commercially reasonable" context for this work. Security is just very hard in a Sisyphus sort of way. We cannot not do it, but we also cannot be perfect, so there is always going to be vigorous debate over what enough is.

    1a527dd5 · 2 days ago

    Trying my best not to break the no snark rule [1], but I'm sure your code is 100% bullet proof against all current and future-yet-invented-attacks.

    [1] _and failing_.

    atoav · 2 days ago

    Nobody is immune to mistakes, but a certain class of mistakes¹ should never happen to anyone who should know better. And that, in my book, is anybody whose code is used by more people than themselves. I am not saying devs aren't allowed to make stupid mistakes, but if we let civil engineers have their bridges collapse with a "shit happens" attitude, trust in civil engineering would be questionable at best. So yeah, shit happens to us devs, but we should be ashamed if it was preventable by simply knowing the basics.

    So my opinion is anybody who writes code that is used by others should feel a certain danger-tingle whenever a secret or real user data is put literally anywhere.

    To all beginners that just means that when handling secrets, instead of pressing on, you should pause and make an exhaustive list of who would have read/write access to the secret under which conditions and whether that is intended. And with things that are world-readable like a public repo, this is especially crucial.

    Other places to watch may be your shell's history, the contents of your environment variables, whatever you copy-paste into the browser search bar/application/LLM/chat/comment section of your choice, etc.

    If you absolutely have to store secrets/private user data in files within a repo it is a good idea to add the following to your .gitignore:

      *.private
      *.private.*
     
    And then every such file has to have ".private." within the filename (e.g. credentials.private.json). This not only marks it for yourself, it also prevents you from mixing up critical and mundane configuration.

    But better is to spend a day to think about where secrets/user data really should be stored and how to manage them properly.

    ¹: a non-exhaustive list of other such mistakes: mistaking XOR for encryption, storing passwords in plaintext, using hardcoded credentials, relying on obscurity for security, sending data unencrypted over HTTP, not hashing passwords, using weak hash functions like MD5 or SHA-1, no input validation for stuff that goes into your database, trusting user input blindly, buffer overflows due to unchecked input, lack of access control, no user authentication, using default admin credentials, running all code as administrator/root without dropping privileges, relying on client-side validation for security, using self-rolled cryptographic algorithms, mixing authentication and authorization logic, no session expiration or timeout, predictable session IDs, no patch management or updates, wide-open network shares, exposing internal services to the internet, trusting data from cookies or query strings without verification, etc.

    immibis · 2 days ago

    > no input validation for stuff that goes into your database

    I'd put "conflating input validation with escaping" on this list, and then the list fails the list because the list conflates input validation with escaping.

    atoav · 2 days ago

    Good point; as I mentioned, this is a non-exhaustive list. Input validation and related topics like encodings, escaping, etc. could fill a list single-handedly.

  • helsinki · 2 days ago

    As someone with the last name Prater—derived from Praetorian—I really wish I owned praetorian.com.

    smoyer · 2 days ago

    Their gokart project was awesome!

    ratg13 · 2 days ago

    You would have had to have this thought prior to the release of the movie “The Net” in 1995

  • udev4096 · 2 days ago

    Using public GitHub Actions is just asking for trouble, even more so without analyzing the workflow's procedure. Instead, host one yourself using Woodpecker or countless other great CI builders (Circle, Travis, GitLab, etc.).

  • ryao · 2 days ago

    I put CodeQL in use in OpenZFS PRs. This is not an issue for OpenZFS. None of our code is secret. :)

    asmosoinio · 2 days ago

    I don't think this is a good take: even if your code is not secret, the attacker could add anything to your code or release artifacts.

    Luckily it was quickly remedied at least.

  • atxtechbro · 2 days ago

    Is this fixed?

    lsllc · 2 days ago

    It's in the article (and the comments here) -- yes, it was remediated within 3 hours of being reported back in January by GitHub.

  • bloqs · 2 days ago

    This site's performance is so bad I can barely scroll.