Recent @dolftax Activity
A short list of patterns that beginners miss while writing Go
2 points • 0 comments
A hands-on introduction to static code analysis
185 points • 31 comments
"And we’re still accepting late applications if you’ve always wanted to do YC but couldn’t move out to the bay area" - Aaron from YC.
> There's a better solution: use open-source cli tools that do just that!
We do not deny that you can run the open-source tools locally, be it a one-line command or setting up pylint or flake8 with dedicated configurations. DeepSource is a tool meant to eliminate the need to set up all those open-source tools locally or in your CI pipeline, so that you don't need to:
- Fish for issues amongst hundreds of lines of logs in the CI
- Figure out and update linter config to remove duplicates and false positives (for ex: Bandit flags `assert statement used` in a test file, which is a false positive; Bandit doesn't know that it is a test file by default)
- Some issues need a better description of why they are issues at all. For ex: why should default file permissions be 0600? The justification matters.
- By default on every commit or pull request, linters run on all the files.
- If an issue occurs in, say, 50 places, one has to fix each occurrence manually.
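As a concrete illustration of the per-tool configuration this refers to, a Bandit INI file (passed via `bandit --ini .bandit`) might look like the sketch below. The keys follow Bandit's config format but should be treated as illustrative; note that `skips` applies project-wide, and scoping a skip like B101 to test files only is exactly the awkward part:

```ini
# .bandit (illustrative sketch)
[bandit]
# B101 is "assert_used" -- noisy in test suites, so silence it,
# at the cost of also silencing it in application code.
skips = B101
# Alternatively, exclude the tests directory from scanning entirely.
exclude = /tests
```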
> 1. 520 Python checks? Use `wemake-python-styleguide` (wrapper around flake8) that has bigger amount of checks: https://github.com/wemake-services/wemake-python-styleguide There's also `pylint` with a set of awesome checks as well.
Our focus at the moment is not on style issues. In fact, among the categories of issues we raise (anti-patterns, bug-risks, performance, security, style, documentation), style issues are the most debated by our users, as style is quite subjective. We're thinking of making style issues opt-in instead of on by default, and are working on running formatters like `black`, `yapf`, .. with a single line of config in `.deepsource.toml`. Our analyzer team actively adds custom rules which you don't get from the open-source tools. The following issues, for example:
- Raising another exception when an `assert` fails is ineffective. For ex: `assert isinstance(num_channels, int), ValueError('Number of image channels needs to be an integer')`
- If the condition is not satisfied, the user would expect a `ValueError`, but `AssertionError: Number of image channels needs to be an integer` is raised instead
- `yield` used inside a comprehension (which breaks code in Python 3.8)
- Write operation on file that is opened in read-only mode
- I/O detected on a closed file descriptor
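To make the first of these concrete, here is a minimal sketch of the anti-pattern and the fix (the function name is hypothetical, not DeepSource's output):

```python
def validate_channels(num_channels):
    # Anti-pattern: the ValueError instance only becomes the message of
    # an AssertionError; a ValueError is never actually raised, and the
    # entire check disappears when Python runs with -O.
    #
    #   assert isinstance(num_channels, int), \
    #       ValueError('Number of image channels needs to be an integer')

    # The fix: raise the intended exception explicitly.
    if not isinstance(num_channels, int):
        raise ValueError('Number of image channels needs to be an integer')
    return num_channels
```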
> 2. Type checking? Use `mypy`: it just a single command!
Sure, if one prefers running it locally or as part of their CI. But if you already use DeepSource to flag issues, it can be enabled with a single line in the `.deepsource.toml` file.
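As a sketch of what that config looks like (the keys follow the shape of DeepSource's `.deepsource.toml` schema at the time of writing, but treat them as illustrative rather than authoritative):

```toml
version = 1

[[analyzers]]
name = "python"
enabled = true

  [analyzers.meta]
  runtime_version = "3.x.x"
  type_checker = "mypy"
```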
> 3. Autofixing? Use `black` / `autopep8` / `autoflake` and you can use `pybetter` to have the same ~15 auto-fix rules. But, it is completely free and open-source
We are working on adding support for autopep8, black and autoflake in the coming weeks. They mostly auto-patch stylistic issues. Thanks for letting us know about pybetter. It looks like a great tool and fixes ~9 issues. DeepSource's Autofix aims to fix more than three-fourths of the issues we detect, and our Python analyzer detects 522 issues. We have a dedicated engineering team actively working on the analyzers. As of today, the following are some of the issues our Python analyzer can autofix (which I couldn't find among the open-source tools):
- No use of `self`
- Usage of dangerous default argument
- Module imported but unused
- Function contains unused argument
- Debugger import detected
- Debugger activation detected
- Unnecessary comprehension
- Unnecessary literal
- Unnecessary call
- Unnecessary typecast
- Bad comparison test
- Empty module
- Built-in function `len` used as condition
- Unnecessary `fstring`
- `raise NotImplemented` should be `raise NotImplementedError`
- `assert` statement used outside of tests
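For instance, the "dangerous default argument" fix conventionally rewrites a mutable default into a `None` sentinel. A minimal sketch (function names hypothetical; this illustrates the pattern, not DeepSource's actual patch):

```python
# Before (flagged): the default list is created once at function
# definition time and shared across every call.
def append_item_buggy(item, items=[]):
    items.append(item)
    return items

# After (fixed): use a None sentinel so each call gets a fresh list.
def append_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```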
Same goes with Go and other analyzers we support.
> I don't like this whole idea of such tools (both technically and ethically)
> Why would anyone want to send all their codebase to 3rd party? We used to call it a security breach back in the days.
We follow strict security practices. In a gist: 1) we do not store your code; 2) source code is pulled in an isolated environment that has no access to any of our internal systems or the external network; 3) as soon as the analysis is completed, the environment is destroyed and all logs are purged. Also, there are many tools developers use every day (Travis CI, Circle CI, GitHub) where the source code is sent to the cloud, and I don't think it is accurate to call that a security breach. That said, we have an on-premise setup of DeepSource on the roadmap. We're working on SOC 2 Type 2 compliance as well.
> On moral side, this (and similar) projects look like thin wrappers around open-source tools but with a monetisation model. How much do these companies contribute back to the original authors of pylint, mypy, flake8? Ones who created and maintained them for years. I will be happy to be wrong here
We have kept the tool completely free to use for open-source projects. We've also partnered with GitHub Education and made it free for students. We're an early-stage company trying to build a business in automating the objective parts of code review and making it easier for every developer to adopt and use static analysis. In all transparency, we had plans to sponsor open-source projects but got sidetracked for various reasons. We will be backing some of the open-source projects in the next couple of weeks.
DeepSource integrates with GitHub checks, and via the dashboard you can select the issue types (anti-patterns, bug risks, performance and security issues, style, type checks and documentation) which, when detected, will cause analysis runs to fail and pull requests to be blocked.
We'll tweet about it at https://twitter.com/deepsourcehq
There are two GitHub apps we maintain. One with read access (DeepSource) and one with write access (DeepSource Autofix).
By default, on signup, you would be installing the app with read access -- this enables us to pull source code from GitHub on every commit and pull-request, run analysis and report issues as GitHub checks. This is sufficient if you would like to use DeepSource only to flag issues.
With the release of Autofix -- when a fix is available for a flagged issue, DeepSource creates a pull request to the repository with the patch. For this, you would be asked to install the app with write access (DeepSource Autofix). Note that DeepSource always creates a separate branch with the fixes and opens a pull request from it. We do not perform any write operations beyond the above-mentioned scope.
Sure. I've left you an email.
We went ahead with integration with providers like GitHub and GitLab to have these checks in a central place as it is the easiest way for a team to adopt a tool like ours. Also, just having a local or IDE plugin doesn't ensure these issues never make it to trunk unless everyone in the team follows it strictly.
That said, for the convenience of developers, we're working on the ability to run the analysis and the fixes using our CLI. This opens the door to using the CLI to build IDE plugins in the near future.
> Why is this needed? Can’t you imply the necessary analyzers from my codebase?
Sure. We can probably infer the languages used in the repository, but we need metadata like test glob patterns, exclude patterns, and runtime versions (Python 2 vs Python 3) to improve the accuracy of issues. For ex: usage of an assert statement in application logic is discouraged, as asserts are removed when compiling to optimised byte code (`python -O`, which produces `*.pyo` files in Python 2). Ideally, assert statements should be used only in tests. Also, we haven't found a way to infer Python 2 vs Python 3 accurately. Can you think of a way? That would be helpful.
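To illustrate why this matters: under `python -O`, the `assert` line below is compiled away entirely, so the guard silently disappears in production, while an explicit check survives. (Function names are hypothetical; this is a sketch of the general rule.)

```python
def withdraw(balance, amount):
    # Stripped out when Python runs with -O: no guard in production.
    assert amount <= balance, "insufficient funds"
    return balance - amount

def withdraw_checked(balance, amount):
    # Survives optimization: application logic should prefer this.
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount
```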
> Also, the name implies use of Deep Networks and AI. Am I mistaken? If not, what kind of AI is used here? Seems like just an automatic runner of static analysis tools.
It’s just the name :) We do not use Machine Learning or AI at the moment — the reason being we’re optimizing for high accuracy, and a rules engine that uses AST parsing helps us do that reliably. We do plan to use learning in the future to capture data around which issues are being fixed the most and which are not, and then show issues in the most relevant order to users depending on their context.
Launch HN: DeepSource (YC W20) – Find and fix issues during code reviews
105 points • 27 comments
How to Setup Vault with Kubernetes
3 points • 0 comments
Founder's guide to building a developer tools business
3 points • 0 comments
Sure. The least we expect from any service sending webhooks is a built-in retry strategy; GitHub doesn't have one. We were thinking of building this ourselves internally, but if someone takes care of it for you reliably, why not.
For API Tracker, even if their service goes down for a short while, it isn't good for business. Though it's been only a few weeks using API Tracker, we have had zero failed webhook deliveries. They say they've designed their systems with this as a primary goal, of course. What if AWS or GCP goes down? It's a matter of trust and SLAs.
We've been using API Tracker in production for a few weeks now. The primary use case for us is reliably handling webhooks from GitHub, which our product relies on heavily (app installation, commit and pull request events).
Unfortunately, GitHub doesn't retry failed webhooks, and when our service goes down for a few seconds, thousands of webhooks fail and pile up. GitHub doesn't provide an API to query the failed webhooks and retry them, either. We had to go through the painstaking task of visiting GitHub's app dashboard and clicking retry on each webhook, one by one.
With API Tracker in place, we've updated our GitHub app's webhook delivery URL to send the webhooks to API Tracker, and they forward them to our services. In the worst case, when our service goes down for a while, API Tracker gracefully retries all the failed webhooks.
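The retry behaviour described above can be sketched as a small exponential-backoff loop (all names are hypothetical; this is an illustration of the pattern, not API Tracker's implementation):

```python
import time

def backoff_schedule(max_attempts, base=1.0):
    # Exponential backoff delays: base, 2*base, 4*base, ...
    return [base * (2 ** n) for n in range(max_attempts)]

def deliver_with_retry(send, payload, max_attempts=5, sleep=time.sleep):
    """Call `send(payload)` until it succeeds, backing off between tries.

    `send` returns True on a 2xx response from the receiver; a forwarding
    proxy would wrap the actual HTTP POST to the delivery URL here.
    """
    for delay in backoff_schedule(max_attempts):
        if send(payload):
            return True
        sleep(delay)  # receiver may be down; wait before retrying
    return False
```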
Google ends its free Wi-Fi program Station
2 points • 0 comments
“isn't a title of this post” isn't a title of this post
7 points • 0 comments
Package Management in Go
1 point • 0 comments
Positional-only arguments in Python
7 points • 0 comments