feat: draft of blog, "webforJ: AI-assisted, human-owned" #740

Draft
gosteenBASIS wants to merge 1 commit into main from blog-webforj-human-owned

Conversation

@gosteenBASIS
Member

Initial draft of blog post, "webforJ: AI-assisted, human-owned"

Still needs some more writing and editing, but I wanted to make it available for feedback at this point.

@gosteenBASIS self-assigned this Feb 12, 2026
@gosteenBASIS added the Blog Post label Feb 12, 2026
@gosteenBASIS force-pushed the blog-webforj-human-owned branch from c35d1a1 to 5a35618 on February 12, 2026 00:07
I recently came across [get-shit-done](https://github.com/glittercowboy/get-shit-done) and [Auto-Claude](https://github.com/AndyMik90/Auto-Claude), which are meta-prompting systems that promise to automate entire development workflows.
At first glance, they're impressive. But they also made me wonder: what's happening to the quality of the open source ecosystem as AI-generated code floods platforms like npm, PyPI, and GitHub?

The answer, backed by recent research, is sobering. And it supports why we've made a strategic choice at webforJ: **AI-assisted development, but human-owned code.**

⚠️ [vale] reported by reviewdog 🐶
[Google.Colons] ': A' should be in lowercase.

AI-generated code often contains mistakes in logic and correctness, with increased occurrences of logic errors, misconfigurations, and poor error and exception handling.

Stack Overflow's developer survey found that 66% of developers using AI tools ran into "AI solutions that are almost right, but not quite," and 45% agreed that "Debugging AI-generated code is more time-consuming."
Only 4% responded that they have not encountered any problems when using AI tools.

📝 [vale] reported by reviewdog 🐶
[Google.Contractions] Use 'haven't' instead of 'have not'.
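
To make that "almost right" failure mode concrete, here is a hypothetical illustration (invented for this post, not taken from the survey or from any real AI output) of generated Java code that compiles and usually works, but quietly mishandles errors:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical example of "almost right" generated code: it looks reasonable at
// a glance, but it swallows every exception and returns an empty list, hiding
// real I/O failures from the caller.
public class ConfigLoader {

    public static List<String> readConfigLines(Path path) {
        try {
            return Files.readAllLines(path);
        } catch (Exception e) {
            // Silently ignoring the failure is the kind of subtle error-handling
            // mistake that is easy to miss in review and expensive to debug later.
            return List.of();
        }
    }
}
```

Nothing here fails to compile or crashes at runtime; the problem only surfaces when a missing or unreadable configuration file is silently treated the same as an empty one.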

Large Language Models sometimes "hallucinate" information, including dependencies and package names.
Malicious actors can take advantage of common AI hallucinations by creating malicious packages under those names, tricking inattentive software developers into installing them without verifying their legitimacy.
This practice is called [slopsquatting](https://en.wikipedia.org/wiki/Slopsquatting), a combination of the words "AI Slop" and "[typosquatting](https://en.wikipedia.org/wiki/Typosquatting)," an older technique of registering misspelled domain names.
The research paper [We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs](https://arxiv.org/abs/2406.10279) found that almost 20% of recommended packages across more than half a million code samples did not exist.

📝 [vale] reported by reviewdog 🐶
[Google.Contractions] Use 'we've' instead of 'We Have'.

📝 [vale] reported by reviewdog 🐶
[Google.Contractions] Use 'didn't' instead of 'did not'.
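
One practical defense against slopsquatting is to confirm that an AI-suggested dependency actually exists before adding it to a build. The sketch below illustrates that idea; it is not part of webforJ or of this draft, the coordinates are made up, and it assumes Maven Central's public search endpoint at search.maven.org:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Minimal sketch: look up an AI-suggested artifact on Maven Central before
// trusting it. The endpoint, query syntax, and response shape are assumptions
// based on the public search.maven.org API.
public class DependencyCheck {

    public static void main(String[] args) throws Exception {
        String group = "com.example";    // hypothetical AI-suggested groupId
        String artifact = "magic-utils"; // hypothetical AI-suggested artifactId

        String query = URLEncoder.encode(
                "g:\"" + group + "\" AND a:\"" + artifact + "\"",
                StandardCharsets.UTF_8);
        URI uri = URI.create(
                "https://search.maven.org/solrsearch/select?q=" + query + "&rows=1&wt=json");

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(uri).GET().build(),
                      HttpResponse.BodyHandlers.ofString());

        // Crude existence check: the JSON response reports how many artifacts matched.
        boolean exists = response.statusCode() == 200
                && !response.body().contains("\"numFound\":0");
        System.out.println(group + ":" + artifact
                + (exists ? " exists on Maven Central." : " was not found -- do not add it blindly."));
    }
}
```

Even a crude check like this catches the most dangerous case: a package name that was hallucinated outright and has since been registered by someone else.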

## webforJ's AI policy

AI coding assistants are dramatically changing the way that people create software, and the engineers working at webforJ are no exception.
While we are excited to leverage these new tools and capabilities, we are committed to our policy of **AI-assisted development and human-owned code**.

📝 [vale] reported by reviewdog 🐶
[Google.Contractions] Use 'we're' instead of 'we are'.


Not only is this a low-risk area to make use of AI, but doing so also provides us with valuable insight into the experience of our users.
Since AI usage has become the norm, we expect that webforJ users are also using AI in the development of their Java-based web applications.
By putting ourselves in their shoes, we can uncover problems and ensure that webforJ works with AI as smoothly as possible.

⚠️ [vale] reported by reviewdog 🐶
[webforJ.BeDirect] Avoid using 'ensure that'. Focus more on explicitly giving details about the feature.

AI excels at creating low-effort first drafts of new ideas.
This can improve our ability to test the viability of new features before investing more time into the quality and accuracy of the code.

This is another area where the low risk allows us to prioritize speed and turnaround time, giving us the opportunity to experiment freely and quickly in order to find promising areas to invest development time in.

⚠️ [vale] reported by reviewdog 🐶
[Google.WordList] Use 'to' instead of 'in order to'.


Much of our documentation follows similar patterns, especially the webforJ [components](/docs/components/overview) pages.
Even though AI tools rarely provide a finished product, they can accelerate our documentation process by essentially providing the boilerplate code of a documentation page.
This is an area where strict style guides and review processes ensure that, regardless of the workflow used to create a page, the final product meets our standards.

⚠️ [vale] reported by reviewdog 🐶
[webforJ.BeDirect] Avoid using 'ensure that'. Focus more on explicitly giving details about the feature.

#### Code review

Just as it's important for humans to review AI output, AI can be extremely valuable in reviewing human output.
Using AI as a code reviewer can help flag issues that a human may miss, and ensures that a PR meets basic standards before another developer looks at it, saving them the cognitive load of addressing minor issues or oversights.

⚠️ [vale] reported by reviewdog 🐶
[webforJ.BeDirect] Avoid using 'ensures that'. Focus more on explicitly giving details about the feature.


## How webforJ supports your use of AI

In addition to our own use of AI, we understand that AI development tools are shaping the future of software development, and we are always looking for ways to improve the experience of developers using webforJ with these tools.

📝 [vale] reported by reviewdog 🐶
[Google.Contractions] Use 'we're' instead of 'we are'.
