497 Commits

Author SHA1 Message Date
github-actions[bot]
9e3f5a1a5b chore: Update Python dependencies 2026-03-10 10:34:35 +00:00
kleinanzeigen-bot-tu[bot]
ddbe88e422 chore: ✔ Update jaraco-context 6.1.0 -> 6.1.1 (#862)
✔ Update jaraco-context 6.1.0 -> 6.1.1 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-03-09 21:24:02 +01:00
dependabot[bot]
712b96e2f4 ci(deps): bump github/codeql-action from 4.32.5 to 4.32.6 in the all-actions group (#864)
Bumps the all-actions group with 1 update:
[github/codeql-action](https://github.com/github/codeql-action).

Updates `github/codeql-action` from 4.32.5 to 4.32.6
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v4.32.6</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.3">2.24.3</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3548">#3548</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's
changelog</a>.</em></p>
<blockquote>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>[UNRELEASED]</h2>
<p>No user facing changes.</p>
<h2>4.32.6 - 05 Mar 2026</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.3">2.24.3</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3548">#3548</a></li>
</ul>
<h2>4.32.5 - 02 Mar 2026</h2>
<ul>
<li>Repositories owned by an organization can now set up the
<code>github-codeql-disable-overlay</code> custom repository property to
disable <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis for CodeQL</a>. First, create a custom repository
property with the name <code>github-codeql-disable-overlay</code> and
the type &quot;True/false&quot; in the organization's settings. Then in
the repository's settings, set this property to <code>true</code> to
disable improved incremental analysis. For more information, see <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">Managing
custom properties for repositories in your organization</a>. This
feature is not yet available on GitHub Enterprise Server. <a
href="https://redirect.github.com/github/codeql-action/pull/3507">#3507</a></li>
<li>Added an experimental change so that when <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> fails on a runner — potentially due to
insufficient disk space — the failure is recorded in the Actions cache
so that subsequent runs will automatically skip improved incremental
analysis until something changes (e.g. a larger runner is provisioned or
a new CodeQL version is released). We expect to roll this change out to
everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3487">#3487</a></li>
<li>The minimum memory check for improved incremental analysis is now
skipped for CodeQL 2.24.3 and later, which has reduced peak RAM usage.
<a
href="https://redirect.github.com/github/codeql-action/pull/3515">#3515</a></li>
<li>Reduced log levels for best-effort private package registry
connection check failures to reduce noise from workflow annotations. <a
href="https://redirect.github.com/github/codeql-action/pull/3516">#3516</a></li>
<li>Added an experimental change which lowers the minimum disk space
requirement for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a>, enabling it to run on standard GitHub Actions
runners. We expect to roll this change out to everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3498">#3498</a></li>
<li>Added an experimental change which allows the
<code>start-proxy</code> action to resolve the CodeQL CLI version from
feature flags instead of using the linked CLI bundle version. We expect
to roll this change out to everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3512">#3512</a></li>
<li>The previously experimental changes from versions 4.32.3, 4.32.4,
3.32.3 and 3.32.4 are now enabled by default. <a
href="https://redirect.github.com/github/codeql-action/pull/3503">#3503</a>,
<a
href="https://redirect.github.com/github/codeql-action/pull/3504">#3504</a></li>
</ul>
<h2>4.32.4 - 20 Feb 2026</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.2">2.24.2</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3493">#3493</a></li>
<li>Added an experimental change which improves how certificates are
generated for the authentication proxy that is used by the CodeQL Action
in Default Setup when <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries are configured</a>. This is expected to generate more
widely compatible certificates and should have no impact on analyses
which are working correctly already. We expect to roll this change out
to everyone in February. <a
href="https://redirect.github.com/github/codeql-action/pull/3473">#3473</a></li>
<li>When the CodeQL Action is run <a
href="https://docs.github.com/en/code-security/how-tos/scan-code-for-vulnerabilities/troubleshooting/troubleshooting-analysis-errors/logs-not-detailed-enough#creating-codeql-debugging-artifacts-for-codeql-default-setup">with
debugging enabled in Default Setup</a> and <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries are configured</a>, the &quot;Setup proxy for
registries&quot; step will output additional diagnostic information that
can be used for troubleshooting. <a
href="https://redirect.github.com/github/codeql-action/pull/3486">#3486</a></li>
<li>Added a setting which allows the CodeQL Action to enable network
debugging for Java programs. This will help GitHub staff support
customers with troubleshooting issues in GitHub-managed CodeQL
workflows, such as Default Setup. This setting can only be enabled by
GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3485">#3485</a></li>
<li>Added a setting which enables GitHub-managed workflows, such as
Default Setup, to use a <a
href="https://github.com/dsp-testing/codeql-cli-nightlies">nightly
CodeQL CLI release</a> instead of the latest, stable release that is
used by default. This will help GitHub staff support customers whose
analyses for a given repository or organization require early access to
a change in an upcoming CodeQL CLI release. This setting can only be
enabled by GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3484">#3484</a></li>
</ul>
<h2>4.32.3 - 13 Feb 2026</h2>
<ul>
<li>Added experimental support for testing connections to <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries</a>. This feature is not currently enabled for any
analysis. In the future, it may be enabled by default for Default Setup.
<a
href="https://redirect.github.com/github/codeql-action/pull/3466">#3466</a></li>
</ul>
<h2>4.32.2 - 05 Feb 2026</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.1">2.24.1</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3460">#3460</a></li>
</ul>
<h2>4.32.1 - 02 Feb 2026</h2>
<ul>
<li>A warning is now shown in Default Setup workflow logs if a <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registry is configured</a> using a GitHub Personal Access Token
(PAT), but no username is configured. <a
href="https://redirect.github.com/github/codeql-action/pull/3422">#3422</a></li>
<li>Fixed a bug which caused the CodeQL Action to fail when repository
properties cannot successfully be retrieved. <a
href="https://redirect.github.com/github/codeql-action/pull/3421">#3421</a></li>
</ul>
<h2>4.32.0 - 26 Jan 2026</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.0">2.24.0</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3425">#3425</a></li>
</ul>
<h2>4.31.11 - 23 Jan 2026</h2>
<ul>
<li>When running a Default Setup workflow with <a
href="https://docs.github.com/en/actions/how-tos/monitor-workflows/enable-debug-logging">Actions
debugging enabled</a>, the CodeQL Action will now use more unique names
when uploading logs from the Dependabot authentication proxy as workflow
artifacts. This ensures that the artifact names do not clash between
multiple jobs in a build matrix. <a
href="https://redirect.github.com/github/codeql-action/pull/3409">#3409</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0d579ffd05"><code>0d579ff</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3551">#3551</a>
from github/update-v4.32.6-72d2d850d</li>
<li><a
href="d4c6be7cf1"><code>d4c6be7</code></a>
Update changelog for v4.32.6</li>
<li><a
href="72d2d850d1"><code>72d2d85</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3548">#3548</a>
from github/update-bundle/codeql-bundle-v2.24.3</li>
<li><a
href="23f983ce00"><code>23f983c</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3544">#3544</a>
from github/dependabot/github_actions/dot-github/wor...</li>
<li><a
href="832e97ccad"><code>832e97c</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3545">#3545</a>
from github/dependabot/github_actions/dot-github/wor...</li>
<li><a
href="5ef38c0b13"><code>5ef38c0</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3546">#3546</a>
from github/dependabot/npm_and_yarn/tar-7.5.10</li>
<li><a
href="80c9cda739"><code>80c9cda</code></a>
Add changelog note</li>
<li><a
href="f2669dd916"><code>f2669dd</code></a>
Update default bundle to codeql-bundle-v2.24.3</li>
<li><a
href="bd03c44cf4"><code>bd03c44</code></a>
Merge branch 'main' into
dependabot/github_actions/dot-github/workflows/actio...</li>
<li><a
href="102d7627b6"><code>102d762</code></a>
Bump tar from 7.5.7 to 7.5.10</li>
<li>Additional commits viewable in <a
href="c793b717bc...0d579ffd05">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github/codeql-action&package-manager=github_actions&previous-version=4.32.5&new-version=4.32.6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-09 21:23:36 +01:00
Jens
71028ea844 fix: serialize downloaded ad timestamps as schema-compliant strings (#863)
## ℹ️ Description
- Link to the related issue(s): Issue #
- Fixes drift where `pdm run app download` wrote timestamp values in
YAML-native datetime form that could violate `schemas/ad.schema.json`
string expectations.
- Ensures downloaded ads persist `created_on`/`updated_on` as
JSON-serialized ISO-8601 strings and adds a regression test validating
written YAML against the schema.

## 📋 Changes Summary
- Updated downloader save path to use `ad_cfg.model_dump(mode = "json")`
before writing YAML in `src/kleinanzeigen_bot/extract.py`.
- Updated existing `download_ad` unit assertion to match JSON-mode
serialization.
- Added `test_download_ad_writes_schema_compliant_yaml` in
`tests/unit/test_extract.py` that writes a real tmp YAML file and
validates it against `schemas/ad.schema.json` with `jsonschema`.
- Added dev dependency `jsonschema>=4.26.0` (and lockfile updates).
- Dependencies/config updates introduced: new dev dependency
(`jsonschema`) for full schema validation in tests.
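The serialization change above can be sketched with a stdlib-only stand-in (the actual fix calls pydantic's `model_dump(mode = "json")`; the `dump_json_mode` helper and the sample `ad` dict here are illustrative, not the project's code):

```python
from datetime import datetime
from typing import Any


def dump_json_mode(data: dict[str, Any]) -> dict[str, Any]:
    """Minimal stand-in for pydantic's model_dump(mode="json"):
    datetime values become ISO-8601 strings, so a subsequent YAML dump
    writes plain strings that satisfy a string-typed schema field
    instead of YAML-native timestamps."""
    return {
        key: value.isoformat() if isinstance(value, datetime) else value
        for key, value in data.items()
    }


ad = {"title": "Bike", "created_on": datetime(2026, 3, 8, 23, 10, 16)}
print(dump_json_mode(ad)["created_on"])  # 2026-03-08T23:10:16
```

In JSON mode every value is reduced to a JSON-representable type, which is why the regression test can validate the written YAML directly against `schemas/ad.schema.json` with `jsonschema`.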

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


## Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

# Release Notes

* **Bug Fixes**
  * Improved ad data serialization to ensure consistent JSON format when
saving ad configurations.

* **Tests**
  * Added schema validation tests to verify ad YAML output compliance.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-03-08 23:10:16 +01:00
kleinanzeigen-bot-tu[bot]
e151f0d104 chore: Update Python dependencies (#861) 2026-03-07 17:55:15 +01:00
kleinanzeigen-bot-tu[bot]
5c4e0cc90d chore: ✔ Update pyinstaller-hooks-contrib 2026.1 -> 2026.2 (#860)
✔ Update pyinstaller-hooks-contrib 2026.1 -> 2026.2 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-03-04 08:22:58 +01:00
dependabot[bot]
9baba41e5e ci(deps): bump the all-actions group with 3 updates (#858)
Bumps the all-actions group with 3 updates:
[actions/upload-artifact](https://github.com/actions/upload-artifact),
[actions/download-artifact](https://github.com/actions/download-artifact)
and [github/codeql-action](https://github.com/github/codeql-action).

Updates `actions/upload-artifact` from 6.0.0 to 7.0.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/upload-artifact/releases">actions/upload-artifact's
releases</a>.</em></p>
<blockquote>
<h2>v7.0.0</h2>
<h2>v7 What's new</h2>
<h3>Direct Uploads</h3>
<p>Adds support for uploading single files directly (unzipped). Callers
can set the new <code>archive</code> parameter to <code>false</code> to
skip zipping the file during upload. Right now, we only support single
files. The action will fail if the glob passed resolves to multiple
files. The <code>name</code> parameter is also ignored with this
setting. Instead, the name of the artifact will be the name of the
uploaded file.</p>
<h3>ESM</h3>
<p>To support new versions of the <code>@actions/*</code> packages,
we've upgraded the package to ESM.</p>
<h2>What's Changed</h2>
<ul>
<li>Add proxy integration test by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/upload-artifact/pull/754">actions/upload-artifact#754</a></li>
<li>Upgrade the module to ESM and bump dependencies by <a
href="https://github.com/danwkennedy"><code>@​danwkennedy</code></a> in
<a
href="https://redirect.github.com/actions/upload-artifact/pull/762">actions/upload-artifact#762</a></li>
<li>Support direct file uploads by <a
href="https://github.com/danwkennedy"><code>@​danwkennedy</code></a> in
<a
href="https://redirect.github.com/actions/upload-artifact/pull/764">actions/upload-artifact#764</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Link"><code>@​Link</code></a>- made
their first contribution in <a
href="https://redirect.github.com/actions/upload-artifact/pull/754">actions/upload-artifact#754</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/upload-artifact/compare/v6...v7.0.0">https://github.com/actions/upload-artifact/compare/v6...v7.0.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bbbca2ddaa"><code>bbbca2d</code></a>
Support direct file uploads (<a
href="https://redirect.github.com/actions/upload-artifact/issues/764">#764</a>)</li>
<li><a
href="589182c5a4"><code>589182c</code></a>
Upgrade the module to ESM and bump dependencies (<a
href="https://redirect.github.com/actions/upload-artifact/issues/762">#762</a>)</li>
<li><a
href="47309c993a"><code>47309c9</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/upload-artifact/issues/754">#754</a>
from actions/Link-/add-proxy-integration-tests</li>
<li><a
href="02a8460834"><code>02a8460</code></a>
Add proxy integration test</li>
<li>See full diff in <a
href="b7c566a772...bbbca2ddaa">compare
view</a></li>
</ul>
</details>
<br />
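The direct-upload mode described in the v7 notes can be used with a step like the following (step path and file name are illustrative; `archive` is the input introduced in v7):

```yaml
# Upload a single file unzipped. With `archive: false` the artifact
# takes the name of the uploaded file, and the `name` input is ignored.
- uses: actions/upload-artifact@v7
  with:
    path: dist/report.pdf
    archive: false
```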

Updates `actions/download-artifact` from 7.0.0 to 8.0.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/download-artifact/releases">actions/download-artifact's
releases</a>.</em></p>
<blockquote>
<h2>v8.0.0</h2>
<h2>v8 - What's new</h2>
<h3>Direct downloads</h3>
<p>To support direct uploads in <code>actions/upload-artifact</code>,
the action will no longer attempt to unzip all downloaded files.
Instead, the action checks the <code>Content-Type</code> header ahead of
unzipping and skips non-zipped files. Callers wishing to download a
zipped file as-is can also set the new <code>skip-decompress</code>
parameter to <code>false</code>.</p>
<h3>Enforced checks (breaking)</h3>
<p>A previous release introduced digest checks on the download. If a
download hash didn't match the expected hash from the server, the action
would log a warning. Callers can now configure the behavior on mismatch
with the <code>digest-mismatch</code> parameter. To be secure by
default, we are now defaulting the behavior to <code>error</code> which
will fail the workflow run.</p>
<h3>ESM</h3>
<p>To support new versions of the @actions/* packages, we've upgraded
the package to ESM.</p>
<h2>What's Changed</h2>
<ul>
<li>Don't attempt to un-zip non-zipped downloads by <a
href="https://github.com/danwkennedy"><code>@​danwkennedy</code></a> in
<a
href="https://redirect.github.com/actions/download-artifact/pull/460">actions/download-artifact#460</a></li>
<li>Add a setting to specify what to do on hash mismatch and default it
to <code>error</code> by <a
href="https://github.com/danwkennedy"><code>@​danwkennedy</code></a> in
<a
href="https://redirect.github.com/actions/download-artifact/pull/461">actions/download-artifact#461</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/download-artifact/compare/v7...v8.0.0">https://github.com/actions/download-artifact/compare/v7...v8.0.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="70fc10c6e5"><code>70fc10c</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/461">#461</a>
from actions/danwkennedy/digest-mismatch-behavior</li>
<li><a
href="f258da9a50"><code>f258da9</code></a>
Add change docs</li>
<li><a
href="ccc058e5fb"><code>ccc058e</code></a>
Fix linting issues</li>
<li><a
href="bd7976ba57"><code>bd7976b</code></a>
Add a setting to specify what to do on hash mismatch and default it to
<code>error</code></li>
<li><a
href="ac21fcf45e"><code>ac21fcf</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/460">#460</a>
from actions/danwkennedy/download-no-unzip</li>
<li><a
href="15999bff51"><code>15999bf</code></a>
Add note about package bumps</li>
<li><a
href="974686ed50"><code>974686e</code></a>
Bump the version to <code>v8</code> and add release notes</li>
<li><a
href="fbe48b1d27"><code>fbe48b1</code></a>
Update test names to make it clearer what they do</li>
<li><a
href="96bf374a61"><code>96bf374</code></a>
One more test fix</li>
<li><a
href="b8c4819ef5"><code>b8c4819</code></a>
Fix skip decompress test</li>
<li>Additional commits viewable in <a
href="37930b1c2a...70fc10c6e5">compare
view</a></li>
</ul>
</details>
<br />
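The v8 digest enforcement can be sketched as follows (the artifact name is illustrative; `error` is the documented default for `digest-mismatch`, shown explicitly here):

```yaml
# Fail the workflow run if the downloaded artifact's hash does not
# match the digest expected by the server (secure-by-default in v8).
- uses: actions/download-artifact@v8
  with:
    name: build-output
    digest-mismatch: error
```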

Updates `github/codeql-action` from 4.32.4 to 4.32.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v4.32.5</h2>
<ul>
<li>Repositories owned by an organization can now set up the
<code>github-codeql-disable-overlay</code> custom repository property to
disable <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis for CodeQL</a>. First, create a custom repository
property with the name <code>github-codeql-disable-overlay</code> and
the type &quot;True/false&quot; in the organization's settings. Then in
the repository's settings, set this property to <code>true</code> to
disable improved incremental analysis. For more information, see <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">Managing
custom properties for repositories in your organization</a>. This
feature is not yet available on GitHub Enterprise Server. <a
href="https://redirect.github.com/github/codeql-action/pull/3507">#3507</a></li>
<li>Added an experimental change so that when <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> fails on a runner — potentially due to
insufficient disk space — the failure is recorded in the Actions cache
so that subsequent runs will automatically skip improved incremental
analysis until something changes (e.g. a larger runner is provisioned or
a new CodeQL version is released). We expect to roll this change out to
everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3487">#3487</a></li>
<li>The minimum memory check for improved incremental analysis is now
skipped for CodeQL 2.24.3 and later, which has reduced peak RAM usage.
<a
href="https://redirect.github.com/github/codeql-action/pull/3515">#3515</a></li>
<li>Reduced log levels for best-effort private package registry
connection check failures to reduce noise from workflow annotations. <a
href="https://redirect.github.com/github/codeql-action/pull/3516">#3516</a></li>
<li>Added an experimental change which lowers the minimum disk space
requirement for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a>, enabling it to run on standard GitHub Actions
runners. We expect to roll this change out to everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3498">#3498</a></li>
<li>Added an experimental change which allows the
<code>start-proxy</code> action to resolve the CodeQL CLI version from
feature flags instead of using the linked CLI bundle version. We expect
to roll this change out to everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3512">#3512</a></li>
<li>The previously experimental changes from versions 4.32.3, 4.32.4,
3.32.3 and 3.32.4 are now enabled by default. <a
href="https://redirect.github.com/github/codeql-action/pull/3503">#3503</a>,
<a
href="https://redirect.github.com/github/codeql-action/pull/3504">#3504</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's
changelog</a>.</em></p>
<blockquote>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>[UNRELEASED]</h2>
<p>No user facing changes.</p>
<h2>4.32.5 - 02 Mar 2026</h2>
<ul>
<li>Repositories owned by an organization can now set up the
<code>github-codeql-disable-overlay</code> custom repository property to
disable <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis for CodeQL</a>. First, create a custom repository
property with the name <code>github-codeql-disable-overlay</code> and
the type &quot;True/false&quot; in the organization's settings. Then in
the repository's settings, set this property to <code>true</code> to
disable improved incremental analysis. For more information, see <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">Managing
custom properties for repositories in your organization</a>. This
feature is not yet available on GitHub Enterprise Server. <a
href="https://redirect.github.com/github/codeql-action/pull/3507">#3507</a></li>
<li>Added an experimental change so that when <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> fails on a runner — potentially due to
insufficient disk space — the failure is recorded in the Actions cache
so that subsequent runs will automatically skip improved incremental
analysis until something changes (e.g. a larger runner is provisioned or
a new CodeQL version is released). We expect to roll this change out to
everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3487">#3487</a></li>
<li>The minimum memory check for improved incremental analysis is now
skipped for CodeQL 2.24.3 and later, which has reduced peak RAM usage.
<a
href="https://redirect.github.com/github/codeql-action/pull/3515">#3515</a></li>
<li>Reduced log levels for best-effort private package registry
connection check failures to reduce noise from workflow annotations. <a
href="https://redirect.github.com/github/codeql-action/pull/3516">#3516</a></li>
<li>Added an experimental change which lowers the minimum disk space
requirement for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a>, enabling it to run on standard GitHub Actions
runners. We expect to roll this change out to everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3498">#3498</a></li>
<li>Added an experimental change which allows the
<code>start-proxy</code> action to resolve the CodeQL CLI version from
feature flags instead of using the linked CLI bundle version. We expect
to roll this change out to everyone in March. <a
href="https://redirect.github.com/github/codeql-action/pull/3512">#3512</a></li>
<li>The previously experimental changes from versions 4.32.3, 4.32.4,
3.32.3 and 3.32.4 are now enabled by default. <a
href="https://redirect.github.com/github/codeql-action/pull/3503">#3503</a>,
<a
href="https://redirect.github.com/github/codeql-action/pull/3504">#3504</a></li>
</ul>
<h2>4.32.4 - 20 Feb 2026</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.2">2.24.2</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3493">#3493</a></li>
<li>Added an experimental change which improves how certificates are
generated for the authentication proxy that is used by the CodeQL Action
in Default Setup when <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries are configured</a>. This is expected to generate more
widely compatible certificates and should have no impact on analyses
which are working correctly already. We expect to roll this change out
to everyone in February. <a
href="https://redirect.github.com/github/codeql-action/pull/3473">#3473</a></li>
<li>When the CodeQL Action is run <a
href="https://docs.github.com/en/code-security/how-tos/scan-code-for-vulnerabilities/troubleshooting/troubleshooting-analysis-errors/logs-not-detailed-enough#creating-codeql-debugging-artifacts-for-codeql-default-setup">with
debugging enabled in Default Setup</a> and <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries are configured</a>, the &quot;Setup proxy for
registries&quot; step will output additional diagnostic information that
can be used for troubleshooting. <a
href="https://redirect.github.com/github/codeql-action/pull/3486">#3486</a></li>
<li>Added a setting which allows the CodeQL Action to enable network
debugging for Java programs. This will help GitHub staff support
customers with troubleshooting issues in GitHub-managed CodeQL
workflows, such as Default Setup. This setting can only be enabled by
GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3485">#3485</a></li>
<li>Added a setting which enables GitHub-managed workflows, such as
Default Setup, to use a <a
href="https://github.com/dsp-testing/codeql-cli-nightlies">nightly
CodeQL CLI release</a> instead of the latest, stable release that is
used by default. This will help GitHub staff support customers whose
analyses for a given repository or organization require early access to
a change in an upcoming CodeQL CLI release. This setting can only be
enabled by GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3484">#3484</a></li>
</ul>
<h2>4.32.3 - 13 Feb 2026</h2>
<ul>
<li>Added experimental support for testing connections to <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries</a>. This feature is not currently enabled for any
analysis. In the future, it may be enabled by default for Default Setup.
<a
href="https://redirect.github.com/github/codeql-action/pull/3466">#3466</a></li>
</ul>
<h2>4.32.2 - 05 Feb 2026</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.1">2.24.1</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3460">#3460</a></li>
</ul>
<h2>4.32.1 - 02 Feb 2026</h2>
<ul>
<li>A warning is now shown in Default Setup workflow logs if a <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registry is configured</a> using a GitHub Personal Access Token
(PAT), but no username is configured. <a
href="https://redirect.github.com/github/codeql-action/pull/3422">#3422</a></li>
<li>Fixed a bug which caused the CodeQL Action to fail when repository
properties cannot successfully be retrieved. <a
href="https://redirect.github.com/github/codeql-action/pull/3421">#3421</a></li>
</ul>
<h2>4.32.0 - 26 Jan 2026</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.0">2.24.0</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3425">#3425</a></li>
</ul>
<h2>4.31.11 - 23 Jan 2026</h2>
<ul>
<li>When running a Default Setup workflow with <a
href="https://docs.github.com/en/actions/how-tos/monitor-workflows/enable-debug-logging">Actions
debugging enabled</a>, the CodeQL Action will now use more unique names
when uploading logs from the Dependabot authentication proxy as workflow
artifacts. This ensures that the artifact names do not clash between
multiple jobs in a build matrix. <a
href="https://redirect.github.com/github/codeql-action/pull/3409">#3409</a></li>
<li>Improved error handling throughout the CodeQL Action. <a
href="https://redirect.github.com/github/codeql-action/pull/3415">#3415</a></li>
<li>Added experimental support for automatically excluding <a
href="https://docs.github.com/en/repositories/working-with-files/managing-files/customizing-how-changed-files-appear-on-github">generated
files</a> from the analysis. This feature is not currently enabled for
any analysis. In the future, it may be enabled by default for some
GitHub-managed analyses. <a
href="https://redirect.github.com/github/codeql-action/pull/3318">#3318</a></li>
<li>The changelog extracts that are included with releases of the CodeQL
Action are now shorter to avoid duplicated information from appearing in
Dependabot PRs. <a
href="https://redirect.github.com/github/codeql-action/pull/3403">#3403</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c793b717bc"><code>c793b71</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3523">#3523</a>
from github/update-v4.32.5-ca42bf226</li>
<li><a
href="06cd615ad8"><code>06cd615</code></a>
Soften language re overlay failures</li>
<li><a
href="f5516c6630"><code>f5516c6</code></a>
Improve changelog</li>
<li><a
href="97519e197e"><code>97519e1</code></a>
Update release date</li>
<li><a
href="05259a1d08"><code>05259a1</code></a>
Add more changelog notes</li>
<li><a
href="01ee2f785a"><code>01ee2f7</code></a>
Add changelog notes</li>
<li><a
href="c72d9a4933"><code>c72d9a4</code></a>
Update changelog for v4.32.5</li>
<li><a
href="ca42bf226a"><code>ca42bf2</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3522">#3522</a>
from github/henrymercer/update-supported-versions-table</li>
<li><a
href="6704d80ac6"><code>6704d80</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3520">#3520</a>
from github/dependabot/npm_and_yarn/fast-xml-parser-...</li>
<li><a
href="76348c0f12"><code>76348c0</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3521">#3521</a>
from github/dependabot/npm_and_yarn/minimatch-3.1.5</li>
<li>Additional commits viewable in <a
href="89a39a4e59...c793b717bc">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the specified ignore condition for that dependency


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-02 16:27:06 +01:00
kleinanzeigen-bot-tu[bot]
3c6655c2cd chore: ✔ Update filelock 3.24.3 -> 3.25.0 (#857) 2026-03-02 15:34:27 +01:00
Jens
fa9df6fca4 feat: keep login selector fallbacks close to auth flow (#855) 2026-03-02 13:01:05 +01:00
Jens
c4a2d1c4f5 fix: continue own-ad extraction when links are incomplete (#854) 2026-03-02 06:05:21 +01:00
Jens
ed6137c8ae fix: use native page xpath api for xpath selectors (#853)
## ℹ️ Description

- Link to the related issue(s): n/a
- Describe the motivation and context for this change.
This replaces the stacked XPath work from #845 with a standalone fix
from `main`. It makes `By.XPATH` use the native page XPath API instead
of routing XPath selectors through text lookup.

## 📋 Changes Summary

- Add private XPath helpers in `WebScrapingMixin` for first-match and
all-match lookups.
- Route `By.XPATH` in `_web_find_once()` and `_web_find_all_once()`
through `page.xpath(...)`.
- Add unit coverage for XPath helper behavior, empty results, and
unsupported parent scoping.
- No configuration changes or new dependencies.
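
The routing described above can be sketched as follows. This is a hedged illustration only: `Page`, `Element`, and `By` are stand-in types, and the helper names merely echo the PR description, not the project's actual `WebScrapingMixin` API.

```python
# Illustrative sketch: route By.XPATH lookups through a native page.xpath()
# API instead of text-based lookup. All class names here are assumptions.
from enum import Enum, auto
from typing import Dict, List, Optional


class By(Enum):
    CSS = auto()
    XPATH = auto()
    TEXT = auto()


class Element:
    def __init__(self, tag: str) -> None:
        self.tag = tag


class Page:
    """Minimal fake page exposing a native xpath() API."""

    def __init__(self, elements: Dict[str, List[Element]]) -> None:
        self._elements = elements  # selector -> matching elements

    def xpath(self, selector: str) -> List[Element]:
        return self._elements.get(selector, [])


class WebScrapingMixin:
    page: Page

    def _find_by_xpath(self, selector: str) -> Optional[Element]:
        # First-match helper: native XPath lookup, None when nothing matches.
        matches = self.page.xpath(selector)
        return matches[0] if matches else None

    def _find_all_by_xpath(self, selector: str) -> List[Element]:
        # All-match helper: always returns a list, possibly empty.
        return self.page.xpath(selector)

    def _web_find_once(self, by: By, selector: str) -> Optional[Element]:
        # Only the XPath branch is sketched; other strategies are elided.
        if by is By.XPATH:
            return self._find_by_xpath(selector)
        raise NotImplementedError(f"lookup strategy {by} not sketched here")
```

The key point is that an XPath selector never passes through text matching: it goes straight to the page's XPath engine, so expressions like `//a[@id='x']` behave as XPath rather than as literal text.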

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


## Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that this contribution may
be used, modified, copied, and redistributed under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Refactoring**
* Improved web scraping element selection reliability through
streamlined XPath operations and better internal helper methods.

* **Tests**
* Added comprehensive unit tests for XPath-based element lookup
operations to ensure consistent behavior.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-03-01 21:00:34 +01:00
Jens
e856f6e3df fix: use explicit commit hash for docker package versioning (#856) 2026-03-01 20:40:38 +01:00
Sebastian Thomschke
8c94ca5f9c docs: update code of conduct (#852) 2026-02-28 22:11:29 +01:00
sebthom
022b965f96 chore: simplify .gitignore 2026-02-28 22:01:40 +01:00
kleinanzeigen-bot-tu[bot]
9ca63527fe chore: Update Python dependencies (#849)
✔ Update ruff 0.15.2 -> 0.15.4 successful
  ✔ Update basedpyright 1.38.1 -> 1.38.2 successful
  ✔ Update nodejs-wheel-binaries 24.13.1 -> 24.14.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-28 13:03:26 +01:00
Mario
69ae8af922 fix: set Config.sandbox=False when --no-sandbox is in browser_args (#850) 2026-02-28 08:26:47 +01:00
Jens
38e0f97578 feat: add grouped selector timeout fallback for login detection (#843) 2026-02-27 19:11:49 +01:00
kleinanzeigen-bot-tu[bot]
fc456f4abd chore: ✔ Update certifi 2026.1.4 -> 2026.2.25 (#842) 2026-02-25 16:21:21 +01:00
Jens
930b3f6028 feat: unify pdm test defaults and verbosity controls (#836) 2026-02-23 16:44:13 +01:00
dependabot[bot]
6aab9761f1 ci(deps): bump github/codeql-action from 4.32.3 to 4.32.4 in the all-actions group (#838)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-23 16:19:45 +01:00
kleinanzeigen-bot-tu[bot]
556a6eb5c1 chore: ✔ Update typer 0.24.0 -> 0.24.1 (#837)
✔ Update typer 0.24.0 -> 0.24.1 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-22 17:59:06 +01:00
kleinanzeigen-bot-tu[bot]
4a847e77e2 chore: Update Python dependencies (#835)
✔ Update rich 14.3.2 -> 14.3.3 successful
  ✔ Update ruff 0.15.1 -> 0.15.2 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-21 20:31:32 +01:00
Torsten Liermann
3308d31b8e fix: reject invalid --ads selector values instead of silent fallback (#834)
## Summary

- When an invalid `--ads` value is explicitly provided (e.g.
`--ads=my-directory-name`), the bot now exits with code 2 and a clear
error message listing valid options, instead of silently falling back to
the command default
- Fixes the numeric ID regex from unanchored `\d+[,\d+]*` (which could
match partial strings like `abc123`) to anchored `^\d+(,\d+)*$`
- Adds `_is_valid_ads_selector()` helper to deduplicate validation logic
across publish/update/download/extend commands

## Motivation

When calling `kleinanzeigen-bot publish --ads=led-grow-light-set`
(passing a directory name instead of a numeric ad ID), the bot silently
fell back to `--ads=due` and republished all due ads — causing
unintended republication of multiple ads and loss of conversations on
those ads.

The silent fallback with only a WARNING log message makes it too easy to
accidentally trigger unwanted operations. An explicit error with exit
code 2 (consistent with other argument validation like
`--workspace-mode`) is the expected behavior for invalid arguments.

## Changes

| File | Change |
|------|--------|
| `src/kleinanzeigen_bot/__init__.py` | Added `_ads_selector_explicit`
flag (set when `--ads` or `--force` is used), `_is_valid_ads_selector()`
helper method, and updated all 4 command blocks
(publish/update/download/extend) to error on explicitly invalid
selectors |
| `resources/translations.de.yaml` | Replaced 3 old fallback messages
with 4 new error messages |
| `tests/unit/test_init.py` | Updated 2 existing tests to expect
`SystemExit(2)` instead of silent fallback, added 2 new tests for
update/extend invalid selectors |

## Test plan

- [x] All 754 unit tests pass (`pdm run utest`)
- [x] Lint clean (`pdm run lint`)
- [x] Translation completeness verified
(`test_all_log_messages_have_translations`,
`test_no_obsolete_translations`)
- [x] `--ads=invalid` on publish/update/download/extend all exit with
code 2
- [x] Default behavior (no `--ads` flag) unchanged for all commands
- [x] Valid selectors (`--ads=all`, `--ads=due`, `--ads=12345,67890`,
`--ads=changed,due`) still work

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Stricter validation of ad selectors; invalid selectors now terminate
with exit code 2 and preserve safe defaults when no selector is
provided.

* **New Features**
* Support for comma-separated numeric ID lists as a valid selector
format.

* **Tests**
* Unit tests updated to assert non-zero exit on invalid selectors and
verify default-fallback behavior.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Liermann Torsten - Hays <liermann.hays@partner.akdb.de>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 20:30:06 +01:00
kleinanzeigen-bot-tu[bot]
304e6b48ec chore: Update Python dependencies (#832)
✔ Update filelock 3.24.2 -> 3.24.3 successful
  ✔ Update librt 0.8.0 -> 0.8.1 successful
  ✔ Update pyinstaller-hooks-contrib 2026.0 -> 2026.1 successful
  ✔ Update basedpyright 1.38.0 -> 1.38.1 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-19 20:59:32 +01:00
Jens
4ae46f7aa4 feat: preview auto price reduction decisions in verify command (#829) 2026-02-17 13:34:09 +01:00
kleinanzeigen-bot-tu[bot]
b2cda15466 chore: Update Python dependencies (#830) 2026-02-17 12:16:45 +01:00
Jens
398286bcbc ci: check generated schema and default config artifacts (#825)
## ℹ️ Description
- Link to the related issue(s): N/A
- Add a CI guard that fails when generated artifacts are out of sync,
preventing missed schema updates and keeping generated reference files
current.
- Add a committed `docs/config.default.yaml` as a user-facing default
configuration reference.

## 📋 Changes Summary
- Add `scripts/check_generated_artifacts.py` to regenerate schema
artifacts and compare tracked outputs (`schemas/*.json` and
`docs/config.default.yaml`) against generated content.
- Run the new artifact consistency check in CI via
`.github/workflows/build.yml`.
- Add `pdm run generate-config` and `pdm run generate-artifacts` tasks,
with a cross-platform-safe delete in `generate-config`.
- Add generated `docs/config.default.yaml` and document it in
`docs/CONFIGURATION.md`.
- Update `schemas/config.schema.json` with the
`diagnostics.timing_collection` property generated from the model.
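
The comparison at the heart of such a check can be sketched as below. The function name and dict-based interface are illustrative assumptions, not the actual contents of `scripts/check_generated_artifacts.py`:

```python
from typing import Dict, List


def find_stale_artifacts(tracked: Dict[str, str], generated: Dict[str, str]) -> List[str]:
    """Return paths whose committed content differs from freshly generated content.

    `tracked` maps artifact paths to the content committed in the repository;
    `generated` maps the same paths to content produced by regeneration.
    A path missing from the working tree also counts as stale.
    """
    stale = [path for path, fresh in generated.items() if tracked.get(path) != fresh]
    return sorted(stale)
```

A CI step would populate `tracked` by reading the committed files, populate `generated` by running the schema generator in memory, and fail the build whenever the returned list is non-empty, printing which files need regeneration.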

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [ ] 🐞 Bug fix (non-breaking change which fixes an issue)
- [x] New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

## Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that this contribution may
be used, modified, copied, and redistributed under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Documentation**
* Added a reference link to the default configuration snapshot for
easier access to baseline settings.

* **Chores**
* Added a CI build-time check that validates generated schemas and the
default config and alerts when regeneration is needed.
* Added scripts to generate the default config and to sequence artifact
generation.
* Added a utility to produce standardized schema content and compare
generated artifacts.
  * Minor tweak to schema generation success messaging.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-16 16:56:31 +01:00
dependabot[bot]
c152418b45 ci(deps): bump the all-actions group with 2 updates (#828) 2026-02-16 16:18:37 +01:00
Jens
55777710e8 feat: explain auto price reduction decisions and traces (#826) 2026-02-16 15:52:24 +01:00
kleinanzeigen-bot-tu[bot]
b6cf0eea93 chore: Update Python dependencies (#827)
✔ Update filelock 3.24.0 -> 3.24.2 successful
  ✔ Update platformdirs 4.9.1 -> 4.9.2 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-16 12:34:47 +01:00
kleinanzeigen-bot-tu[bot]
abc6614d16 chore: Update Python dependencies (#823)
✔ Update filelock 3.21.2 -> 3.24.0 successful
  ✔ Update platformdirs 4.7.0 -> 4.9.1 successful
  ✔ Update pyinstaller 6.18.0 -> 6.19.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-16 07:37:58 +01:00
kleinanzeigen-bot-tu[bot]
94aafd81ab chore: Update Python dependencies (#822) 2026-02-13 16:46:51 +01:00
Jens
50fc8781a9 feat: collect timeout timing sessions for diagnostics (#814) 2026-02-13 16:45:52 +01:00
kleinanzeigen-bot-tu[bot]
81c55316db chore: Update Python dependencies (#821)
✔ Update typer-slim 0.21.1 -> 0.21.2 successful
  ✔ Update typer 0.21.1 -> 0.21.2 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-11 15:48:38 +01:00
kleinanzeigen-bot-tu[bot]
73f04d17dc chore: Update Python dependencies (#820) 2026-02-11 05:36:43 +01:00
Jens
4282b05ff3 fix: add explicit workspace mode resolution for --config (#818) 2026-02-11 05:35:41 +01:00
Jens
c212113638 fix: improve Windows browser autodetection paths and diagnose fallback (#816)
## ℹ️ Description
This pull request fixes Windows browser auto-detection failures reported
by users where `diagnose`/startup could not find an installed browser
even when Chrome or Edge were present in standard locations. It also
makes diagnostics resilient when auto-detection fails by avoiding an
assertion-driven abort and continuing with a clear failure log.

- Link to the related issue(s): Issue #815
- Describe the motivation and context for this change.
- Users reported `Installed browser could not be detected` on Windows
despite having a browser installed.
- The previous Windows candidate list used a mix of incomplete paths and
direct `os.environ[...]` lookups that could raise when variables were
missing.
- The updated path candidates and ordering were aligned with common
Windows install locations used by Playwright’s channel/executable
resolution logic (Chrome/Edge under `LOCALAPPDATA`, `PROGRAMFILES`, and
`PROGRAMFILES(X86)`).

## 📋 Changes Summary
- Expanded Windows browser path candidates in `get_compatible_browser()`
to include common Google Chrome and Microsoft Edge install paths, while
keeping Chromium and PATH fallbacks.
- Replaced unsafe direct env-var indexing with safe retrieval
(`os.environ.get(...)`) and added a fallback derivation for
`LOCALAPPDATA` via `USERPROFILE\\AppData\\Local` when needed.
- Kept legacy Chrome path candidates
(`...\\Chrome\\Application\\chrome.exe`) as compatibility fallback.
- Updated diagnostics flow to catch browser auto-detection assertion
failures and continue with `(fail) No compatible browser found` instead
of crashing.
- Added/updated unit tests to verify:
  - Windows detection for LocalAppData Chrome/Edge/Chromium paths.
- Missing Windows env vars no longer cause key lookup failures and still
surface the intended final detection assertion.
- `diagnose_browser_issues()` handles auto-detection assertion failures
without raising and logs the expected failure message.
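
The safe env-var handling described above might look roughly like this — the candidate paths follow the PR description, but the function names and exact list are assumptions, not the project's actual `get_compatible_browser()`:

```python
from pathlib import Path
from typing import Dict, List, Optional


def _local_app_data(env: Dict[str, str]) -> Optional[str]:
    # Safe retrieval via .get() with a fallback derived from USERPROFILE,
    # instead of env["LOCALAPPDATA"], which raises KeyError when unset.
    value = env.get("LOCALAPPDATA")
    if value:
        return value
    profile = env.get("USERPROFILE")
    return str(Path(profile) / "AppData" / "Local") if profile else None


def windows_browser_candidates(env: Dict[str, str]) -> List[str]:
    # Common Chrome/Edge/Chromium install locations under LOCALAPPDATA,
    # PROGRAMFILES, and PROGRAMFILES(X86); missing roots are skipped quietly.
    roots = [_local_app_data(env), env.get("PROGRAMFILES"), env.get("PROGRAMFILES(X86)")]
    candidates: List[str] = []
    for root in roots:
        if not root:  # a missing env var no longer aborts detection
            continue
        base = Path(root)
        candidates.append(str(base / "Google" / "Chrome" / "Application" / "chrome.exe"))
        candidates.append(str(base / "Microsoft" / "Edge" / "Application" / "msedge.exe"))
        candidates.append(str(base / "Chromium" / "Application" / "chrome.exe"))
    return candidates
```

Because every lookup is tolerant of missing variables, detection can run to completion and report "no compatible browser found" instead of crashing on an unset `LOCALAPPDATA`.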


### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)


## Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that this contribution may
be used, modified, copied, and redistributed under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Hardened Windows browser auto-detection: checks additional common
installation locations for Chrome/Chromium/Edge and treats detection
failures as non-fatal, allowing diagnostics to continue with fallback
behavior and debug logging when no browser is found.

* **Tests**
* Expanded Windows detection tests to cover more path scenarios and
added cases verifying failure-mode diagnostics and logging.

* **Style**
  * Minor formatting tweak in default configuration.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-09 19:55:05 +01:00
dependabot[bot]
7ae5f3122a ci(deps): bump the all-actions group with 2 updates (#819)
Bumps the all-actions group with 2 updates:
[vegardit/fast-apt-mirror.sh](https://github.com/vegardit/fast-apt-mirror.sh)
and [github/codeql-action](https://github.com/github/codeql-action).

Updates `vegardit/fast-apt-mirror.sh` from 1.4.1 to 1.4.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vegardit/fast-apt-mirror.sh/releases">vegardit/fast-apt-mirror.sh's
releases</a>.</em></p>
<blockquote>
<h2>1.4.2</h2>
<h2>What's Changed</h2>
<h3>Fixed</h3>
<ul>
<li>prevent Ubuntu ARM switching to non-ubuntu-ports mirrors</li>
<li>prevent invalid fastest mirror selection with ignore-sync-state</li>
<li>avoid pipefail/ERR-trap corrupting fastest mirror detection</li>
<li>Option --exclude-current not working reliably and support ARM</li>
<li>Multiple /etc/*-release files can cause wrong distro detection <a
href="https://redirect.github.com/vegardit/fast-apt-mirror.sh/issues/12">#12</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/vegardit/fast-apt-mirror.sh/compare/1.4.1...1.4.2">https://github.com/vegardit/fast-apt-mirror.sh/compare/1.4.1...1.4.2</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="29a5ef3401"><code>29a5ef3</code></a>
fix(find): prevent Ubuntu ARM switching to non-ubuntu-ports mirrors</li>
<li><a
href="f3f6ac867d"><code>f3f6ac8</code></a>
fix(find): keep Ubuntu ARM mirror candidates on ubuntu-ports</li>
<li><a
href="77bc0f4f48"><code>77bc0f4</code></a>
fix(find): harden sync baseline and fallback to reachable mirrors</li>
<li><a
href="e4cfe62e1a"><code>e4cfe62</code></a>
fix(find): use InRelease for Ubuntu ARM healthchecks</li>
<li><a
href="85bc4a4115"><code>85bc4a4</code></a>
fix(action): simplify fast-apt-mirror.sh setup</li>
<li><a
href="61f5fd911b"><code>61f5fd9</code></a>
fix(find): avoid pipefail/ERR-trap corrupting fastest mirror
detection</li>
<li><a
href="7ee8df396d"><code>7ee8df3</code></a>
fix: dedup mirror URLs</li>
<li><a
href="3b80eadc89"><code>3b80ead</code></a>
fix: refine mirror health checks and exclude 404 mirrors</li>
<li><a
href="39824222f5"><code>3982422</code></a>
fix: prevent invalid fastest mirror selection with
ignore-sync-state</li>
<li><a
href="4c4ae91025"><code>4c4ae91</code></a>
ci(deps): bump actions/checkout from 4 to 6</li>
<li>Additional commits viewable in <a
href="e5288ed7a1...29a5ef3401">compare
view</a></li>
</ul>
</details>
<br />

Updates `github/codeql-action` from 4.31.11 to 4.32.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v4.32.2</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.1">2.24.1</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3460">#3460</a></li>
</ul>
<h2>v4.32.1</h2>
<ul>
<li>A warning is now shown in Default Setup workflow logs if a <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registry is configured</a> using a GitHub Personal Access Token
(PAT), but no username is configured. <a
href="https://redirect.github.com/github/codeql-action/pull/3422">#3422</a></li>
<li>Fixed a bug which caused the CodeQL Action to fail when repository
properties cannot successfully be retrieved. <a
href="https://redirect.github.com/github/codeql-action/pull/3421">#3421</a></li>
</ul>
<h2>v4.32.0</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.0">2.24.0</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3425">#3425</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's
changelog</a>.</em></p>
<blockquote>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>[UNRELEASED]</h2>
<p>No user facing changes.</p>
<h2>4.32.2 - 05 Feb 2026</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.1">2.24.1</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3460">#3460</a></li>
</ul>
<h2>4.32.1 - 02 Feb 2026</h2>
<ul>
<li>A warning is now shown in Default Setup workflow logs if a <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registry is configured</a> using a GitHub Personal Access Token
(PAT), but no username is configured. <a
href="https://redirect.github.com/github/codeql-action/pull/3422">#3422</a></li>
<li>Fixed a bug which caused the CodeQL Action to fail when repository
properties cannot successfully be retrieved. <a
href="https://redirect.github.com/github/codeql-action/pull/3421">#3421</a></li>
</ul>
<h2>4.32.0 - 26 Jan 2026</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.0">2.24.0</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3425">#3425</a></li>
</ul>
<h2>4.31.11 - 23 Jan 2026</h2>
<ul>
<li>When running a Default Setup workflow with <a
href="https://docs.github.com/en/actions/how-tos/monitor-workflows/enable-debug-logging">Actions
debugging enabled</a>, the CodeQL Action will now use more unique names
when uploading logs from the Dependabot authentication proxy as workflow
artifacts. This ensures that the artifact names do not clash between
multiple jobs in a build matrix. <a
href="https://redirect.github.com/github/codeql-action/pull/3409">#3409</a></li>
<li>Improved error handling throughout the CodeQL Action. <a
href="https://redirect.github.com/github/codeql-action/pull/3415">#3415</a></li>
<li>Added experimental support for automatically excluding <a
href="https://docs.github.com/en/repositories/working-with-files/managing-files/customizing-how-changed-files-appear-on-github">generated
files</a> from the analysis. This feature is not currently enabled for
any analysis. In the future, it may be enabled by default for some
GitHub-managed analyses. <a
href="https://redirect.github.com/github/codeql-action/pull/3318">#3318</a></li>
<li>The changelog extracts that are included with releases of the CodeQL
Action are now shorter to avoid duplicated information from appearing in
Dependabot PRs. <a
href="https://redirect.github.com/github/codeql-action/pull/3403">#3403</a></li>
</ul>
<h2>4.31.10 - 12 Jan 2026</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.9. <a
href="https://redirect.github.com/github/codeql-action/pull/3393">#3393</a></li>
</ul>
<h2>4.31.9 - 16 Dec 2025</h2>
<p>No user facing changes.</p>
<h2>4.31.8 - 11 Dec 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.8. <a
href="https://redirect.github.com/github/codeql-action/pull/3354">#3354</a></li>
</ul>
<h2>4.31.7 - 05 Dec 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.7. <a
href="https://redirect.github.com/github/codeql-action/pull/3343">#3343</a></li>
</ul>
<h2>4.31.6 - 01 Dec 2025</h2>
<p>No user facing changes.</p>
<h2>4.31.5 - 24 Nov 2025</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="45cbd0c69e"><code>45cbd0c</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3461">#3461</a>
from github/update-v4.32.2-7aee93297</li>
<li><a
href="cb528be87e"><code>cb528be</code></a>
Update changelog for v4.32.2</li>
<li><a
href="7aee932974"><code>7aee932</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3460">#3460</a>
from github/update-bundle/codeql-bundle-v2.24.1</li>
<li><a
href="b5f028a984"><code>b5f028a</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3457">#3457</a>
from github/dependabot/npm_and_yarn/npm-minor-4c1fc3...</li>
<li><a
href="9702c27ab9"><code>9702c27</code></a>
Merge branch 'main' into
dependabot/npm_and_yarn/npm-minor-4c1fc3d0aa</li>
<li><a
href="c36c94846f"><code>c36c948</code></a>
Add changelog note</li>
<li><a
href="3d0331896c"><code>3d03318</code></a>
Update default bundle to codeql-bundle-v2.24.1</li>
<li><a
href="77591e2c4a"><code>77591e2</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3459">#3459</a>
from github/copilot/fix-github-actions-workflow-again</li>
<li><a
href="7a44a9db3f"><code>7a44a9d</code></a>
Fix Rebuild Action workflow by adding --no-edit flag to git merge
--continue</li>
<li><a
href="e2ac371513"><code>e2ac371</code></a>
Initial plan</li>
<li>Additional commits viewable in <a
href="19b2f06db2...45cbd0c69e">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 19:51:04 +01:00
kleinanzeigen-bot-tu[bot]
83b7d318d7 chore: Update Python dependencies (#812)
✔ Update setuptools 80.10.2 -> 82.0.0 successful
  ✔ Update jaraco-text 4.0.0 -> 4.1.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-09 16:46:59 +01:00
kleinanzeigen-bot-tu[bot]
7a27cc0198 chore: Update Python dependencies (#810)
✔ Update pip 26.0 -> 26.0.1 successful
  ✔ Update coverage 7.13.2 -> 7.13.3 successful
  ✔ Update ruff 0.14.14 -> 0.15.0 successful
  ✔ Update basedpyright 1.37.3 -> 1.37.4 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-05 22:11:35 +01:00
Jens
a8051c3814 feat: cache published ads data to avoid repetitive API calls during ad download (#809) 2026-02-03 14:51:59 +01:00
kleinanzeigen-bot-tu[bot]
e994ce1b1f chore: ✔ Update wrapt 2.1.0 -> 2.1.1 (#808)
✔ Update wrapt 2.1.0 -> 2.1.1 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-03 14:33:54 +01:00
kleinanzeigen-bot-tu[bot]
8b115b4722 chore: ✔ Update rich 14.3.1 -> 14.3.2 (#806) 2026-02-02 17:21:46 +01:00
Jens
601b405ded fix: improve logging messages and documentation (#803) 2026-02-02 17:21:21 +01:00
Jens
e85126ec86 feat: Add descriptive comments and examples to create-config output (#805) 2026-02-02 17:20:56 +01:00
kleinanzeigen-bot-tu[bot]
3229656ef4 chore: Update Python dependencies (#804)
✔ Update wrapt 2.0.1 -> 2.1.0 successful
  ✔ Update basedpyright 1.37.2 -> 1.37.3 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-02-01 18:22:22 +01:00
Jens
b3d5a4b228 feat: capture publish failure diagnostics with screenshot and logs (#802) 2026-02-01 08:17:14 +01:00
Jens
96f465d5bc fix: JSON API Pagination for >25 Ads (#797)
## ℹ️ Description

- Link to the related issue(s): Closes #789 (completes the fix started
in #793)
- **Motivation**: Fix JSON API pagination for accounts with >25 ads.
Aligns pagination logic with weidi’s approach (starts at page 1), while
hardening error handling and tests. Based on
https://github.com/weidi/kleinanzeigen-bot/pull/1.

## 📋 Changes Summary

- Added pagination helper to fetch all published ads and use it in
delete/extend/publish/update flows
- Added robust handling for malformed JSON payloads and unexpected ads
types (with translated warnings)
- Improved sell_directly extraction with pagination, bounds checks, and
shared coercion helper
- Added/updated tests for pagination and edge cases; updated assertions
to pytest.fail style
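
The pagination helper described above could look roughly like this. This is a hedged sketch, not the merged code: the helper name `fetch_all_published_ads`, the `fetch_page` callback, and the `"ads"` payload key are all illustrative assumptions. It shows the gist of starting at page 1 and tolerating malformed or unexpectedly typed payloads:

```python
# Illustrative sketch of the pagination approach described above: fetch
# pages starting at 1 until a page is empty or malformed. Names
# (fetch_all_published_ads, fetch_page, "ads") are assumptions.
from typing import Any, Callable

def fetch_all_published_ads(
    fetch_page: Callable[[int], dict[str, Any]],
    max_pages: int = 100,  # safety bound against endless loops
) -> list[dict[str, Any]]:
    all_ads: list[dict[str, Any]] = []
    for page_num in range(1, max_pages + 1):
        try:
            payload = fetch_page(page_num)
        except ValueError:  # malformed JSON -> stop with what we have
            break
        ads = payload.get("ads")
        if not isinstance(ads, list) or not ads:  # unexpected type or empty page
            break
        all_ads.extend(ads)
    return all_ads
```

The `max_pages` cap mirrors the "safer pagination with limits" point from the release notes: even if the API keeps returning data, the loop terminates.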

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ] ✨ New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


## ✅ Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test:cov:unified`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Reliable multi-page fetching for published ads and buy-now eligibility
checks.

* **Bug Fixes**
* Safer pagination with per-page JSON handling, limits and improved
termination diagnostics; ensures pageNum is used when needed.

* **Tests**
* New comprehensive pagination tests and updates to existing tests to
reflect multi-page behavior.

* **Chores**
* Added a utility to safely coerce page numbers; minor utility signature
cleanup.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-31 22:17:37 +01:00
kleinanzeigen-bot-tu[bot]
51a8042cda chore: ✔ Update pip 25.3 -> 26.0 (#801)
✔ Update pip 25.3 -> 26.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-31 16:28:20 +01:00
Jens
a4946ba104 docs: refactor guides for clearer navigation (#795)
## ℹ️ Description
Refactors and reorganizes documentation to improve navigation and keep
the README concise.

- Link to the related issue(s): Issue #N/A
- The README had grown long and duplicated detailed config/ad
references; this consolidates docs into focused guides and adds an
index.

## 📋 Changes Summary
- Add dedicated docs pages for configuration, ad configuration, update
checks, and a docs index.
- Slim README and CONTRIBUTING to reference dedicated guides and clean
up formatting/markdownlint issues.
- Refresh browser troubleshooting and update-check guidance; keep the
update channel name aligned with schema/implementation.
- Add markdownlint configuration for consistent docs formatting.

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [ ] 🐞 Bug fix (non-breaking change which fixes an issue)
- [x] ✨ New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


## ✅ Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Documentation**
* Reorganized and enhanced contributing guidelines with improved
structure and formatting
* Streamlined README with better organization and updated installation
instructions
* Added comprehensive configuration reference documentation for
configuration and ad settings
* Improved browser troubleshooting guide with updated guidance,
examples, and diagnostic information
  * Created new documentation index for easier navigation

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-30 11:06:36 +01:00
kleinanzeigen-bot-tu[bot]
3dc24e1df7 chore: ✔ Update psutil 7.2.1 -> 7.2.2 (#799)
✔ Update psutil 7.2.1 -> 7.2.2 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-30 06:55:13 +01:00
Jens
49e44b9a20 fix: prioritize DOM-based login detection over auth probe for stealth (#798) 2026-01-30 06:03:39 +01:00
Jens
c0378412d1 ci: Enable manual workflow dispatch for PR binary artifacts (#796)
## ℹ️ Description

This PR enables manual triggering of the build workflow from any PR
branch to generate platform-specific executables (Windows .exe, macOS,
Linux binaries).

**Motivation:** Windows users often need pre-built executables to test
PRs without setting up a development environment. Currently, binaries
are only generated for `main` and `release` branches. This change allows
maintainers to manually trigger artifact generation for any PR when
needed for testing.

## 📋 Changes Summary

- Modified `.github/workflows/build.yml` artifact upload condition to
include `workflow_dispatch` event
- The `workflow_dispatch` trigger already existed but was gated at the
artifact upload step
- All 8 platform/Python version matrix combinations now produce
artifacts when manually triggered
- The `publish-release` job remains unchanged and only runs for
`main`/`release` branches

**How to use:** Go to Actions → "Build" workflow → "Run workflow" →
select the PR branch

### ⚙️ Type of Change
- [x] ✨ New feature (adds new functionality without breaking existing
usage)

## ✅ Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass.
- [x] I have verified that linting passes.
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
  * Updated build workflow to support manual deployment triggers.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-28 19:47:09 +01:00
Heavenfighter
23c27157d0 fix: Handle email verification dialog (#782) 2026-01-28 12:50:40 +01:00
Jens
b4cb979164 fix: auth probe + diagnostics for UNKNOWN states (#791) 2026-01-28 06:08:45 +01:00
Jens
7098719d5b fix: extend command fails with >25 ads due to pagination (#793) 2026-01-28 06:08:03 +01:00
kleinanzeigen-bot-tu[bot]
d954e849a2 chore: ✔ Update pathspec 1.0.3 -> 1.0.4 (#794)
✔ Update pathspec 1.0.3 -> 1.0.4 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-27 21:59:02 +01:00
kleinanzeigen-bot-tu[bot]
4f1995402f chore: Update Python dependencies (#790) 2026-01-26 16:48:15 +01:00
dependabot[bot]
6a2d0dac86 ci(deps): bump the all-actions group with 3 updates (#792) 2026-01-26 16:47:52 +01:00
kleinanzeigen-bot-tu[bot]
d024c9ddca chore: ✔ Update rich 14.2.0 -> 14.3.1 (#788) 2026-01-25 15:50:37 +01:00
Jens
6cc17f869c fix: keep shipping_type SHIPPING for individual postage (#785) 2026-01-24 15:31:22 +01:00
Jens
08385fa01d chore: translation handling for log messages (#787) 2026-01-24 15:27:46 +01:00
kleinanzeigen-bot-tu[bot]
9b75a4047a chore: ✔ Update basedpyright 1.37.1 -> 1.37.2 (#786)
✔ Update basedpyright 1.37.1 -> 1.37.2 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-24 12:14:50 +01:00
Jens
eda1b4d0ec feat: add browser profile XDG support and documentation (#777) 2026-01-23 22:45:22 +01:00
kleinanzeigen-bot-tu[bot]
dc0d9404bf chore: ✔ Update ruff 0.14.13 -> 0.14.14 (#780) 2026-01-23 17:10:14 +01:00
Jens
e8cf10101d feat: integrate XDG paths into bot core (#776)
## ℹ️ Description
Wire XDG path resolution into main bot components.

- Link to the related issue(s): N/A (new feature)
- Integrates installation mode detection into bot core

## 📋 Changes Summary

- Added `finalize_installation_mode()` method for mode detection
- UpdateChecker, AdExtractor now respect installation mode
- Dynamic browser profile defaults (resolved at runtime)
- German translations for installation mode messages
- Comprehensive tests for installation mode integration

**Part 2 of 3 for XDG support**
- Depends on: PR #775 (must be merged first)
- Will rebase on main after merge of previous PR

### ⚙️ Type of Change
- [x] ✨ New feature (adds new functionality without breaking existing
usage)

## ✅ Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Support for portable and XDG (system-wide) installation modes with
automatic detection and interactive first-run setup.
* Config and paths standardized so app stores config, downloads, logs,
and browser profiles in appropriate locations per mode.
  * Update checker improved for more reliable version/commit detection.

* **Chores**
* Moved dependency to runtime: platformdirs added to main dependencies.

* **Tests**
* Added comprehensive tests for installation modes, path utilities, and
related behaviors.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-23 07:36:10 +01:00
Jens
7468ef03dc feat: add core XDG path resolution module (#775)
## ℹ️ Description
Core module for XDG Base Directory specification support.

- Link to the related issue(s): N/A (new feature)
- Adds portable and XDG installation mode path resolution

## 📋 Changes Summary

- New `xdg_paths.py` module with 11 path resolution functions
- Comprehensive test suite (32 tests, 95% coverage)
- German translations for all user-facing strings
- Moved `platformdirs` from dev to runtime dependencies

**Part 1 of 3 for XDG support**
- Depends on: None
- Preserves: extend command, ContactDefaults.location

### ⚙️ Type of Change
- [x] ✨ New feature (adds new functionality without breaking existing
usage)

## ✅ Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added support for portable and XDG-standard installation modes for
flexible config, cache, and state storage.

* **Chores**
* Added a runtime dependency to handle platform-specific directory
locations.

* **Tests**
* Added comprehensive unit tests covering path resolution,
installation-mode detection, interactive prompts, and Unicode path
handling.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-23 06:39:44 +01:00
kleinanzeigen-bot-tu[bot]
0fbc1f61ea chore: Update Python dependencies (#778)
✔ Update setuptools 80.9.0 -> 80.10.1 successful
  ✔ Update pyparsing 3.3.1 -> 3.3.2 successful
  ✔ Update packaging 25.0 -> 26.0 successful
  ✔ Update pyinstaller-hooks-contrib 2025.11 -> 2026.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-22 19:58:10 +01:00
Airwave1981
e52b600aa0 fix: Add retry logic for ad publishing (3 attempts before skipping) (#774) 2026-01-20 12:41:24 +01:00
Jens
15f35ba3ee fix: publishing contact fields and download stability (#771)
## ℹ️ Description
- Link to the related issue(s): Issue #761
- This PR bundles several small fixes identified during recent testing,
covering issue #761 and related publishing/download edge cases.

## 📋 Changes Summary
- Avoid crashes in `download --ads=new` when existing local ads lack an
ID; skip those files for the “already downloaded” set and log a clear
reason.
- Harden publishing contact fields: clear ZIP before typing; tolerate
missing phone field; handle missing street/name/ZIP/location gracefully
with warnings instead of aborting.
- Improve location selection by matching full option text or the
district suffix after ` - `.
- Preserve `contact.location` in defaults (config model + regenerated
schema with example).
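
The location-matching improvement in the list above (full option text or the district suffix after ` - `) can be sketched like this. The function name and option-text format are illustrative assumptions, not the bot's real selector code:

```python
# Illustrative sketch of the location-option matching described above:
# accept either the full option text or the district suffix after " - ".
# matches_location and the option format are assumptions, not real code.
def matches_location(option_text: str, wanted: str) -> bool:
    if option_text == wanted:
        return True
    # Options like "10115 Berlin - Mitte" should also match just "Mitte".
    _, sep, district = option_text.partition(" - ")
    return bool(sep) and district == wanted
```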

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ] ✨ New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

## ✅ Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes

* **New Features**
* Added optional location field to contact configuration for specifying
city/locality details in listings.
* Enhanced contact field validation with improved error handling and
fallback mechanisms.

* **Bug Fixes**
* Ad download process now gracefully handles unpublished or manually
created ads instead of failing.

* **Documentation**
* Clarified shipping type requirements and cost configuration guidance
in README.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-19 15:39:11 +01:00
Jens
6ef6aea3a8 feat: Add extend command to extend ads before expiry (#732)
## ℹ️ Description

Add a manual "extend" command to extend listings shortly before they
expire. This keeps existing watchers/savers and does not count toward
the current 100 ads/month quota.

- Link to the related issue(s): Issue #664
- **Motivation**: Users need a way to extend ads before they expire
without republishing (which consumes quota).

## 📋 Changes Summary

### Implementation
- Add `extend` command case in `run()`
- Implement `extend_ads()` to filter and process eligible ads
- Implement `extend_ad()` for browser automation
- Add German translations for all user-facing messages

### Testing
- Tests cover: filtering logic, date parsing, browser automation, error
handling, edge cases

### Features
- Detects ads within the **8-day extension window** (kleinanzeigen.de
policy)
- Uses API `endDate` from `/m-meine-anzeigen-verwalten.json` for
eligibility
- Only extends active ads (`state == "active"`)
- Handles confirmation dialog (close dialog / skip paid bump-up)
- Updates `updated_on` in YAML after successful extension
- Supports `--ads` parameter to extend specific ad IDs

### Usage
```bash
kleinanzeigen-bot extend                  # Extend all eligible ads
kleinanzeigen-bot extend --ads=1,2,3      # Extend specific ads
```
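
The 8-day eligibility window described in the feature list can be sketched as below. This is a simplified illustration: the real implementation reads `endDate` from the `/m-meine-anzeigen-verwalten.json` API and also handles German date formats; the date format and helper name here are assumptions:

```python
# Illustrative sketch of the 8-day extension-window check described
# above. The "%Y-%m-%d" format and is_eligible_for_extension name are
# assumptions; the real code parses API endDate values.
from datetime import datetime, timedelta

def is_eligible_for_extension(end_date: str, now: datetime, window_days: int = 8) -> bool:
    """True if the ad expires within the next `window_days` days."""
    expiry = datetime.strptime(end_date, "%Y-%m-%d")
    return now <= expiry <= now + timedelta(days=window_days)
```

Ads already expired or expiring beyond the window are skipped, matching the "only extends active ads" behavior above.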

### ⚙️ Type of Change
- [x] ✨ New feature (adds new functionality without breaking existing
usage)

## ✅ Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have updated documentation where necessary (help text in English
+ German).

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added an "extend" command to find ads nearing expiry (default 8-day
window) or target specific IDs, open a session, attempt extensions, and
record per-ad outcomes.

* **Documentation**
* Updated CLI/help (bilingual) and README to document the extend
command, options (--ads), default behavior, and expiry-window
limitations.

* **Tests**
* Added comprehensive unit tests for eligibility rules, date parsing
(including German format), edge cases, UI interaction flows, timing, and
error handling.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-19 10:24:23 +01:00
Airwave1981
a2473081e6 fix: don't pass extra to BaseModel.model_validate (#772) 2026-01-18 20:14:00 +01:00
Jens
183f0ab4e1 fix: raise pydantic version for compatibility (#773)
## ℹ️ Description
Raise the Pydantic minimum version because we use features that require
at least v2.11.

- Link to the related issue(s): PR #772

## 📋 Changes Summary

- Set min version to v2.11

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)

## ✅ Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
  * Updated project dependencies to improve compatibility and stability.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-18 17:36:37 +01:00
kleinanzeigen-bot-tu[bot]
0146952e0c chore: ✔ Update ruff 0.14.11 -> 0.14.13 (#769)
✔ Update ruff 0.14.11 -> 0.14.13 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Jens <1742418+1cu@users.noreply.github.com>
2026-01-18 00:57:32 +01:00
Jens
183c01078e fix: correct sell_directly extraction using JSON API (#765) 2026-01-17 16:34:31 +01:00
dependabot[bot]
12dc3d2e13 ci(deps): bump github/codeql-action from 4.31.9 to 4.31.10 in the all-actions group (#768)
Bumps the all-actions group with 1 update:
[github/codeql-action](https://github.com/github/codeql-action).

Updates `github/codeql-action` from 4.31.9 to 4.31.10
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v4.31.10</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>4.31.10 - 12 Jan 2026</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.9. <a
href="https://redirect.github.com/github/codeql-action/pull/3393">#3393</a></li>
</ul>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v4.31.10/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's
changelog</a>.</em></p>
<blockquote>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>[UNRELEASED]</h2>
<p>No user facing changes.</p>
<h2>4.31.10 - 12 Jan 2026</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.9. <a
href="https://redirect.github.com/github/codeql-action/pull/3393">#3393</a></li>
</ul>
<h2>4.31.9 - 16 Dec 2025</h2>
<p>No user facing changes.</p>
<h2>4.31.8 - 11 Dec 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.8. <a
href="https://redirect.github.com/github/codeql-action/pull/3354">#3354</a></li>
</ul>
<h2>4.31.7 - 05 Dec 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.7. <a
href="https://redirect.github.com/github/codeql-action/pull/3343">#3343</a></li>
</ul>
<h2>4.31.6 - 01 Dec 2025</h2>
<p>No user facing changes.</p>
<h2>4.31.5 - 24 Nov 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.6. <a
href="https://redirect.github.com/github/codeql-action/pull/3321">#3321</a></li>
</ul>
<h2>4.31.4 - 18 Nov 2025</h2>
<p>No user facing changes.</p>
<h2>4.31.3 - 13 Nov 2025</h2>
<ul>
<li>CodeQL Action v3 will be deprecated in December 2026. The Action now
logs a warning for customers who are running v3 but could be running v4.
For more information, see <a
href="https://github.blog/changelog/2025-10-28-upcoming-deprecation-of-codeql-action-v3/">Upcoming
deprecation of CodeQL Action v3</a>.</li>
<li>Update default CodeQL bundle version to 2.23.5. <a
href="https://redirect.github.com/github/codeql-action/pull/3288">#3288</a></li>
</ul>
<h2>4.31.2 - 30 Oct 2025</h2>
<p>No user facing changes.</p>
<h2>4.31.1 - 30 Oct 2025</h2>
<ul>
<li>The <code>add-snippets</code> input has been removed from the
<code>analyze</code> action. This input has been deprecated since CodeQL
Action 3.26.4 in August 2024 when this removal was announced.</li>
</ul>
<h2>4.31.0 - 24 Oct 2025</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="cdefb33c0f"><code>cdefb33</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3394">#3394</a>
from github/update-v4.31.10-0fa411efd</li>
<li><a
href="cfa77c6b13"><code>cfa77c6</code></a>
Update changelog for v4.31.10</li>
<li><a
href="0fa411efd0"><code>0fa411e</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3393">#3393</a>
from github/update-bundle/codeql-bundle-v2.23.9</li>
<li><a
href="c284324212"><code>c284324</code></a>
Add changelog note</li>
<li><a
href="83e7d0046c"><code>83e7d00</code></a>
Update default bundle to codeql-bundle-v2.23.9</li>
<li><a
href="f6a16bef8e"><code>f6a16be</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3391">#3391</a>
from github/dependabot/npm_and_yarn/npm-minor-f1cdf5...</li>
<li><a
href="c1f5f1a8b5"><code>c1f5f1a</code></a>
Rebuild</li>
<li><a
href="1805d8d0a4"><code>1805d8d</code></a>
Bump the npm-minor group with 2 updates</li>
<li><a
href="b2951d2a1e"><code>b2951d2</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3353">#3353</a>
from github/kaspersv/bump-min-cli-v-for-overlay</li>
<li><a
href="41448d92b9"><code>41448d9</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3287">#3287</a>
from github/henrymercer/generate-mergeback-last</li>
<li>Additional commits viewable in <a
href="5d4e8d1aca...cdefb33c0f">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github/codeql-action&package-manager=github_actions&previous-version=4.31.9&new-version=4.31.10)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-15 22:04:26 +01:00
kleinanzeigen-bot-tu[bot]
a9150137b0 chore: Update Python dependencies (#766)
✔ Update jaraco-context 6.0.2 -> 6.1.0 successful
  ✔ Update tomli 2.3.0 -> 2.4.0 successful
  ✔ Update librt 0.7.7 -> 0.7.8 successful
  ✔ Update pyinstaller 6.17.0 -> 6.18.0 successful
  ✔ Update nodejs-wheel-binaries 24.12.0 -> 24.13.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-15 22:03:51 +01:00
kleinanzeigen-bot-tu[bot]
2ff8969d5a chore: Update Python dependencies (#760) 2026-01-10 15:28:41 +01:00
Alex Strutsysnkyi
f8a9c8e942 fix: set category before title to prevent form field reset (#763) 2026-01-10 15:28:00 +01:00
Jens
7d8a0c43d9 fix: restore build push triggers (#759) 2026-01-09 06:28:19 +01:00
Heavenfighter
066ecc87b8 fix: take care of changed belen_conf keys (#758)
## ℹ️ Description
This PR handles the renamed belen_conf dictionary keys, so extracting
special attributes and the third category level works again.

- Link to the related issue(s): Issue #757


## 📋 Changes Summary

- changed belen_conf keys from "dimension108" to "ad_attributes" and
"dimension92" to "l3_category_id"
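
The key rename above could be consumed roughly like this. This is a hedged illustration, not the merged fix: the accessor names and the fallback to the old `dimension*` keys are assumptions added for clarity:

```python
# Hedged sketch of reading the renamed belen_conf keys. The accessor
# names and the fallback to the old "dimension*" keys are illustrative,
# not the actual merged change.
from typing import Any

def get_l3_category_id(belen_conf: dict[str, Any]) -> Any:
    # New key first, old analytics key as a fallback for older payloads.
    return belen_conf.get("l3_category_id", belen_conf.get("dimension92"))

def get_ad_attributes(belen_conf: dict[str, Any]) -> Any:
    return belen_conf.get("ad_attributes", belen_conf.get("dimension108"))
```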

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ] ✨ New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


## ✅ Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Updated internal data extraction sources for category and attribute
information to align with current analytics configuration.
  * Updated test suite to reflect configuration changes.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Co-authored-by: Jens <1742418+1cu@users.noreply.github.com>
2026-01-08 22:16:46 +01:00
kleinanzeigen-bot-tu[bot]
8ab3f50385 chore: Update Python dependencies (#756)
✔ Update pathspec 0.12.1 -> 1.0.2 successful
  ✔ Update typer 0.21.0 -> 0.21.1 successful
  ✔ Update types-requests 2.32.4.20250913 -> 2.32.4.20260107 successful
  ✔ Update urllib3 2.6.2 -> 2.6.3 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-08 20:32:06 +01:00
kleinanzeigen-bot-tu[bot]
edafde6176 chore: Update Python dependencies (#755)
✔ Update filelock 3.20.1 -> 3.20.2 successful
  ✔ Update certifi 2025.11.12 -> 2026.1.4 successful
  ✔ Update psutil 7.2.0 -> 7.2.1 successful
  ✔ Update librt 0.7.5 -> 0.7.7 successful
  ✔ Update ruamel-yaml 0.18.17 -> 0.19.1 successful
  ✔ Update coverage 7.13.0 -> 7.13.1 successful
  ✔ Update basedpyright 1.36.2 -> 1.37.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-05 16:59:04 +01:00
Jens
ddeebc8bca fix: CI coverage on PRs (#746) 2025-12-28 20:36:11 +01:00
Jens
1aa08be4ec fix: eliminate duplicate auto price reduction wrapper methods (#753) 2025-12-28 20:34:03 +01:00
kleinanzeigen-bot-tu[bot]
613e2d728a chore: Update Python dependencies (#751)
✔ Update jaraco-context 6.0.1 -> 6.0.2 successful
  ✔ Update typer 0.20.1 -> 0.21.0 successful
  ✔ Update librt 0.7.4 -> 0.7.5 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-25 21:26:37 +01:00
kleinanzeigen-bot-tu[bot]
65860edff8 chore: Update Python dependencies (#750)
✔ Update pyparsing 3.2.5 -> 3.3.1 successful
  ✔ Update psutil 7.1.3 -> 7.2.0 successful
  ✔ Update pyinstaller-hooks-contrib 2025.10 -> 2025.11 successful
  ✔ Update basedpyright 1.36.1 -> 1.36.2 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-25 00:29:55 +01:00
dependabot[bot]
4abd0da10a ci(deps): bump github/codeql-action from 4.31.8 to 4.31.9 in the all-actions group (#749) 2025-12-22 16:54:48 +01:00
kleinanzeigen-bot-tu[bot]
b6d88483bb chore: ✔ Update jaraco-functools 4.3.0 -> 4.4.0 (#744)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-21 17:51:54 +01:00
Jens
8ea41d3230 fix: compare updates via release tag (#745)
## ℹ️ Description
- Link to the related issue(s): Issue #N/A
- Describe the motivation and context for this change.
Ensure update-check compares against release tags instead of moving
branch tips and keep tests/translations in sync.

## 📋 Changes Summary
- compare release commit via tag name first and fall back only when
missing
- update update-checker tests for commit-ish resolution and tag-based
release data
- refresh German translations for update-checker log strings

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* More reliable update checks by resolving commits from tags, branches
or hashes and robustly comparing short vs full hashes.
* Improved prerelease handling to avoid inappropriate preview updates
and better handling of missing release data.

* **Localization & UX**
* Error and prerelease messages now use localized strings; commit dates
shown consistently in UTC and short-hash form.

* **Tests**
* Updated tests to cover the new resolution flow, error cases, and
logging behavior.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-21 16:45:20 +01:00
Jens
01753d0cba fix: generate release notes without temp tags (#743)
## ℹ️ Description
*Provide a concise summary of the changes introduced in this pull
request.*

- Link to the related issue(s): N/A
- Describe the motivation and context for this change.
- Fix empty release notes when using moving `latest`/`preview` tags
without creating temp tags.
- Avoid GitHub App permission errors when pushing tags on
workflow-modifying commits.

## 📋 Changes Summary

- Use a fake `tag_name` and anchor `previous_tag_name` to the moving
release tag for generate-notes.
- Add log output showing the refs used for note generation.
- Keep removal of the “Full Changelog” line to avoid broken compare
links.

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)

##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-12-21 08:37:33 +01:00
Jens
c0a144fadc fix: ensure release notes diff with temp tags (#742)
## ℹ️ Description
*Provide a concise summary of the changes introduced in this pull
request.*

- Link to the related issue(s): N/A
- Describe the motivation and context for this change.
- Fix empty release notes when using moving `latest`/`preview` tags by
diffing two short‑lived tags.
- Remove the generated “Full Changelog” link because temporary tags are
deleted after notes generation.

## 📋 Changes Summary

- Generate release notes using a temp prev tag and a temp head tag to
ensure old → new comparisons.
- Clean up temp tags after notes generation to keep tags tidy.
- Strip the “Full Changelog” line to avoid broken compare links.

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [ ] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [ ] I have formatted the code (`pdm run format`).
- [ ] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-12-20 21:12:15 +01:00
Jens
767871dca4 fix: avoid mixed returns in pydantics (#741)
## ℹ️ Description
Fix remaining CodeQL mixed-returns warning in pydantics error message
mapping.

- Link to the related issue(s): Issue #
- Motivation/context: eliminate implicit return path to satisfy CodeQL
`py/mixed-returns` on `pydantics.__get_message_template`.

## 📋 Changes Summary
- Make the default `case _:` fall through and return `None` explicitly
at function end.

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Refactor**
* Minor code style adjustment with no functional impact on application
behavior.


<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-20 21:12:03 +01:00
Jens
ba9b14b71b fix: address codeql notes and warnings (#740) 2025-12-20 18:17:51 +01:00
Jens
f0ebb26e5d ci: fix generate-notes for moving latest/preview releases (#738) 2025-12-20 13:51:24 +01:00
kleinanzeigen-bot-tu[bot]
63a6cb8480 chore: ✔ Update typer 0.20.0 -> 0.20.1 (#739)
✔ Update typer 0.20.0 -> 0.20.1 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-20 13:12:46 +01:00
kleinanzeigen-bot-tu[bot]
30ec9eae3a chore: ✔ Update ruff 0.14.9 -> 0.14.10 (#737) 2025-12-20 08:44:04 +01:00
Jens
85e3b730cd ci: fix codeql triggers and release notes (#736) 2025-12-19 06:26:01 +01:00
kleinanzeigen-bot-tu[bot]
e556eefe71 chore: Update Python dependencies (#735)
✔ Update yamlfix 1.19.0 -> 1.19.1 successful
  ✔ Update ruamel-yaml 0.18.16 -> 0.18.17 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-18 21:01:42 +01:00
Jens
920ddf5533 feat: Add automatic price reduction on reposts (#691) 2025-12-17 20:31:58 +01:00
Jens
25079c32c0 fix: increase login detection timeout to fix intermittent failures (#701) (#726)
## ℹ️ Description

This PR fixes intermittent login detection failures where the bot fails
to detect existing login sessions and unnecessarily re-logins,
potentially causing IP blocks.

- Link to the related issue(s): Issue #701
- Describe the motivation and context for this change:

Users reported that the bot sometimes fails to detect existing login
sessions (50/50 behavior), especially for browser profiles that haven't
been used for 20+ days. This appears to be a race condition where:
1. `web_open()` completes when `document.readyState == 'complete'`
2. But kleinanzeigen.de's client-side JavaScript hasn't yet rendered
user profile elements
3. The login detection timeout (5s default) is too short for slow
networks or sessions requiring server-side validation

## 📋 Changes Summary

- **Add dedicated `login_detection` timeout** to `TimeoutConfig`
(default: 10s, previously used generic 5s timeout)
- **Apply timeout to both DOM checks** in `is_logged_in()`: `.mr-medium`
and `#user-email` elements
- **Add debug logging** to track which element detected login or if no
login was found
- **Regenerate JSON schema** to include new timeout configuration
- **Effective total timeout**: ~22.5s (10s base × 1.0 multiplier × 1.5² backoff
over 2 retries) vs previous ~11.25s

### Benefits:
- Addresses race condition between page load completion and client-side
rendering
- Provides sufficient time for sessions requiring server-side validation
(20+ days old)
- User-configurable via `timeouts.login_detection` in `config.yaml`
- Follows established pattern of dedicated timeouts (`sms_verification`,
`gdpr_prompt`, etc.)
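The effective-timeout figures quoted above can be reproduced as follows, assuming the 1.5× backoff compounds once per retry (that compounding is an interpretation chosen to match the numbers in the PR description, not a confirmed detail of the bot's retry logic):

```python
def effective_timeout(base: float, multiplier: float = 1.0,
                      backoff: float = 1.5, retries: int = 2) -> float:
    # Final-attempt timeout after `retries` multiplicative backoff steps.
    # Assumption: backoff is applied once per retry.
    return base * multiplier * backoff ** retries

assert effective_timeout(10.0) == 22.5   # new login_detection default
assert effective_timeout(5.0) == 11.25   # previous generic 5s timeout
```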

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added a configurable login-detection timeout (default 10s, min 1s) to
tune session detection.

* **Bug Fixes**
* More reliable login checks using a timeout-aware, two-step detection
sequence.
* Improved diagnostic logging for login attempts, retry behavior,
detection outcomes, and timeout events.

* **Documentation**
* Added troubleshooting guidance explaining the login-detection timeout
and when to adjust it.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-16 21:30:40 +01:00
kleinanzeigen-bot-tu[bot]
ce833b9350 chore: Update Python dependencies (#733) 2025-12-16 13:18:30 +01:00
Jens
0b995fae18 fix: handle Unicode normalization in save_dict for umlauts (#728) (#729) 2025-12-15 20:46:10 +01:00
kleinanzeigen-bot-tu[bot]
861b8ec367 chore: ✔ Update mypy 1.19.0 -> 1.19.1 (#730) 2025-12-15 18:01:30 +01:00
dependabot[bot]
8fd55ca074 ci(deps): bump the all-actions group with 3 updates (#731) 2025-12-15 17:58:48 +01:00
kleinanzeigen-bot-tu[bot]
1b9f78ab37 chore: Update Python dependencies (#727)
✔ Update urllib3 2.6.1 -> 2.6.2 successful
  ✔ Update ruff 0.14.8 -> 0.14.9 successful
  ✔ Update basedpyright 1.36.0 -> 1.36.1 successful
  ✔ Update nodejs-wheel-binaries 24.11.1 -> 24.12.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-12 16:54:01 +01:00
dependabot[bot]
733097b532 ci(deps): bump the all-actions group with 7 updates (#725)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-11 21:36:36 +01:00
Jens
efede9a5a2 ci: Fix CodeQL security warnings (#720)
## ℹ️ Description

This PR resolves all open CodeQL security warnings by implementing
recommended security best practices for GitHub Actions workflows and
addressing code analysis findings.

**Related**: Resolves CodeQL alerts 37-53

**Motivation**: CodeQL identified 17 security warnings across our
workflows and Python code. These warnings highlight potential supply
chain security risks (unpinned actions), missing security boundaries
(workflow permissions), and false positives that needed proper
documentation.

## 📋 Changes Summary

### Security Hardening
- **Pinned all GitHub Actions to commit SHAs** (26 action references
across 5 workflows)
- Added version comments for maintainability (e.g., `@8e8c483... #
v6.0.0`)
  - Dependabot will now auto-update these pinned SHAs securely
  
### Workflow Permissions
- Added explicit `permissions` block to `update-python-deps.yml`
workflow
- Added explicit `permissions: contents: read` to `publish-coverage` job
in `build.yml`
- Follows principle of least privilege

### Dependabot Configuration
- Enhanced `.github/dependabot.yml` with action update grouping (single
PR instead of multiple)
- Added `rebase-strategy: auto` for automatic conflict resolution

### Code Quality
- Added CodeQL suppression with detailed explanation in
`src/kleinanzeigen_bot/utils/reflect.py`
- Documented why explicit `del stack` is necessary for frame cleanup
(prevents false positive)

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Chores**
* CI workflows: pinned external actions to specific commits for
reproducible runs and added explicit permission scopes where required.
* Dependabot: grouped GitHub Actions updates into a single consolidated
group for unified updates and auto-rebasing.
* **Documentation**
* Expanded internal comments clarifying cleanup logic to reduce
potential reference-cycle concerns.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-11 21:24:24 +01:00
Jens
385af708e5 feat: Use GitHub auto-generated release notes instead of single commit message (#724)
## ℹ️ Description
Currently, release changelogs only show the last commit message, which
doesn't provide sufficient visibility into all changes included in a
release. This PR improves the release workflow to use GitHub's
auto-generated release notes, providing a comprehensive changelog of all
commits and PRs since the previous release.

- Addresses the issue of insufficient release changelog detail
- Improves transparency for users reviewing what changed in each release

## 📋 Changes Summary

- Added `--generate-notes` flag to `gh release create` command in
`.github/workflows/build.yml`
- Renamed `COMMIT_MSG` environment variable to `LEGAL_NOTICE` for better
clarity
- Legal disclaimers now append after the auto-generated changelog
instead of replacing it
- The auto-generated notes will include:
  - All commits since the last release
  - All merged PRs since the last release
  - Contributor attribution
  - Automatic categorization (New Contributors, Full Changelog, etc.)

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [ ] 🐞 Bug fix (non-breaking change which fixes an issue)
- [x]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Chores**
* Release process updated to embed a bilingual (English/German) legal
notice directly into generated release notes.
* Release creation now auto-generates notes using that legal notice so
published releases consistently include the legal text.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-11 15:31:03 +01:00
kleinanzeigen-bot-tu[bot]
bcf4857707 chore: ✔ Update basedpyright 1.35.0 -> 1.36.0 (#723)
✔ Update basedpyright 1.35.0 -> 1.36.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-10 14:05:09 +01:00
Jens
9ed87ff17f fix: wait for image upload completion before submitting ad (#716)
## ℹ️ Description
Fixes a race condition where ads were submitted before all images
finished uploading to the server, causing some images to be missing from
published ads.

- Link to the related issue(s): Issue #715
- The bot was submitting ads immediately after the last image
`send_file()` call completed, only waiting 1-2.5 seconds via
`web_sleep()`. This wasn't enough time for server-side image processing,
thumbnail generation, and DOM updates to complete, resulting in missing
images in published ads.

## 📋 Changes Summary

### Image Upload Verification (Initial Fix)
- Added thumbnail verification in `__upload_images()` method to wait for
all image thumbnails to appear in the DOM after upload
- Added configurable timeout `image_upload` to `TimeoutConfig` (default:
30s, minimum: 5s)
- Improved error messages to show expected vs actual image count when
upload times out
- Added German translations for new log messages and error messages
- Regenerated JSON schemas to include new timeout configuration

### Polling Performance & Crash Fix (Follow-up Fix)
- Fixed critical bug where `web_find_all()` would raise `TimeoutError`
when no thumbnails exist yet, causing immediate crash
- Wrapped DOM queries in `try/except TimeoutError` blocks to handle
empty results gracefully
- Changed polling to use `self._timeout("quick_dom")` (~1s with PR #718)
instead of default timeout
- Improved polling performance: reduced cycle time from ~2s to ~1.5s
- DOM queries are client-side only (no server load from frequent
polling)

**New configuration option:**
```yaml
timeouts:
  image_upload: 30.0  # Total timeout for image upload and server-side processing
  quick_dom: 1.0      # Per-poll timeout for thumbnail checks (adjustable via multiplier)
```

The bot now polls the DOM for `ul#j-pictureupload-thumbnails >
li.ui-sortable-handle` elements after uploading images, ensuring
server-side processing is complete before submitting the ad form.
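The polling loop described above can be sketched as follows; `count_thumbnails` is a hypothetical stand-in for the real DOM query against `ul#j-pictureupload-thumbnails > li.ui-sortable-handle`, and the timeout defaults mirror the config values:

```python
import asyncio
import time

# Hypothetical sketch of the thumbnail-polling wait; not the bot's code.
async def wait_for_thumbnails(count_thumbnails, expected: int,
                              timeout: float = 30.0, poll: float = 0.5) -> int:
    deadline = time.monotonic() + timeout
    seen = 0
    while time.monotonic() < deadline:
        try:
            seen = await count_thumbnails()  # per-poll DOM check
        except TimeoutError:
            seen = 0  # no thumbnails rendered yet; keep polling
        if seen >= expected:
            return seen
        await asyncio.sleep(poll)
    raise TimeoutError(f"expected {expected} thumbnails, found {seen}")
```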

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Image uploads now verify completion by waiting for all uploaded
thumbnails to appear before proceeding.

* **Improvements**
  * Added a configurable image upload timeout (default 30s, minimum 5s).
* Improved timeout reporting: when thumbnails don’t appear in time, the
app returns clearer feedback showing expected vs. observed counts.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-10 14:04:42 +01:00
kleinanzeigen-bot-tu[bot]
1db304b7ae chore: Update Python dependencies (#722)
✔ Update urllib3 2.6.0 -> 2.6.1 successful
  ✔ Update coverage 7.12.0 -> 7.13.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-09 19:30:02 +01:00
kleinanzeigen-bot-tu[bot]
fcc80bbce8 chore: Update Python dependencies (#719)
✔ Update pytest 9.0.1 -> 9.0.2 successful
  ✔ Update librt 0.7.0 -> 0.7.3 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-08 10:48:03 +01:00
Jens
00fa0d359f feat: Add Chrome 136+ safe defaults for browser configuration (#717)
## ℹ️ Description

This PR updates the default browser configuration to be safe for
Chrome/Chromium 136+ out of the box.

Chrome 136+ (released March 2025) requires `--user-data-dir` to be
specified when using `--remote-debugging-port` for security reasons.
Since nodriver relies on remote debugging, the bot needs proper defaults
to avoid validation errors.

**Motivation:** Eliminate Chrome 136+ configuration validation errors
for fresh installations and ensure session persistence by default.

## 📋 Changes Summary

- Set `browser.arguments` default to include
`--user-data-dir=.temp/browser-profile`
- Set `browser.user_data_dir` default to `.temp/browser-profile`
(previously `None`)
- Regenerated JSON schema (`config.schema.json`) with new defaults
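In `config.yaml` terms, the new defaults listed above would correspond to roughly the following (a sketch based on the summary; key names are taken from the bullet points, not verified against the schema):

```yaml
browser:
  arguments:
    - --user-data-dir=.temp/browser-profile
  user_data_dir: .temp/browser-profile  # previously null
```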

**Benefits:**
-  Chrome 136+ compatible out of the box (no validation errors)
-  Browser session/cookies persist across runs (better UX)
-  Consistent with existing `.temp` directory pattern (update state,
caches)
-  Already gitignored - no accidental commits of browser profiles

**No breaking changes:** Existing configs with explicit
`browser.arguments: []` continue to work (users can override defaults).

### ⚙️ Type of Change
- [x]  New feature (adds new functionality without breaking existing
usage)

##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Standardized browser profile configuration with improved default user
data directory settings.


<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-07 01:07:35 +01:00
kleinanzeigen-bot-tu[bot]
645cc40633 chore: ✔ Update librt 0.6.3 -> 0.7.0 (#714)
✔ Update librt 0.6.3 -> 0.7.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-07 00:08:06 +01:00
Sebastian Thomschke
c3091bfe4e ci: update CodeQL workflow (#713) 2025-12-05 22:24:20 +01:00
Bjoern147
5f68c09899 feat: Improved WebSelect Handling: Added Combobox Support, Enhanced Element Detection, and Smarter Option Matching (#679)
## ℹ️ Description

Added a WebSelect function for input/dropdown comboboxes.
PR for the missing feature described in issue #677.

# Fixes / Enhancements

Finding special-attribute elements can fail because they are currently
selected only via the name="..." attribute of the HTML elements. If that
lookup fails, the code now also falls back to selecting special-attribute
HTML elements by ID (for example, the "brands" input/combobox for men's
shoes).

When selecting a value in a <select>, matching no longer relies only on
the actual option value (xxx in the example <option
value="xxx">yyy</option>) but also on the displayed HTML text (i.e. yyy
in the above example). This improves UX because the user doesn't have to
look up the actual "value" of the option and can instead use the value
displayed in the browser directly.


Test cases for Webselect_Combobox were not added due to missing knowledge
about proper async mocking.


## 📋 Changes Summary

 Fixes & Enhancements
- New WebSelect Functionality
- Improved Element Detection for Special Attributes
- Enhanced <select> Option Matching Logic

This improves UX and test robustness — users no longer need to know the
exact underlying value, as matching also works with the visible label
shown in the browser.

🧩 Result

These updates make dropdown and combobox interactions more intuitive,
resilient, and user-friendly across diverse HTML structures.


### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [x]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [ ] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Field lookup now falls back to locating by ID when name lookup times
out.
* Option selection uses a two-pass match (value then displayed text);
JS-path failures now surface as timeouts.
  * Error and log messages localized and clarified.

* **New Features**
* Support for combobox-style inputs: type into the input, open dropdown,
and select by visible text (handles special characters).

* **Tests**
* Added tests for combobox selection, missing dropdowns, no-match
errors, value-path selection, and special-character handling.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Jens <1742418+1cu@users.noreply.github.com>
Co-authored-by: Claude <claude@anthropic.com>
2025-12-05 21:03:31 +01:00
Jens
220c01f257 fix: eliminate async safety violations and migrate to pathlib (#697)
## ℹ️ Description
Eliminate all blocking I/O operations in async contexts and modernize
file path handling by migrating from os.path to pathlib.Path.

- Link to the related issue(s): #692 
- Get rid of the TODO in pyproject.toml
- The added debug logging will ease the troubleshooting for path related
issues.

## 📋 Changes Summary

- Enable ASYNC210, ASYNC230, ASYNC240, ASYNC250 Ruff rules
- Wrap blocking urllib.request.urlopen() in run_in_executor
- Wrap blocking file operations (open, write) in run_in_executor
- Replace blocking os.path calls with async helpers using
run_in_executor
- Replace blocking input() with await ainput()
- Migrate extract.py from os.path to pathlib.Path
- Use Path() constructor and / operator for path joining
- Use Path.mkdir(), Path.rename() in executor instead of os functions
- Create mockable _path_exists() and _path_is_dir() helpers
- Add debug logging for all file system operations
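The executor-wrapping pattern behind these changes can be sketched as follows; the helper names mirror the `_path_exists()`/`_path_is_dir()` helpers mentioned above, but the exact signatures are assumptions:

```python
import asyncio
from pathlib import Path

# Illustrative async path helpers: blocking filesystem calls are moved
# off the event loop via run_in_executor.
async def _path_exists(path: Path) -> bool:
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, path.exists)

async def _path_is_dir(path: Path) -> bool:
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, path.is_dir)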

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [X] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [X] I have reviewed my changes to ensure they meet the project's
standards.
- [X] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [X] I have formatted the code (`pdm run format`).
- [X] I have verified that linting passes (`pdm run lint`).
- [X] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Refactor**
  * Made user prompt non‑blocking to improve responsiveness.
  * Converted filesystem/path handling and prefs I/O to async‑friendly operations; moved blocking network and file work to background tasks.
  * Added async file/path helpers and async port‑check before browser connections.

* **Tests**
  * Expanded unit tests for path helpers, image download success/failure, prefs writing, and directory creation/renaming workflows.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-05 20:53:40 +01:00
Jens
6cbc25b54c docs: Improve README clarity and fix configuration documentation (#711)
## ℹ️ Description

This PR addresses issue #708 by improving the README's About section to
make the bot's purpose clearer to new users. It also fixes a technical
inaccuracy in the configuration documentation.

- Link to the related issue(s): Issue #708
- **Motivation**: The current About section uses ambiguous terminology
("ads" instead of "listings") and doesn't clearly communicate what the
bot does. Additionally, the configuration example incorrectly documents
`shipping_costs` as available in `ad_defaults`, when it's only
implemented for per-ad configuration.

## 📋 Changes Summary

**About Section Improvements:**
- Changed "ads" to "listings" for clarity (addresses confusion mentioned
in #708)
- Added "Key Features" section with 6 concrete capabilities
- Added "Why This Project?" section explaining the rewrite and
differences from legacy client
- Preserved all legal disclaimers

**Configuration Documentation Fix:**
- Removed `shipping_costs` from `ad_defaults` example (not implemented
in `AdDefaults` Pydantic class)
- Added clarifying comment that `shipping_costs` and `shipping_options`
must be configured per-ad
- Verified `shipping_costs` remains documented in ad configuration
section

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

*Note: This is a documentation-only change with no code modifications.*

##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (N/A -
documentation only).
- [x] I have formatted the code (N/A - documentation only).
- [x] I have verified that linting passes (N/A - documentation only).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-12-05 20:51:48 +01:00
Jens
9877f26407 chore: Improve CodeRabbit path filters configuration (#712)
## ℹ️ Description

This PR improves the CodeRabbit configuration to ensure all important
project files are reviewed while excluding only build artifacts and
temporary files.

The previous configuration used a blanket `!**/.*` exclusion that was
unintentionally filtering out the entire `.github` directory, including
workflows, dependabot config, issue templates, and CODEOWNERS files.

## 📋 Changes Summary

- **Added** `.github/**` to include all GitHub automation files
(workflows, dependabot, templates, CODEOWNERS)
- **Added** root config files (`pyproject.toml`, `*.yaml`, `*.yml`,
`**/*.md`)
- **Removed** overly broad `!**/.*` exclusion pattern
- **Added** specific exclusions for Python cache directories
(`.pytest_cache`, `.mypy_cache`, `.ruff_cache`)
- **Added** explicit IDE file exclusions (`.vscode`, `.idea`,
`.DS_Store`)
- **Added** `pdm.lock` exclusion to reduce noise

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
  * Updated internal code review configuration and automation settings.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-12-05 20:39:10 +01:00
kleinanzeigen-bot-tu[bot]
455862eb51 chore: Update Python dependencies (#709)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-05 20:28:20 +01:00
sebthom
afbd73e368 ci: fix publish release workflow 2025-12-05 20:20:46 +01:00
sebthom
65d40be3eb ci: add publish release workflow 2025-12-04 18:00:32 +01:00
kleinanzeigen-bot-tu[bot]
f0704addad chore: ✔ Update basedpyright 1.34.0 -> 1.35.0 (#707)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-04 16:45:03 +01:00
Jens
6c2cba50fa fix: Handle missing dimension108 in special attributes extraction (#706) 2025-12-04 14:01:11 +01:00
kleinanzeigen-bot-tu[bot]
554c3a4e1f chore: ✔ Update cyclonedx-python-lib 11.5.0 -> 11.6.0 (#704) 2025-12-03 12:41:24 +01:00
kleinanzeigen-bot-tu[bot]
ed53639ec6 chore: Update Python dependencies (#702)
✔ Update pip-audit 2.9.0 -> 2.10.0 successful
  ✔ Update cyclonedx-python-lib 9.1.0 -> 11.5.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-03 07:41:58 +01:00
kleinanzeigen-bot-tu[bot]
9aaefe8657 chore: Update Python dependencies (#700)
✔ Update packageurl-python 0.17.5 -> 0.17.6 successful
  ✔ Update pyinstaller 6.16.0 -> 6.17.0 successful
  ✔ Update pydantic 2.12.4 -> 2.12.5 successful
  ✔ Update ruff 0.14.6 -> 0.14.7 successful
  ✔ Update mypy 1.18.2 -> 1.19.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-01 22:33:52 +01:00
dependabot[bot]
119de19d75 ci(deps): bump actions/checkout from 5 to 6 (#696)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to
6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update README to include Node.js 24 support details and requirements
by <a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2248">actions/checkout#2248</a></li>
<li>Persist creds to a separate file by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2286">actions/checkout#2286</a></li>
<li>v6-beta by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2298">actions/checkout#2298</a></li>
<li>update readme/changelog for v6 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2311">actions/checkout#2311</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v5.0.0...v6.0.0">https://github.com/actions/checkout/compare/v5.0.0...v6.0.0</a></p>
<h2>v6-beta</h2>
<h2>What's Changed</h2>
<p>Updated persist-credentials to store the credentials under
<code>$RUNNER_TEMP</code> instead of directly in the local git
config.</p>
<p>This requires a minimum Actions Runner version of <a
href="https://github.com/actions/runner/releases/tag/v2.329.0">v2.329.0</a>
to access the persisted credentials for <a
href="https://docs.github.com/en/actions/tutorials/use-containerized-services/create-a-docker-container-action">Docker
container action</a> scenarios.</p>
<h2>v5.0.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Port v6 cleanup to v5 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2301">actions/checkout#2301</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v5...v5.0.1">https://github.com/actions/checkout/compare/v5...v5.0.1</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>V6.0.0</h2>
<ul>
<li>Persist creds to a separate file by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2286">actions/checkout#2286</a></li>
<li>Update README to include Node.js 24 support details and requirements
by <a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2248">actions/checkout#2248</a></li>
</ul>
<h2>V5.0.1</h2>
<ul>
<li>Port v6 cleanup to v5 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2301">actions/checkout#2301</a></li>
</ul>
<h2>V5.0.0</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
</ul>
<h2>V4.3.1</h2>
<ul>
<li>Port v6 cleanup to v4 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2305">actions/checkout#2305</a></li>
</ul>
<h2>V4.3.0</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<h2>v4.2.2</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<h2>v4.2.1</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>v4.2.0</h2>
<ul>
<li>Add Ref and Commit outputs by <a
href="https://github.com/lucacome"><code>@​lucacome</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1180">actions/checkout#1180</a></li>
<li>Dependency updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>- <a
href="https://redirect.github.com/actions/checkout/pull/1777">actions/checkout#1777</a>,
<a
href="https://redirect.github.com/actions/checkout/pull/1872">actions/checkout#1872</a></li>
</ul>
<h2>v4.1.7</h2>
<ul>
<li>Bump the minor-npm-dependencies group across 1 directory with 4
updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1739">actions/checkout#1739</a></li>
<li>Bump actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1697">actions/checkout#1697</a></li>
<li>Check out other refs/* by commit by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1774">actions/checkout#1774</a></li>
<li>Pin actions/checkout's own workflows to a known, good, stable
version. by <a href="https://github.com/jww3"><code>@​jww3</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1776">actions/checkout#1776</a></li>
</ul>
<h2>v4.1.6</h2>
<ul>
<li>Check platform to set archive extension appropriately by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1732">actions/checkout#1732</a></li>
</ul>
<h2>v4.1.5</h2>
<ul>
<li>Update NPM dependencies by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1703">actions/checkout#1703</a></li>
<li>Bump github/codeql-action from 2 to 3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1694">actions/checkout#1694</a></li>
<li>Bump actions/setup-node from 1 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1696">actions/checkout#1696</a></li>
<li>Bump actions/upload-artifact from 2 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1695">actions/checkout#1695</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1af3b93b68"><code>1af3b93</code></a>
update readme/changelog for v6 (<a
href="https://redirect.github.com/actions/checkout/issues/2311">#2311</a>)</li>
<li><a
href="71cf2267d8"><code>71cf226</code></a>
v6-beta (<a
href="https://redirect.github.com/actions/checkout/issues/2298">#2298</a>)</li>
<li><a
href="069c695914"><code>069c695</code></a>
Persist creds to a separate file (<a
href="https://redirect.github.com/actions/checkout/issues/2286">#2286</a>)</li>
<li><a
href="ff7abcd0c3"><code>ff7abcd</code></a>
Update README to include Node.js 24 support details and requirements (<a
href="https://redirect.github.com/actions/checkout/issues/2248">#2248</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/checkout/compare/v5...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=5&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 18:11:09 +01:00
kleinanzeigen-bot-tu[bot]
03b91a3d8c chore: Update Python dependencies (#695)
✔ Update altgraph 0.17.4 -> 0.17.5 successful
  ✔ Update exceptiongroup 1.3.0 -> 1.3.1 successful
  ✔ Update pyinstaller-hooks-contrib 2025.9 -> 2025.10 successful
  ✔ Update ruff 0.14.5 -> 0.14.6 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-24 18:10:05 +01:00
kleinanzeigen-bot-tu[bot]
651c894a86 chore: ✔ Update basedpyright 1.33.0 -> 1.34.0 (#694) 2025-11-20 12:47:56 +01:00
kleinanzeigen-bot-tu[bot]
5e6668d268 chore: Update Python dependencies (#693) 2025-11-19 15:08:06 +01:00
Heavenfighter
c7733eb1a9 fix: Setting correct shipping and package size in update mode (#690)
## ℹ️ Description
This PR fixes the update logic for shipping options.
Some categories use a different dialog sequence, which must be taken
into account.
The selection of the correct shipping sizes was also refactored.

- Link to the related issue(s): Issue #687 


## 📋 Changes Summary
- A check was added to determine whether two dialogs have to be closed in update mode
- The logic for setting the package size in update mode was refactored

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
  * Improved shipping-option handling when editing listings: the flow now more reliably navigates the shipping dialog, correctly selects or deselects options based on item size and the desired configuration, and avoids incorrect selections across size categories, resulting in more consistent shipping choices when modifying ads.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-11-18 08:43:54 +01:00
kleinanzeigen-bot-tu[bot]
5c3b243194 chore: Update ruamel-yaml-clib 0.2.14 -> 0.2.15 (#688)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-17 11:28:01 +01:00
Jens
89df56bf8b test: strengthen coverage for sessions, logging, and update check (#686)
## ℹ️ Description
* Strengthen the session/logging/update-check tests to exercise real
resources and guards while bringing the update-check docs in line with
the supported interval units.
- Link to the related issue(s): Issue #N/A

## 📋 Changes Summary
- Reworked the `WebScrapingMixin` session tests so they capture each
`stop` handler before the browser reference is nulled, ensuring cleanup
logic is exercised without crashing.
- Added targeted publish and update-check tests that patch the async
helpers, guard logic, and logging handlers while confirming
`requests.get` is skipped when the state gate is closed.
- Updated `docs/update-check.md` to list only the actually supported
interval units (up to 30 days) and noted the new guard coverage in the
changelog.

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Tests**
  * Expanded test coverage for publish workflow orchestration and update checking interval behavior.
  * Added comprehensive browser session cleanup tests, including idempotent operations and edge case handling.
  * Consolidated logging configuration tests with improved handler management validation.
  * Refined test fixtures and assertions for better test reliability.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-11-17 11:02:18 +01:00
Jens
b99966817c chore: harden version helper (#684)
## ℹ️ Description
Currently version.py isn't checked by the linters. Ran linters manually
and fixed all lints.

- Link to the related issue(s): none

## 📋 Changes Summary

- Introduced shutil.which("git") so the helper explicitly locates the
Git binary and raises a clear error when it’s absent rather than relying
on a relative PATH.
- Switched to subprocess.run(..., capture_output=True, text=True) with
the located executable, guarding the call with check=True and # noqa:
S603 since the arguments are trusted.
- Made the timestamp timezone-aware with datetime.now(timezone.utc) to
avoid implicit local-time assumptions when creating the YYYY+<commit>
version string.
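
The three bullets above combine into roughly this shape (a hedged sketch; the function name and exact git invocation are illustrative, not the project's actual version.py):

```python
import shutil
import subprocess
from datetime import datetime, timezone


def build_version_string() -> str:
    """Illustrative sketch of the hardened version helper described above."""
    git = shutil.which("git")  # explicitly locate the Git binary
    if git is None:
        raise RuntimeError("git executable not found on PATH")
    result = subprocess.run(  # noqa: S603 - arguments are trusted
        [git, "rev-parse", "--short", "HEAD"],
        capture_output=True,
        text=True,
        check=True,
    )
    commit = result.stdout.strip()
    # timezone-aware timestamp avoids implicit local-time assumptions
    year = datetime.now(timezone.utc).strftime("%Y")
    return f"{year}+{commit}"
```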

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-11-16 17:50:04 +01:00
Jens
4870bc223f chore: improve coverage reporting (#683)
## ℹ️ Description
* Restrict coverage reporting to library files and collect per-suite
coverage data for Codecov’s flags.
- Link to the related issue(s): Issue #N/A

## 📋 Changes Summary
- add `coverage:prepare` and per-suite `COVERAGE_FILE`s so each test
group writes its own sqlite and XML artifacts without appending
- replace the shell scripts with `scripts/coverage_helper.py`, scope the
report to `src/kleinanzeigen_bot/*`, and add logging/validation around
cleanup, pytest runs, and data combining
- ensure the helper works in CI (accepts extra pytest args, validates
file presence)
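
The per-suite `COVERAGE_FILE` idea boils down to giving each test group its own data file so runs never append to or clobber each other before combining. A minimal sketch, assuming hypothetical helper names (not the actual contents of `scripts/coverage_helper.py`):

```python
import os
import subprocess
import sys


def suite_env(suite_name: str) -> dict[str, str]:
    """Build an environment where coverage writes to a per-suite sqlite file."""
    env = dict(os.environ)
    env["COVERAGE_FILE"] = f".coverage.{suite_name}"
    return env


def run_suite(suite_name: str, pytest_args: list[str]) -> None:
    """Run one pytest group so its coverage data stays isolated."""
    subprocess.run(
        [sys.executable, "-m", "pytest", *pytest_args],
        env=suite_env(suite_name),
        check=True,
    )
```

The per-suite `.coverage.<name>` files can later be merged with `coverage combine` before generating the scoped report.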

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.
2025-11-16 17:46:02 +01:00
kleinanzeigen-bot-tu[bot]
3a79059335 chore: ✔ Update click 8.3.0 -> 8.3.1 (#685)
✔ Update click 8.3.0 -> 8.3.1 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-16 14:39:09 +01:00
kleinanzeigen-bot-tu[bot]
9fc118e5fe chore: Update Python dependencies (#682)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-14 11:54:27 +01:00
Jens
a3ac27c441 feat: add configurable timeouts (#673)
## ℹ️ Description
- Related issues: #671, #658
- Introduces configurable timeout controls plus retry/backoff handling
for flaky DOM operations.

We often see timeouts that are not reproducible in certain
configurations. I suspect these timeout issues stem from a combination
of internet speed, browser, OS, age of the computer, and the weather.

This PR introduces a comprehensive config model to tweak timeouts.

## 📋 Changes Summary
- add TimeoutConfig to the main config/schema and expose timeouts in
README/docs
- wire WebScrapingMixin, extractor, update checker, and browser
diagnostics to honor the configurable timeouts and retries
- update translations/tests to cover the new behaviour and ensure
lint/mypy/pyright pipelines remain green
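
As a rough sketch of what such a retry/backoff wrapper can look like (field and function names here are assumptions for illustration; the actual `TimeoutConfig` schema lives in the main config model):

```python
import asyncio
from dataclasses import dataclass
from typing import Awaitable, Callable, TypeVar

T = TypeVar("T")


@dataclass
class TimeoutConfig:
    # Field names are illustrative, not the project's actual schema.
    timeout: float = 10.0         # seconds allowed per attempt
    max_retries: int = 3
    backoff_initial: float = 0.5  # first delay between attempts
    backoff_factor: float = 2.0   # exponential growth of the delay


async def with_retries(op: Callable[[], Awaitable[T]], cfg: TimeoutConfig) -> T:
    """Retry a flaky async operation with exponential backoff on timeout."""
    for attempt in range(cfg.max_retries + 1):
        try:
            return await asyncio.wait_for(op(), timeout=cfg.timeout)
        except asyncio.TimeoutError:
            if attempt == cfg.max_retries:
                raise
            await asyncio.sleep(cfg.backoff_initial * cfg.backoff_factor ** attempt)
    raise AssertionError("unreachable")
```

Centralizing these knobs in one config object is what lets users on slow machines or connections tune waits without code changes.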

### ⚙️ Type of Change
- [ ] 🐞 Bug fix (non-breaking change which fixes an issue)
- [x]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
  * Centralized, configurable timeout system for web interactions, detection flows, publishing, and pagination.
  * Optional retry with exponential backoff for operations that time out.

* **Improvements**
  * Replaced fixed wait times with dynamic timeouts throughout workflows.
  * More informative timeout-related messages and diagnostics.

* **Tests**
  * New and expanded test coverage for timeout behavior, pagination, diagnostics, and retry logic.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-11-13 15:08:52 +01:00
kleinanzeigen-bot-tu[bot]
ac678ed888 chore: Update Python dependencies (#681)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-13 11:31:00 +01:00
Jens
33d1964f86 feat: speed up and stabilise test suite (#676)
## ℹ️ Description

- Link to the related issue(s): Issue #

Refactors the test harness for faster and more reliable feedback: adds
deterministic time freezing for update checks, accelerates and refactors
smoke tests to run in-process, defaults pytest to xdist with durations
tracking, and adjusts CI triggers so PRs run the test matrix only once.

## 📋 Changes Summary

- add pytest-xdist + durations reporting defaults, force deterministic
locale and slow markers, and document the workflow adjustments
- run smoke tests in-process (no subprocess churn), mock update
checks/logging, and mark slow specs appropriately
- deflake update-check interval tests by freezing datetime and simplifying the FixedDateTime helper
- limit GitHub Actions `push` trigger to `main` so feature branches rely
on the single pull_request run
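
The datetime-freezing approach for the update-check tests can be sketched like this; `FixedDateTime` is named in this PR, but the details here are a minimal illustration rather than the project's actual helper:

```python
from datetime import datetime, timezone


class FixedDateTime(datetime):
    """Minimal frozen-clock stand-in for datetime in update-check tests."""

    frozen = datetime(2025, 11, 12, 12, 0, tzinfo=timezone.utc)

    @classmethod
    def now(cls, tz=None):
        # Always report the same instant, so interval math is deterministic.
        return cls.frozen if tz is None else cls.frozen.astimezone(tz)
```

A test would then monkeypatch the module under test's `datetime` reference to `FixedDateTime`, so interval comparisons see a stable clock instead of wall time.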

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [ ] 🐞 Bug fix (non-breaking change which fixes an issue)
- [x]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Tests**
  * Ensure tests run in a consistent English locale and restore prior locale after each run
  * Mark integration scraping tests as slow for clearer categorization
  * Replace subprocess-based CLI tests with an in-process runner that returns structured results and captures combined stdout/stderr/logs; disable update checks during smoke tests
  * Freeze current time in update-check tests for deterministic assertions
  * Add mock for process enumeration in web‑scraping unit tests to stabilize macOS-specific warnings

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-11-12 21:29:51 +01:00
kleinanzeigen-bot-tu[bot]
91cb677d17 chore: ✔ Update certifi 2025.10.5 -> 2025.11.12 (#680)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-12 11:25:22 +01:00
kleinanzeigen-bot-tu[bot]
c3c278b6a1 chore: Update Python dependencies (#678)
✔ Update pytest-asyncio 1.2.0 -> 1.3.0 successful
  ✔ Update pytest 8.4.2 -> 9.0.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-11 13:02:37 +01:00
Jens
71feedc700 fix: pin nodriver to 0.47 (#675)
## ℹ️ Description
- Link to the related issue(s): Issue #N/A

Pin `nodriver` to the last known good 0.47 series so we can avoid the
UTF-8 decoding regression in 0.48.x that currently breaks our local
mypy/linting runs.

## 📋 Changes Summary
- lock runtime dependency `nodriver` to `0.47.*` with an inline comment
describing the upstream regression
- refresh `pdm.lock` so local/CI installs stay on the pinned version

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [ ] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [ ] I have formatted the code (`pdm run format`).
- [ ] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-11-10 11:36:44 +01:00
kleinanzeigen-bot-tu[bot]
d28a2b2cfa chore: Update Python dependencies (#669)
✔ Update deprecated 1.2.18 -> 1.3.1 successful
  ✔ Update nodriver 0.47.0 -> 0.48.1 successful
  ✔ Update psutil 7.1.2 -> 7.1.3 successful
  ✔ Update pydantic 2.12.3 -> 2.12.4 successful
  ✔ Update wrapt 1.17.3 -> 2.0.1 successful
  ✔ Update coverage 7.11.0 -> 7.11.3 successful
  ✔ Update basedpyright 1.32.1 -> 1.33.0 successful
  ✔ Update ruff 0.14.2 -> 0.14.4 successful
  ✔ Update pydantic-core 2.41.4 -> 2.41.5 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-11-10 11:27:57 +01:00
sebthom
974646fa43 ci: fix "mv: cannot stat 'artifacts-macos-13/...': No such file" 2025-11-03 11:15:28 +01:00
sebthom
214dd09809 fix: GHA workflow fails to delete untagged docker images 2025-11-02 16:30:36 +01:00
Sebastian Thomschke
1244fce528 ci: Update GHA workflow config (test on MacOS 15) (#670) 2025-11-02 12:41:23 +01:00
Jens
e76abc66e8 fix: harden category extraction breadcrumb parsing (#668)
## ℹ️ Description
- Link to the related issue(s): Issue #667
- Harden breadcrumb category extraction so downloads no longer fail when
the breadcrumb structure changes.

## 📋 Changes Summary
- Parse breadcrumb anchors dynamically and fall back with debug logging
when legacy selectors are needed.
- Added unit coverage for multi-anchor, single-anchor, and fallback
scenarios to keep diff coverage above 80%.
- Documented required lint/format/test steps in PR checklist; no new
dependencies.

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Improved category extraction accuracy with enhanced breadcrumb
parsing.
* Better handling for listings with a single breadcrumb (returns stable
category identifier).
* More resilient fallback when breadcrumb data is missing or malformed.
* Safer normalization of category identifiers to avoid incorrect parsing
across site variations.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-10-28 15:10:01 +01:00
dependabot[bot]
9c73696b29 ci(deps): bump actions/upload-artifact from 4 to 5 (#666)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 17:33:52 +01:00
dependabot[bot]
88196838dd ci(deps): bump actions/download-artifact from 5 to 6 (#665)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 15:55:14 +01:00
kleinanzeigen-bot-tu[bot]
f20da20287 chore: ✔ Update psutil 7.1.1 -> 7.1.2 (#663)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-26 11:23:01 +01:00
Jens
08a60c2043 chore: remove pip 25.2 audit exception after pip 25.3 release (#661) 2025-10-25 20:40:45 +02:00
kleinanzeigen-bot-tu[bot]
06bbd0ef6f chore: ✔ Update pip 25.2 -> 25.3 (#659)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-25 12:23:09 +02:00
Jens
ae5b09a997 feat: add CodeRabbit configuration and propose adopting CodeRabbit (#656) 2025-10-25 12:21:52 +02:00
kleinanzeigen-bot-tu[bot]
27a17f3e56 chore: Update Python dependencies (#657)
✔ Update typer 0.19.2 -> 0.20.0 successful
  ✔ Update ruamel-yaml 0.18.15 -> 0.18.16 successful
  ✔ Update ruff 0.14.1 -> 0.14.2 successful
  ✔ Update basedpyright 1.31.7 -> 1.32.1 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-24 12:44:06 +02:00
kleinanzeigen-bot-tu[bot]
20e43db2ef chore: ✔ Update psutil 7.1.0 -> 7.1.1 (#655)
✔ Update psutil 7.1.0 -> 7.1.1 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-20 12:42:16 +02:00
Jens
06a716f87a fix: condition dialog selector for special attributes (#653)
## ℹ️ Description

- Link to the related issue(s): Issue #648
- Fix condition dialog selector that was failing to open and select
condition values for special attributes during ad publishing.

## 📋 Changes Summary

- Remove unused condition_mapping dictionary that was not needed
- Fix dialog button selector to use aria-haspopup attribute instead of
non-existent SelectionButton class
- Fix radio button selection to use ID selector instead of data-testid
approach
- Simplify confirm button XPath selector for better reliability

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-10-20 09:56:52 +02:00
Jens
339d66ed47 feat: Replace custom RemoteObject wrapper with direct NoDriver 0.47+ usage (#652)
## ℹ️ Description
*Replace custom RemoteObject serialization wrapper with direct NoDriver
0.47+ RemoteObject API usage for better performance and
maintainability.*

- **Motivation**: The custom wrapper was unnecessary complexity when
NoDriver 0.47+ provides direct RemoteObject API
- **Context**: Upgrading from NoDriver 0.39 to 0.47 introduced
RemoteObject, and we want to use it as intended
- **Goal**: Future-proof implementation using the standard NoDriver
patterns

## 📋 Changes Summary

- Replace custom serialization wrapper with direct RemoteObject API
usage
- Implement proper RemoteObject detection and conversion in
web_execute()
- Add comprehensive _convert_remote_object_value() method for recursive
conversion
- Handle key/value list format from deep_serialized_value.value
- Add type guards and proper type checking for RemoteObject instances
- Maintain internal API stability while using RemoteObject as intended
- Add 19 comprehensive test cases covering all conversion scenarios
- Application tested and working with real ad download, update and
publish

### ⚙️ Type of Change
- [x]  New feature (adds new functionality without breaking existing
usage)
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)

##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (pdm run
test).
- [x] I have formatted the code (pdm run format).
- [x] I have verified that linting passes (pdm run lint).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-10-20 08:52:06 +02:00
Jens
a9643d916f fix: resolve nodriver RemoteObject conversion bug (#651)
## ℹ️ Description
*Fixes the nodriver 0.47.0 RemoteObject conversion bug that was causing
KeyError and TypeError when accessing BelenConf dimensions.*

- Link to the related issue(s): Issue #650
- The bot was crashing when downloading ads because nodriver 0.47.0 was
returning JavaScript objects as lists of [key, value] pairs instead of
proper Python dictionaries, causing BelenConf dimensions to be
inaccessible.

## 📋 Changes Summary

- **Fixed nodriver RemoteObject conversion bug** in
`web_scraping_mixin.py`:
- Added detection logic for list-of-pairs format in `web_execute` method
- Enhanced `_convert_remote_object_dict` to recursively convert nested
structures
  - Now properly converts JavaScript objects to Python dictionaries
- **Bot functionality fully restored** - can now download ads with
subcategories and special attributes
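The list-of-pairs conversion described above can be sketched as a small recursive helper. This is a minimal illustration of the idea behind the fix, not the project's actual `_convert_remote_object_dict`; the pair-detection heuristic here is an assumption.

```python
from typing import Any


def convert_remote_object_value(value: Any) -> Any:
    """Recursively rebuild Python dicts from nodriver's list-of-[key, value]
    pairs representation of JavaScript objects."""
    # A JS object may arrive as [[k1, v1], [k2, v2], ...] instead of a dict.
    # Heuristic: a non-empty list whose items are all 2-element lists is
    # treated as an object (a genuine list of pairs would be misread).
    if (isinstance(value, list)
            and value
            and all(isinstance(item, list) and len(item) == 2 for item in value)):
        return {item[0]: convert_remote_object_value(item[1]) for item in value}
    if isinstance(value, list):
        return [convert_remote_object_value(item) for item in value]
    if isinstance(value, dict):
        return {k: convert_remote_object_value(v) for k, v in value.items()}
    return value
```

With a converter like this, nested structures such as the BelenConf dimensions become ordinary dict lookups again instead of raising KeyError/TypeError.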

### ⚙️ Type of Change
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

##  Checklist
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-10-19 21:39:05 +02:00
kleinanzeigen-bot-tu[bot]
19c0768255 chore: ✔ Update iniconfig 2.1.0 -> 2.3.0 (#649)
✔ Update iniconfig 2.1.0 -> 2.3.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-19 13:37:39 +02:00
Jens
0ee0b2a4dc feat: improve codecov configuration for more reliable PR coverage checks (#646)
## ℹ️ Description
*Improve codecov configuration to address erratic PR blocking behavior
and provide better developer visibility into coverage impact.*

- Addresses inconsistent flag definitions between codecov.yml and
workflow
- Resolves confusing threshold values and separate flag status checks
- Improves developer experience with comprehensive PR comments

## 📋 Changes Summary

- Replace separate flag checks with single combined project coverage
check (70% target)
- Add patch coverage check (80% target) to catch regressions in changed
code
- Add comprehensive PR comments showing project, patch, and file-level
coverage
- Configure flag carryforward for better handling of partial test runs
- Remove confusing 0.2% integration threshold and separate flag status
checks
- Validate configuration with Codecov's official endpoint

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [ ] 🐞 Bug fix (non-breaking change which fixes an issue)
- [x]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-10-18 19:44:11 +02:00
Jens
8aee313aba fix: resolve nodriver 0.47+ RemoteObject compatibility issues (#645)
## ℹ️ Description

- Link to the related issue(s): #644
- Describe the motivation and context for this change.

This PR resolves compatibility issues with nodriver 0.47+ where
page.evaluate() returns RemoteObject instances that need special
handling for proper conversion to Python objects. The update introduced
breaking changes in how JavaScript evaluation results are returned,
causing TypeError: [RemoteObject] object is not subscriptable errors.

## 📋 Changes Summary

- Fixed TypeError: [RemoteObject] object is not subscriptable in
web_request() method
- Added comprehensive RemoteObject conversion logic with
_convert_remote_object_result()
- Added _convert_remote_object_dict() for recursive nested structure
conversion
- Fixed price field concatenation issue in MODIFY mode by explicit field
clearing
- Updated web_sleep() to accept integer milliseconds instead of float
seconds
- Updated German translations for new log messages
- Fixed linting issues (E711, E712) in test assertions

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (pdm run
test).
- [x] I have formatted the code (pdm run format).
- [x] I have verified that linting passes (pdm run lint).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-10-18 19:38:51 +02:00
kleinanzeigen-bot-tu[bot]
34013cb869 chore: Update Python dependencies (#643)
✔ Update pydantic 2.12.2 -> 2.12.3 successful
  ✔ Update coverage 7.10.7 -> 7.11.0 successful
  ✔ Update ruff 0.14.0 -> 0.14.1 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-18 13:51:29 +02:00
kleinanzeigen-bot-tu[bot]
f76e3b69ba chore: Update Python dependencies (#642)
✔ Update pydantic 2.12.1 -> 2.12.2 successful
  ✔ Update pydantic-core 2.41.3 -> 2.41.4 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-15 12:49:42 +02:00
Jens
84e9d82a55 fix: increase build timeout from 10 to 20 minutes (#641) 2025-10-15 10:37:10 +02:00
Sebastian Thomschke
dadd08aedb build: upgrade to Python 3.14 (#636)
Co-authored-by: Jens <1742418+1cu@users.noreply.github.com>
2025-10-14 15:56:35 +02:00
kleinanzeigen-bot-tu[bot]
799ec447af chore: Update Python dependencies (#640)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-14 12:32:55 +02:00
Jens
7b4b7907d0 feat: cleanup test structure and remove BelenConf testing (#639) 2025-10-14 09:50:50 +02:00
kleinanzeigen-bot-tu[bot]
ff0be420e7 chore: Update Python dependencies (#637)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-13 17:31:47 +02:00
dependabot[bot]
9ed4d48315 ci(deps): bump github/codeql-action from 3 to 4 (#638)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-13 17:15:07 +02:00
Jens
36ca178574 feat: upgrade nodriver from 0.39 to 0.47 (#635)
## ℹ️ Description
Upgrade nodriver dependency from pinned version 0.39.0 to latest 0.47.0
to resolve browser startup issues and JavaScript evaluation problems
that affected versions 0.40-0.44.

- Link to the related issue(s): Resolves nodriver compatibility issues
- This upgrade addresses browser startup problems and window.BelenConf
evaluation failures that were blocking the use of newer nodriver
versions.

## 📋 Changes Summary

- Updated nodriver dependency from pinned 0.39.0 to >=0.47.0 in
pyproject.toml
- Fixed RemoteObject handling in web_execute method for nodriver 0.47
compatibility
- Added comprehensive BelenConf test fixture with real production data
structure
- Added integration test to validate window.BelenConf evaluation works
correctly
- Added German translation for new error message
- Replaced real user data with privacy-safe dummy data in test fixtures

### 🔧 Type Safety Improvements

**Added explicit `str()` conversions to resolve type inference issues:**

The comprehensive BelenConf test fixture contains deeply nested data
structures that caused pyright's type checker to infer complex
dictionary types throughout the codebase. To ensure type safety and
prevent runtime errors, I added explicit `str()` conversions in key
locations:

- **CSRF tokens**: `str(csrf_token)` - Ensures CSRF tokens are treated
as strings
- **Special attributes**: `str(special_attribute_value)` - Converts
special attribute values to strings
- **DOM attributes**: `str(special_attr_elem.attrs.id)` - Ensures
element IDs are strings
- **URL handling**: `str(current_img_url)` and `str(href_attributes)` -
Converts URLs and href attributes to strings
- **Price values**: `str(ad_cfg.price)` - Ensures price values are
strings

These conversions are defensive programming measures that ensure
backward compatibility and prevent type-related runtime errors, even if
the underlying data structures change in the future.

### ⚙️ Type of Change
- [x]  New feature (adds new functionality without breaking existing
usage)
- [ ] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)

##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-10-12 21:22:46 +02:00
Jens
a2745c03b2 fix: resolve linting errors after dependency updates (#634) 2025-10-12 14:45:59 +02:00
kleinanzeigen-bot-tu[bot]
f2f139617b chore: Update Python dependencies (#630)
✔ Update filelock 3.19.1 -> 3.20.0 successful
  ✔ Update click 8.2.1 -> 8.3.0 successful
  ✔ Update maison 2.0.0 -> 2.0.2 successful
  ✔ Update certifi 2025.8.3 -> 2025.10.5 successful
  ✔ Update platformdirs 4.4.0 -> 4.5.0 successful
  ✔ Update msgpack 1.1.1 -> 1.1.2 successful
  ✔ Update pyparsing 3.2.4 -> 3.2.5 successful
  ✔ Update pyinstaller-hooks-contrib 2025.8 -> 2025.9 successful
  ✔ Update pytest-rerunfailures 16.0.1 -> 16.1 successful
  ✔ Update rich 14.1.0 -> 14.2.0 successful
  ✔ Update ruamel-yaml-clib 0.2.12 -> 0.2.14 successful
  ✔ Update pydantic 2.11.9 -> 2.12.0 successful
  ✔ Update typing-inspection 0.4.1 -> 0.4.2 successful
  ✔ Update tomli 2.2.1 -> 2.3.0 successful
  ✔ Update mypy 1.18.1 -> 1.18.2 successful
  ✔ Update coverage 7.10.6 -> 7.10.7 successful
  ✔ Update basedpyright 1.31.4 -> 1.31.7 successful
  ✔ Update pydantic-core 2.33.2 -> 2.41.1 successful
  ✔ Update ruff 0.13.0 -> 0.14.0 successful
  ✔ Update nodejs-wheel-binaries 22.19.0 -> 22.20.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-11 17:20:54 +02:00
kleinanzeigen-bot-tu[bot]
a8a3f83925 chore: ✔ Update psutil 7.0.0 -> 7.1.0 (#629)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-18 12:42:04 +02:00
kleinanzeigen-bot-tu[bot]
d96b1d3460 chore: Update Python dependencies (#628)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-17 12:22:26 +02:00
kleinanzeigen-bot-tu[bot]
ee813bcf06 chore: ✔ Update pytest-cov 6.3.0 -> 7.0.0 (#627)
✔ Update pytest-cov 6.3.0 -> 7.0.0 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-10 15:18:46 +02:00
kleinanzeigen-bot-tu[bot]
ea012e634b chore: ✔ Update pytest-cov 6.2.1 -> 6.3.0 (#624)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-09 13:32:22 +02:00
Heavenfighter
c9d5c03ad2 feat: Allow individual shipping without setting shipping costs (#626) 2025-09-09 11:24:46 +02:00
dependabot[bot]
a913d00e23 ci(deps): bump actions/stale from 9 to 10 (#625)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 17:25:39 +02:00
kleinanzeigen-bot-tu[bot]
171996869e chore: Update Python dependencies (#621)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-06 13:15:22 +02:00
kleinanzeigen-bot-tu[bot]
4d39f956f0 chore: Update Python dependencies (#619)
✔ Update pytest-rerunfailures 15.1 -> 16.0 successful
  ✔ Update platformdirs 4.3.8 -> 4.4.0 successful
  ✔ Update typing-extensions 4.14.1 -> 4.15.0 successful
  ✔ Update coverage 7.10.4 -> 7.10.5 successful
  ✔ Update ruff 0.12.9 -> 0.12.11 successful
  ✔ Update basedpyright 1.31.2 -> 1.31.3 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-29 22:25:26 +02:00
kleinanzeigen-bot-tu[bot]
7b898a9136 chore: Update Python dependencies (#617)
✔ Update jaraco-functools 4.2.1 -> 4.3.0 successful
  ✔ Update requests 2.32.4 -> 2.32.5 successful
  ✔ Update ruamel-yaml 0.18.14 -> 0.18.15 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Jens Bergmann <1742418+1cu@users.noreply.github.com>
2025-08-20 21:20:52 +02:00
Jens Bergmann
37a36988c3 fix: improve Chrome version detection to reuse existing browsers (#615) 2025-08-20 12:51:13 +02:00
dependabot[bot]
21cdabb469 ci(deps): bump amannn/action-semantic-pull-request from 5 to 6 (#616)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-19 00:44:52 +02:00
Jens Bergmann
332926519d feat: chrome version detection clean (#607) 2025-08-18 13:19:50 +02:00
Jens Bergmann
df24a675a9 fix: resolve #612 FileNotFoundError and improve ad download architecture (#613) 2025-08-17 17:49:00 +02:00
kleinanzeigen-bot-tu[bot]
c1b273b757 chore: Update Python dependencies (#610)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-17 12:24:38 +02:00
Heavenfighter
252dd52632 fix: refactored approval message detection (#608) 2025-08-13 18:00:24 +02:00
Heavenfighter
a1fe36f925 fix: publishing without images (#609) 2025-08-13 17:59:29 +02:00
Jens Bergmann
c9d04da70d feat: browser connection improvements (#601) 2025-08-13 09:29:25 +02:00
Heavenfighter
b94661c4d5 fix: handle security message during ad update (#605) 2025-08-12 19:28:19 +02:00
kleinanzeigen-bot-tu[bot]
6f4a4e319d chore: Update Python dependencies (#603)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-12 16:43:13 +02:00
Jens Bergmann
91a40b0116 feat: enhanced folder naming (#599) 2025-08-12 10:43:26 +02:00
dependabot[bot]
1e0c7216ad ci(deps): bump actions/checkout from 4 to 5 (#602)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-12 00:55:04 +02:00
dependabot[bot]
40b0a8a252 ci(deps): bump actions/download-artifact from 4 to 5 (#600)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-11 23:36:33 +02:00
kleinanzeigen-bot-tu[bot]
7b289fc9ba chore: Update Python dependencies (#596)
✔ Update types-requests 2.32.4.20250611 -> 2.32.4.20250809 successful
  ✔ Update charset-normalizer 3.4.2 -> 3.4.3 successful
  ✔ Update coverage 7.10.2 -> 7.10.3 successful
  ✔ Update ruff 0.12.7 -> 0.12.8 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-11 22:18:48 +02:00
kleinanzeigen-bot-tu[bot]
eeaa01f420 chore: ✔ Update packageurl-python 0.17.4 -> 0.17.5 (#595) 2025-08-07 19:07:03 +02:00
Heavenfighter
6b29b9d314 fix: "No HTML element found using CSS selector" during ad download (#594) 2025-08-06 15:15:11 +02:00
kleinanzeigen-bot-tu[bot]
9556fc2a91 chore: Update Python dependencies (#593)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-06 15:02:20 +02:00
kleinanzeigen-bot-tu[bot]
937bc67225 chore: Update Python dependencies (#591)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-03 13:07:29 +02:00
kleinanzeigen-bot-tu[bot]
202c77e3cb chore: Update Python dependencies (#589) 2025-07-31 21:34:26 +02:00
kleinanzeigen-bot-tu[bot]
fc77c4fc6a chore: ✔ Update py-serializable 2.0.0 -> 2.1.0 (#588)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-21 12:35:43 +02:00
Jens Bergmann
4e9c6b073d fix: improve update check logic and UTC log clarity (#587) 2025-07-18 23:31:15 +02:00
kleinanzeigen-bot-tu[bot]
5713679d24 chore: ✔ Update ruff 0.12.3 -> 0.12.4 (#586)
✔ Update ruff 0.12.3 -> 0.12.4 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-18 22:24:16 +02:00
Heavenfighter
8070a95d26 fix: refactored setting shipping size (#584)
Co-authored-by: Jens Bergmann <1742418+1cu@users.noreply.github.com>
2025-07-17 12:25:01 +02:00
kleinanzeigen-bot-tu[bot]
4a7284a46e chore: ✔ Update basedpyright 1.30.1 -> 1.31.0 (#585)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-17 12:24:31 +02:00
kleinanzeigen-bot-tu[bot]
20a06cf026 chore: Update Python dependencies (#583) 2025-07-17 07:14:02 +02:00
Johannes Obermeier
7a3c5fc3de fix: handle missing .versand_s for service categories like … (#579)
There are categories that do not require shipping, and for these no
shipping field is rendered.

## ℹ️ Description
For example, categories 297/298 do not require shipping because they are
service categories. The current code did not handle that case and kept
searching for a path containing `.versand_s`, which does not exist in
these categories.

## 📋 Changes Summary

If the shipping_type is set to NOT_APPLICABLE in the configuration, the
shipping assignment step is skipped instead of being forced.
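The fix amounts to a simple guard before the shipping assignment step. A minimal sketch, with the enum and function names assumed for illustration:

```python
from enum import Enum


class ShippingType(Enum):
    SHIPPING = "SHIPPING"
    PICKUP = "PICKUP"
    NOT_APPLICABLE = "NOT_APPLICABLE"


def apply_shipping(shipping_type: ShippingType) -> str:
    """Assign shipping, or skip it entirely for service categories."""
    # Service categories (e.g. 297/298) render no ".versand_s" element,
    # so the assignment step must be skipped rather than forced.
    if shipping_type is ShippingType.NOT_APPLICABLE:
        return "skipped"
    return "assigned"
```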

### ⚙️ Type of Change
Select the type(s) of change(s) included in this pull request:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ]  New feature (adds new functionality without breaking existing
usage)
- [ ] 💥 Breaking change (changes that might break existing user setups,
scripts, or configurations)


##  Checklist
Before requesting a review, confirm the following:
- [x] I have reviewed my changes to ensure they meet the project's
standards.
- [x] I have tested my changes and ensured that all tests pass (`pdm run
test`).
- [x] I have formatted the code (`pdm run format`).
- [x] I have verified that linting passes (`pdm run lint`).
- [x] I have updated documentation where necessary.

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
2025-07-14 22:16:54 +02:00
Jens Bergmann
280a72cba0 feat: Refactor and expand CLI smoke tests for subcommand/config coverage (#581) 2025-07-14 12:38:23 +02:00
kleinanzeigen-bot-tu[bot]
47c68add76 chore: ✔ Update certifi 2025.7.9 -> 2025.7.14 (#582)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-14 12:36:20 +02:00
Jens Bergmann
c425193b10 feat: add create-config subcommand to generate default config (#577) 2025-07-13 13:09:40 +02:00
kleinanzeigen-bot-tu[bot]
526592047e chore: ✔ Update ruff 0.12.2 -> 0.12.3 (#578)
✔ Update ruff 0.12.2 -> 0.12.3 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-12 17:01:19 +02:00
kleinanzeigen-bot-tu[bot]
5ca9d458e7 chore: ✔ Update basedpyright 1.29.5 -> 1.30.1 (#576)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-10 12:42:41 +02:00
Jens Bergmann
1a1633e12d feat: introduce smoke test group and fail-fast test orchestration (#572) 2025-07-09 19:23:52 +02:00
kleinanzeigen-bot-tu[bot]
ed2f63f0dd chore: ✔ Update certifi 2025.6.15 -> 2025.7.9 (#575)
✔ Update certifi 2025.6.15 -> 2025.7.9 successful

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-09 12:44:01 +02:00
sebthom
3f85d9e8da chore: upgrade to Python 3.13.5 2025-07-08 21:05:58 +02:00
Heavenfighter
146d29c62c #573 refactored shipping_option (#574) 2025-07-07 19:58:30 +02:00
Sebastian Thomschke
b7882065b7 feat: detect double-click launch on Windows and abort with info message (#570)
---------

Co-authored-by: Jens Bergmann <1742418+1cu@users.noreply.github.com>
2025-07-05 13:58:24 +02:00
kleinanzeigen-bot-tu[bot]
14a917a1c7 chore: Update Python dependencies (#571)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-05 13:57:32 +02:00
Jens Bergmann
7ff005d18b fix: chores (#565) 2025-07-03 15:12:43 +02:00
github-actions[bot]
017047ba01 chore: Update Python dependencies 2025-07-03 15:11:16 +02:00
Heavenfighter
3734a73542 #567 refactored minor classes
search options
2025-07-02 17:03:33 +02:00
kleinanzeigen-bot-tu[bot]
3d937a4203 chore: Update Python dependencies (#564)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-28 00:06:18 +02:00
Jens Bergmann
5430f5cdc6 feat: update check (#561)
feat(update-check): add robust update check with interval support, state management, and CLI integration

- Implement version and interval-based update checks with configurable settings
- Add CLI command `kleinanzeigen-bot update-check` for manual checks
- Introduce state file with versioning, UTC timestamps, and migration logic
- Validate and normalize intervals (1d–4w) with fallback for invalid values
- Ensure correct handling of timezones and elapsed checks
- Improve error handling, logging, and internationalization (i18n)
- Add comprehensive test coverage for config, interval logic, migration, and CLI
- Align default config, translations, and schema with new functionality
- Improve help command UX by avoiding config/log loading for `--help`
- Update documentation and README with full feature overview
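The interval validation and normalization could work roughly like this sketch. The regex, unit table, and 7-day default are assumptions for illustration; only the 1d–4w clamp with fallback comes from the description above.

```python
import re

# Accepted units and their length in days.
_UNIT_DAYS = {"d": 1, "w": 7}
DEFAULT_INTERVAL_DAYS = 7  # assumed fallback for invalid values


def normalize_interval(interval: str) -> int:
    """Parse an interval like '3d' or '2w' into days, falling back to the
    default when the value is invalid or outside the 1d-4w range."""
    match = re.fullmatch(r"(\d+)([dw])", interval.strip().lower())
    if not match:
        return DEFAULT_INTERVAL_DAYS
    days = int(match.group(1)) * _UNIT_DAYS[match.group(2)]
    if not 1 <= days <= 28:  # 4w == 28 days
        return DEFAULT_INTERVAL_DAYS
    return days
```

Clamping to a sane range and falling back silently keeps a typo in the config from disabling (or hammering) the update check.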
2025-06-27 07:52:40 +02:00
sebthom
4d4f3b4093 ci: update bug issue template 2025-06-24 18:07:29 +02:00
kleinanzeigen-bot-tu[bot]
267a1ca44d chore: Update Python dependencies (#562)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-23 13:35:52 +02:00
Jens Bergmann
c3499b3824 feat: add version to banner (#560) 2025-06-22 21:11:13 +02:00
kleinanzeigen-bot-tu[bot]
55776f3ff6 chore: Update Python dependencies (#558)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-22 21:05:32 +02:00
kleinanzeigen-bot-tu[bot]
bee5468942 chore: ✔ Update mypy 1.16.0 -> 1.16.1 (#556)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-17 12:26:11 +02:00
Jeppy
15b3698114 fix: dimension92 may not be defined in universalAnalyticsOpts (#555) 2025-06-16 12:46:13 +02:00
Heavenfighter
f69ebef643 feat: add new update command to update published ads (#549)
Co-authored-by: Jens Bergmann <1742418+1cu@users.noreply.github.com>
2025-06-16 11:46:51 +02:00
kleinanzeigen-bot-tu[bot]
e86f4d9df4 chore: Update Python dependencies (#554)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-15 12:41:50 +02:00
kleinanzeigen-bot-tu[bot]
bd2f081a89 chore: Update Python dependencies (#552)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-13 12:30:49 +02:00
Heavenfighter
0305a10eae Refactored category and special attribute (#550) 2025-06-12 14:08:06 +02:00
kleinanzeigen-bot-tu[bot]
86140c77f8 chore: Update Python dependencies (#551)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-12 14:07:43 +02:00
Heavenfighter
0f1cf71960 #547 refactored setting condition (#548) 2025-06-11 11:29:38 +02:00
Heavenfighter
4d48427234 fix: detect payment form and wait for user input (#520)
Co-authored-by: Jens Bergmann <1742418+1cu@users.noreply.github.com>
2025-06-10 15:51:59 +02:00
Heavenfighter
a5603e742f #545 refactored select city from zip (#546) 2025-06-10 14:47:02 +02:00
Jens Bergmann
92ac17b430 fix: improve login flow tests
Login Flow Tests:
- Fixed test_login_flow_handles_captcha to properly handle both login
attempts
- Added individual test functions for each component of the login flow:
  * test_check_and_wait_for_captcha: Tests captcha detection and user
interaction
  * test_fill_login_data_and_send: Tests login form filling and
submission
  * test_handle_after_login_logic: Tests post-login handling (device
verification, GDPR)
- Improved test assertions to match actual behavior of the login process
- Added proper async mocking to prevent test stalling

Test Init:
- Fixed test_extract_pricing_info to properly handle all price formats
- Improved test coverage for price extraction edge cases
- Ensured tests accurately reflect the actual behavior of the price
extraction logic
2025-06-09 21:39:56 +02:00
Heavenfighter
8ac57932ba fix: login does not work anymore #539
Refactored login input element IDs.
Refactored captcha handling into one function.
2025-06-09 21:39:56 +02:00
sebthom
c6e8175670 fix(deps): upgrade requests package 2025-06-09 21:39:56 +02:00
sebthom
ebfdbc4313 fix: shipping options are not applied when shipping_costs set to 0 #541 2025-06-09 20:58:04 +02:00
sebthom
3978d85cb4 fix: ruff PLC0207 missing-maxsplit-arg 2025-06-09 20:58:04 +02:00
kleinanzeigen-bot-tu[bot]
67805e633f chore: Update Python dependencies (#542)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-09 13:35:22 +02:00
kleinanzeigen-bot-tu[bot]
2d1e655535 chore: Update Python dependencies (#538)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-07 12:21:19 +02:00
kleinanzeigen-bot-tu[bot]
3d01119370 chore: ✔ Update ruff 0.11.12 -> 0.11.13 (#537)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-06 12:21:31 +02:00
kleinanzeigen-bot-tu[bot]
41591f70d1 chore: Update Python dependencies (#535)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-05 15:24:30 +02:00
sebthom
85bd5c2f2a fix: update config schema 2025-06-05 13:07:07 +02:00
Heavenfighter
770429f824 #533 Loading images from default config (#536) 2025-06-05 12:31:05 +02:00
sebthom
ea8af3795b fix: creating GH releases fails 2025-05-30 17:50:53 +02:00
sebthom
37c0eba7c7 fix: publishing docker image to ghcr.io fails 2025-05-30 17:22:53 +02:00
kleinanzeigen-bot-tu[bot]
5fc98a143a chore: Update Python dependencies (#534)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-30 16:56:34 +02:00
Heavenfighter
192b42a833 #525 Refactored gdpr handling (#532) 2025-05-28 14:43:23 +02:00
Heavenfighter
fbaeb80585 fix: clearing password input while logging in (#531)
* #530 Sending empty string to password input

* #530 added comment for clarification
2025-05-28 11:40:34 +02:00
kleinanzeigen-bot-tu[bot]
08f22d2257 chore: ✔ Update setuptools 80.8.0 -> 80.9.0 (#529)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-27 12:30:57 +02:00
sebthom
15461bb1a5 fix: release build not running 2025-05-26 20:50:54 +02:00
kleinanzeigen-bot-tu[bot]
bf876b15be chore: ✔ Update pytest-asyncio 0.26.0 -> 1.0.0 (#526)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-26 19:56:16 +02:00
Heavenfighter
347c67a388 fixes #512 (#519)
Refactored images extraction. Now directly using galleryimage-elements instead of carousel.
2025-05-25 22:28:20 +02:00
kleinanzeigen-bot-tu[bot]
b17b19db24 chore: Update Python dependencies (#518)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-25 16:22:04 +02:00
Heavenfighter
e94a1dd8af fixes #522 (#523)
Refactored XPATH expression for deselecting unwanted shipping options.
2025-05-25 16:21:09 +02:00
kleinanzeigen-bot-tu[bot]
337516cf9b chore: Update Python dependencies (#517)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-22 12:24:48 +02:00
kleinanzeigen-bot-tu[bot]
8ae9d1182e chore: Update Python dependencies (#516)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-21 12:50:47 +02:00
sebthom
d992fed9e9 ci: only publish coverage reports after all matrix builds are complete 2025-05-18 23:23:23 +02:00
sebthom
c794102452 chore: update workflow config 2025-05-18 21:06:08 +02:00
Jens Bergmann
50656ad7e2 feat: Improve test coverage (#515)
* test: implement comprehensive test coverage improvements

This commit improves test coverage across multiple modules, adding unit tests
for core functionality.

Key improvements:

1. WebScrapingMixin:
   - Add comprehensive async error handling tests
   - Add session management tests (browser crash recovery, session expiration)
   - Add element interaction tests (custom wait conditions, timeouts)
   - Add browser configuration tests (extensions, preferences)
   - Add robust awaitable mocking infrastructure
   - Rename integration test file to avoid naming conflicts

2. Error Handlers:
   - Add tests for error message formatting
   - Add tests for error recovery scenarios
   - Add tests for error logging functionality

3. Network Utilities:
   - Add tests for port checking functionality
   - Add tests for network error handling
   - Add tests for connection management

4. Pydantic Models:
   - Add tests for validation cases
   - Add tests for error handling
   - Add tests for complex validation scenarios

Technical details:
- Use TrulyAwaitableMockPage for proper async testing
- Add comprehensive mocking for browser and page objects
- Add proper cleanup in session management tests
- Add browser-specific configuration tests (Chrome/Edge)
- Add proper type hints and docstrings

Files changed:
- Renamed: tests/integration/test_web_scraping_mixin.py → tests/integration/test_web_scraping_mixin_integration.py
- Added: tests/unit/test_error_handlers.py
- Added: tests/unit/test_net.py
- Added: tests/unit/test_pydantics.py
- Added: tests/unit/test_web_scraping_mixin.py

* test: enhance test coverage with additional edge cases and scenarios

This commit extends the test coverage improvements with additional test cases
and edge case handling, focusing on browser configuration, error handling, and
file utilities.

Key improvements:

1. WebScrapingMixin:
   - Add comprehensive browser binary location detection tests
   - Add cross-platform browser path detection (Linux, macOS, Windows)
   - Add browser profile configuration tests
   - Add session state persistence tests
   - Add external process termination handling
   - Add session creation error cleanup tests
   - Improve browser argument configuration tests
   - Add extension loading validation tests

2. Error Handlers:
   - Add debug mode error handling tests
   - Add specific error type tests (AttributeError, ImportError, NameError, TypeError)
   - Improve error message formatting tests
   - Add traceback inclusion verification

3. Pydantic Models:
   - Add comprehensive validation error message tests
   - Add tests for various error codes and contexts
   - Add tests for pluralization in error messages
   - Add tests for empty error list handling
   - Add tests for context handling in validation errors

4. File Utilities:
   - Add comprehensive path resolution tests
   - Add tests for file and directory reference handling
   - Add tests for special path cases
   - Add tests for nonexistent path handling
   - Add tests for absolute and relative path conversion

Technical details:
- Add proper type casting for test fixtures
- Improve test isolation and cleanup
- Add platform-specific browser path detection
- Add proper error context handling
- Add comprehensive error message formatting tests
- Add proper cleanup in session management tests
- Add browser-specific configuration tests
- Add proper path normalization and resolution tests

* fix(test): handle Linux browser paths in web_scraping_mixin test

Update mock_exists to properly detect Linux browser binaries in test_browser_profile_configuration, fixing the "Installed browser could not be detected" error.

* fix(test): handle Windows browser paths in web_scraping_mixin test

Add Windows browser paths to mock_exists function to properly detect browser binaries on Windows platform, fixing the "Specified browser binary does not exist" error.
2025-05-18 19:02:59 +02:00
sebthom
fb00f11539 ci: update codecov config 2025-05-16 15:15:54 +02:00
kleinanzeigen-bot-tu[bot]
27282f2853 chore: Update Python dependencies (#514)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-16 13:35:47 +02:00
sebthom
23910ffbf5 ci: publish code coverage reports 2025-05-15 22:13:38 +02:00
sebthom
83c0d6adf0 refact: move temp files to /.temp/ 2025-05-15 19:52:41 +02:00
sebthom
cc25164b43 fix: replace usage of legacy pydantic validators 2025-05-15 19:12:48 +02:00
kleinanzeigen-bot-tu[bot]
3b381847ca chore: Update Python dependencies (#511)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-15 12:20:23 +02:00
sebthom
85a5cf5224 feat: improve content_hash calculation 2025-05-15 12:07:49 +02:00
sebthom
f1cd597dd8 fix: reduce distribution size 2025-05-15 12:07:49 +02:00
sebthom
6ede14596d feat: add type safe Ad model 2025-05-15 12:07:49 +02:00
sebthom
1369da1c34 feat: add type safe Config model 2025-05-15 12:07:49 +02:00
sebthom
e7a3d46d25 fix: display file paths under current working dir as relative in logs 2025-05-15 00:27:10 +02:00
sebthom
e811cd339b ci: improve issue template 2025-05-15 00:27:10 +02:00
sebthom
a863f3c63a ci: improve issue template 2025-05-14 12:35:35 +02:00
Heavenfighter
0faa022e4d fix: Unable to download single ad (#509) 2025-05-14 11:24:16 +02:00
sebthom
8e2385c078 fix: TimeoutError: Unable to close shipping dialog! #505 2025-05-13 21:06:42 +02:00
sebthom
a03b368ccd fix: active: false in ad config is ignored #502 2025-05-13 20:59:15 +02:00
sebthom
9a3c0190ba chore: improve dicts module 2025-05-13 20:42:42 +02:00
sebthom
1f9895850f fix: add missing translations and fix translation loading/testing 2025-05-13 19:27:52 +02:00
sebthom
21d7cc557d feat: extend utils.save_dict 2025-05-13 13:45:58 +02:00
sebthom
58f6ae960f refact: simplify XPATH expressions 2025-05-12 18:28:28 +02:00
sebthom
50c0323921 fix: random RuntimeError: dictionary changed size during iteration 2025-05-12 17:50:08 +02:00
sebthom
ee4146f57c fix: auto-restart when captcha was encountered 2025-05-12 17:20:51 +02:00
airwave1981
65738926ae fix: TypeError in CustomFormatter.format 2025-05-12 17:11:47 +02:00
sebthom
f2e6f0b20b chore: update pyproject.toml 2025-05-12 14:08:50 +02:00
DreckSoft
ed83052fa4 fix: Unable to close shipping dialog (#501)
Co-authored-by: Sebastian Thomschke <sebthom@users.noreply.github.com>
2025-05-11 20:29:10 +02:00
sebthom
314285583e ci: add pip-audit check 2025-05-11 20:14:38 +02:00
kleinanzeigen-bot-tu[bot]
aa00d734ea chore: Update Python dependencies (#500)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-11 20:00:22 +02:00
kleinanzeigen-bot-tu[bot]
8584311305 chore: ✔ Update pyinstaller-hooks-contrib 2025.3 -> 2025.4 (#499)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-04 12:37:03 +02:00
kleinanzeigen-bot-tu[bot]
03dd3ebb10 chore: ✔ Update setuptools 80.1.0 -> 80.3.0 (#498)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-03 12:17:29 +02:00
kleinanzeigen-bot-tu[bot]
138d365713 chore: ✔ Update ruff 0.11.7 -> 0.11.8 (#497)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-02 12:22:00 +02:00
kleinanzeigen-bot-tu[bot]
6c2c6a0064 chore: ✔ Update setuptools 80.0.1 -> 80.1.0 (#496)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-02 12:07:42 +02:00
Benedikt
8b2d61b1d4 fix: improve login detection with fallback element (#493)
- Add fallback check for user-email element when mr-medium is not found
- Improve login detection reliability
- Add test case for alternative login element
2025-04-30 17:50:58 +02:00
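The fallback check described in that commit could look roughly like this. The two element names come from the commit message itself; `detect_login` and the lookup callable are hypothetical stand-ins for the bot's page-query logic, not its real API.

```python
def detect_login(find_element) -> bool:
    """Return True if either login indicator element is present."""
    # primary indicator first, then the fallback element
    for selector in ("mr-medium", "user-email"):
        if find_element(selector) is not None:
            return True
    return False

# stand-in "page" where only the fallback element is present
page_elements = {"user-email": object()}
assert detect_login(page_elements.get)
```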
kleinanzeigen-bot-tu[bot]
7852985de9 chore: Update Python dependencies (#492)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-04-30 17:50:42 +02:00
Benedikt
9bcc669c48 feat: add support for multiple matching shipping options (#483) 2025-04-29 21:02:09 +02:00
sebthom
3e8072973a build: use yamlfix for yaml formatting 2025-04-28 13:17:23 +02:00
sebthom
bda0acf943 refact: enable ruff preview rules 2025-04-28 13:17:23 +02:00
sebthom
f98251ade3 fix: improve ad description length validation 2025-04-28 13:17:23 +02:00
sebthom
ef923a8337 refact: apply consistent formatting 2025-04-28 12:55:28 +02:00
sebthom
fe33a0e461 refact: replace pyright with basedpyright 2025-04-28 12:52:18 +02:00
sebthom
376ec76226 refact: use ruff instead of autopep8,bandit,pylint for linting 2025-04-28 12:51:51 +02:00
sebthom
f0b84ab335 build: simplify pytest config 2025-04-28 12:43:53 +02:00
sebthom
634cc3d9ee build: upgrade to Python 3.13.3 2025-04-28 12:43:42 +02:00
sebthom
52e1682dba fix: avoid "[PYI-28040:ERROR]" log message when run via pyinstaller 2025-04-27 14:34:56 +02:00
sebthom
7b0774874e fix: harden extract_ad_id_from_ad_url 2025-04-27 14:23:56 +02:00
DreckSoft
23929a62cc fix: logon detection and duplicate suffix in ad description (#488)
Co-authored-by: Sebastian Thomschke <sebthom@users.noreply.github.com>
2025-04-27 14:21:40 +02:00
github-actions[bot]
3909218531 chore: ✔ Update certifi 2025.1.31 -> 2025.4.26 2025-04-26 14:41:39 +02:00
Airwave1981
d87ae6e740 feat: allow auto-restart on captcha (#481)
Co-authored-by: sebthom <sebthom@users.noreply.github.com>
2025-04-26 14:40:47 +02:00
sebthom
4891c142a9 feat: add misc.format_timedelta/parse_duration 2025-04-25 21:06:25 +02:00
github-actions[bot]
e417750548 chore: Update Python dependencies 2025-04-25 21:01:11 +02:00
marvinkcode
79af6ba861 fix: Correct pagination selectors and logic for issue #477 (#479) 2025-04-21 20:26:02 +02:00
Heavenfighter
c144801d2e fixes #474
Now using ID to identify checkbox for custom shipping
2025-04-21 20:24:23 +02:00
github-actions[bot]
a03328e308 chore: Update Python dependencies 2025-04-18 13:44:48 +02:00
Heavenfighter
20f3f87864 fixes #475 CSS selector 'button' not found
Element button was changed to em.
2025-04-18 13:44:00 +02:00
sebthom
27c7bb56ca fix: downgrading nodriver to 0.39 to address failing browser launch #470 2025-04-07 22:40:41 +02:00
sebthom
79701e2833 feat: debug log web_execute 2025-04-07 22:40:41 +02:00
sebthom
21835d9d86 test: don't require translations for debug messages 2025-04-07 22:40:41 +02:00
sebthom
aeaf77e5d4 refact: use named parameters 2025-04-07 21:57:51 +02:00
github-actions[bot]
b66c9d37bf chore: Update Python dependencies 2025-04-07 20:43:19 +02:00
github-actions[bot]
b07633e661 chore: Update Python dependencies 2025-03-26 11:31:13 +01:00
github-actions[bot]
fd58f3fa45 chore: Update Python dependencies 2025-03-20 11:20:16 +01:00
github-actions[bot]
13965b8607 chore: ✔ Update setuptools 76.0.0 -> 76.1.0 2025-03-18 12:08:55 +01:00
github-actions[bot]
4a9c2ff5a8 chore: ✔ Update coverage 7.6.12 -> 7.7.0 2025-03-17 11:48:21 +01:00
Heavenfighter
33f58811cd Fixes setting shipping costs to zero.
Empty shipping costs lead to default shipping.
2025-03-16 21:28:44 +01:00
Heavenfighter
57c89a6f64 Adding condition "Defekt" (#461) 2025-03-15 18:25:26 +01:00
Heavenfighter
9183909188 fix: setting shipping options properly (#457) 2025-03-14 12:34:39 +01:00
Heavenfighter
7742196043 fix: set custom shipping due css update #448 (#450) 2025-03-13 12:13:23 +01:00
Jens Bergmann
6bd5ba98d2 fix: Clean up obsolete translations in German language file
- Remove unused translation entries from translations.de.yaml
- Improve translation test to better detect obsolete entries
- Add KNOWN_NEEDED_MODULES for special cases
- Add helper function _message_exists_in_code for better translation verification
- Improve error messages to show both original and translated text
- Fix import sorting in test file

This commit improves the maintainability of the translation system by
removing unused entries and enhancing the verification process.
2025-03-13 12:05:46 +01:00
github-actions[bot]
a6d2d2dc5a chore: ✔ Update nodriver 0.40 -> 0.41 2025-03-13 11:58:51 +01:00
Jens Bergmann
1b004a2a3e Revert "feat: Introduce isort and Python-based code quality tools (#446)"
This reverts commit cfe2b900c7.

The custom scripts introduced to auto-format imports (to enforce project guidelines) caused issues. Specifically, isort’s hardcoded behavior for expanded standard library imports with “as” imports led to unintended formatting. This commit reverts those changes and removes the custom scripts, restoring the project to its previous, stable state.
2025-03-13 11:55:31 +01:00
github-actions[bot]
21f118ba8e chore: Update Python dependencies 2025-03-09 23:14:52 -06:00
Jens Bergmann
cfe2b900c7 feat: Introduce isort and Python-based code quality tools (#446) 2025-03-10 06:09:49 +01:00
kleinanzeigen-bot-tu[bot]
4243ba698a chore: ✔ Update nodriver 0.39 -> 0.40 (#443)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-03-01 12:02:17 -05:00
Jens Bergmann
772326003f fix: Separate 'changed' and 'due' ad selectors (#442)
This commit implements a new 'changed' selector for the --ads option that
publishes only ads that have been modified since their last publication.
The 'due' selector now only republishes ads based on the time interval,
without considering content changes.

The implementation allows combining selectors with commas (e.g., --ads=changed,due)
to publish both changed and due ads. Documentation and translations have been
updated accordingly.

Fixes #411
2025-02-28 14:53:53 -05:00
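The comma-combined selector behavior from the commit above can be sketched as below. Only `changed` and `due` are taken from the commit message; the helper name and the validation behavior are illustrative assumptions.

```python
KNOWN_SELECTORS = {"changed", "due"}  # illustrative subset of --ads values

def parse_ads_selector(value: str) -> set[str]:
    """Split a comma-combined --ads value into individual selectors."""
    selectors = {part.strip() for part in value.split(",") if part.strip()}
    unknown = selectors - KNOWN_SELECTORS
    if unknown:
        raise ValueError(f"Unknown ad selector(s): {sorted(unknown)}")
    return selectors

# --ads=changed,due publishes both changed and due ads
assert parse_ads_selector("changed,due") == {"changed", "due"}
```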
github-actions[bot]
6b3da5bc0a chore: Update Python dependencies 2025-02-28 11:21:12 -05:00
NME
7b9412677e fix: Update css class selectors fixing #440 (#441)
* fixes #440 css update
* fixed class selector
* added missing translation
---------

Co-authored-by: Jens Bergmann <1742418+1cu@users.noreply.github.com>
2025-02-28 11:16:49 -05:00
github-actions[bot]
b99be81158 chore: Update Python dependencies 2025-02-20 04:53:31 +01:00
Jens Bergmann
c7f7b832b2 fix: Make description field optional in ad_defaults
The description field in the main configuration (ad_defaults) is now optional.
Previously, the bot would fail if no description or affixes were provided in
the main configuration. This change addresses issue #435.

Changes:
- Add fallback to empty string ("") when all description prefix/suffix sources
  are None in __get_description_with_affixes method
- Add comprehensive test suite for description handling in test_init.py
- Fix coverage path in pyproject.toml from 'kleinanzeigen_bot' to
  'src/kleinanzeigen_bot'

New tests cover:
- Description handling without main config description
- New format affixes in configuration
- Mixed old/new format affixes
- Ad-level affix precedence
- None value handling in affixes
- Email address @ symbol replacement

This change maintains backward compatibility while making the description
field optional in the main configuration, improving flexibility for users.
2025-02-18 21:39:53 +01:00
kleinanzeigen-bot-tu[bot]
a8f6817c5c chore: update psutil 6.1.1 -> 7.0.0 (#430)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-02-16 21:47:13 +01:00
Jens Bergmann
610615367c fix: consolidate description text processing (#432)
Consolidate description text processing into __get_description_with_affixes method:
- Move @ -> (at) replacement into the method
- Remove duplicate prefix/suffix handling code
- Ensure consistent description text processing in one place

This fixes #432 by ensuring consistent handling of description affixes
and text transformations.
2025-02-15 19:58:09 +01:00
github-actions[bot]
34b2bc6550 chore: ✔ Update pyright 1.1.393 -> 1.1.394 2025-02-13 17:26:15 +01:00
Heavenfighter
543d46631c fix: Setting shipping options fails for commercial accounts. Fixes #394 (#424)
Co-authored-by: Jens Bergmann <1742418+1cu@users.noreply.github.com>
2025-02-13 17:13:32 +01:00
Jens Bergmann
e43ac4f1f9 feat: extend translations and add translation unit test (#427) 2025-02-12 22:25:05 +01:00
sebthom
c61c14709f ci: add PR title validation 2025-02-12 22:16:16 +01:00
github-actions[bot]
8270554507 chore: ✔ Update coverage 7.6.11 -> 7.6.12 2025-02-12 21:45:54 +01:00
sebthom
9f19cd85bd docs: fix build status badge 2025-02-12 21:40:45 +01:00
Jens Bergmann
4051620aed enh: allow per-ad overriding of global description affixes (#416) 2025-02-11 23:39:26 +01:00
Heavenfighter
a67112d936 fix: handle delayed ad publication #414 (#422) 2025-02-11 20:43:33 +01:00
Heavenfighter
820ae8966e fix: download all ads not working anymore #420 (#421)
renamed h2 to h3
2025-02-11 12:33:32 -06:00
sebthom
f3beb795b4 refact: minor cleanup 2025-02-10 22:06:03 +01:00
sebthom
5ade82b54d chore: update pyproject config 2025-02-10 21:16:38 +01:00
sebthom
367ef07798 refact: improve logger handling 2025-02-10 20:34:58 +01:00
sebthom
ec7ffedcd6 ci: add build timeout to all jobs 2025-02-10 18:51:54 +01:00
sebthom
2402ba2572 refact: reorganize utility modules 2025-02-10 06:23:17 +01:00
sebthom
e8d342dc68 docs: document ad config defaults 2025-02-10 03:23:33 +01:00
sebthom
7169975d2a fix: logging file handler not closed on bot shutdown. Fixes #405 2025-02-09 04:23:24 +01:00
github-actions[bot]
b4658407a3 chore: Update Python dependencies 2025-02-09 03:45:17 +01:00
Jens Bergmann
affde0debf test: Enhance test coverage for KleinanzeigenBot initialization and core functionality (#408) 2025-02-09 03:33:01 +01:00
Jens Bergmann
dd5f2ba5e4 fix: Ensure Consistent Content Hash Calculation (#415)
This commit addresses an issue where the content hash was being calculated on the current configuration (`ad_cfg`) instead of the original configuration (`ad_cfg_orig`). This could lead to inconsistent hash values, especially when the configuration is updated during the execution of the program.

The fix involves calculating the content hash on the original configuration (`ad_cfg_orig`) in both the `__check_ad_republication` and `publish_ad` methods. This ensures that the hash value is consistent and matches what was stored.

The relevant code changes are as follows:

- In the `__check_ad_republication` method, the content hash is now calculated on `ad_cfg_orig` instead of `ad_cfg`.
- In the `publish_ad` method, the content hash is also calculated on `ad_cfg_orig` to ensure consistency.

These changes should improve the reliability of the content hash comparison and the overall stability of the application.
2025-02-09 03:14:19 +01:00
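For illustration only, one common way to derive a stable content hash from a nested ad configuration, matching the commit's goal of consistent hashes, is canonical JSON plus SHA-256; the bot's actual `calculate_content_hash` may differ in detail.

```python
import hashlib
import json

def content_hash(ad_cfg: dict) -> str:
    """Hash a canonical (key-sorted) JSON rendering of the config."""
    canonical = json.dumps(ad_cfg, sort_keys=True, ensure_ascii=False, default=str)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

original = {"title": "Bike", "price": 120}
mutated = dict(original, price=99)  # runtime mutation would change the hash
assert content_hash(original) != content_hash(mutated)
```

This also illustrates why hashing `ad_cfg_orig` instead of the mutated `ad_cfg` matters: only the untouched input yields a hash comparable with the stored value.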
sebthom
042525eb91 build: upgrade to Python 3.13.2 2025-02-08 22:18:16 +01:00
DreckSoft
b12118361d feat: display actual num of chars of description when too long (#403) 2025-02-08 04:03:54 +01:00
github-actions[bot]
20fb47a6e2 chore: Update Python dependencies 2025-02-05 16:39:22 -06:00
1cu
f4f00b9563 test: Add comprehensive test suite for extract.py (#400) 2025-02-05 23:35:45 +01:00
sebthom
08197eabae docs: improve disclaimer 2025-02-03 22:06:30 +01:00
sebthom
9cd4fdd693 build: use Python 3.13.1 when building with act 2025-02-03 21:12:45 +01:00
github-actions[bot]
67fd0e2724 chore: Update Python dependencies 2025-02-03 17:06:06 +01:00
1cu
76b0901166 test: reorganized unit/integration tests (#398) 2025-02-03 17:05:14 +01:00
Jens Bergmann
100f2fd8c5 style: ensure all comments and strings are in English - Update test descriptions and comments 2025-02-03 14:15:47 +01:00
Jens Bergmann
be8eee6aa0 fix: Handle None values in calculate_content_hash
- Add test case to reproduce TypeError with None values
- Fix handling of None values in special_attributes, shipping_options and images
- Ensure consistent empty value handling (empty string instead of 'None')
- Fixes #395
2025-02-03 14:15:47 +01:00
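The None handling that commit describes (map `None` to an empty string, never the text `'None'`, before the hash input is built) could be sketched like this; the recursive traversal is an assumption for illustration.

```python
def normalize(value):
    """Replace None with '' recursively in dicts and lists."""
    if value is None:
        return ""
    if isinstance(value, dict):
        return {key: normalize(val) for key, val in sorted(value.items())}
    if isinstance(value, list):
        return [normalize(item) for item in value]
    return value

assert normalize({"special_attributes": None, "images": [None, "a.jpg"]}) == \
    {"special_attributes": "", "images": ["", "a.jpg"]}
```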
github-actions[bot]
f51dab0c3f chore: Update Python dependencies 2025-01-29 20:57:08 +01:00
1cu
fa0d43efa8 fix: Make doctests locale-independent (#390) (#391) 2025-01-27 09:22:15 +01:00
1cu
f01109c956 feat: add hash-based ad change detection (#343) (#388)
Co-authored-by: sebthom <sebthom@users.noreply.github.com>
2025-01-26 23:37:33 +01:00
sebthom
3d27755207 docs: update README 2025-01-26 20:05:28 +01:00
kleinanzeigen-bot-tu[bot]
ed7fd21272 chore: Update deprecated 1.2.15 -> 1.2.17 (#389)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-01-26 19:50:07 +01:00
sebthom
236740fc2b chore: update pyproject.toml 2025-01-24 22:13:59 +01:00
sebthom
d2eb3adc77 chore: update PR template 2025-01-24 21:50:06 +01:00
Heavenfighter
66634ce636 fix: fixed shipping button selector #385 (#387) 2025-01-20 21:40:28 +01:00
sebthom
7d9b857a46 docs: Update doc 2025-01-20 21:40:28 +01:00
Jens Bergmann
2f93e0dfda fix: correct city selection when multiple cities are available for a ZIP code
When multiple cities are available for a ZIP code, the bot now correctly selects
the city specified in the YAML file's location field instead of always choosing
the first option.

The change:
- Adds logic to select the correct city from dropdown based on location field
- Adds a small delay after ZIP code input to allow dropdown to populate
- Uses proper WebScrapingMixin method to read dropdown options
2025-01-20 12:22:16 +01:00
github-actions[bot]
46e585b96d chore: Update Python dependencies 2025-01-20 12:21:57 +01:00
sebthom
d4d5514cc0 fix: better commit message for dependency updates 2025-01-14 14:18:50 +01:00
kleinanzeigen-bot-tu[bot]
49ac8baf5c chore: Update bandit 1.8.0 -> 1.8.2 (#381)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-01-14 14:04:14 +01:00
kleinanzeigen-bot-tu[bot]
70aef618a0 chore: Update wrapt 1.17.0 -> 1.17.1 (#379)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-01-11 11:17:04 +01:00
sebthom
677c48628d fix: remove temporary workaround for #368 2025-01-10 16:21:44 +01:00
Heavenfighter
ca876e628b fix shipping options when downloading. Fixes #375 (#376) 2025-01-10 16:05:11 +01:00
github-actions[bot]
640b748b1d chore: Update Python dependencies 2025-01-10 12:30:24 +01:00
sebthom
6820a946c9 fix: escape metachars in ID and Names for selector queries #368 2025-01-09 21:14:13 +01:00
Heavenfighter
33a43e3ff6 fix: setting shipping options regression #367 (#374)
Button with given label occurs too often. Path must be corrected.
2025-01-09 20:30:24 +01:00
Heavenfighter
f9eb6185c7 fix: failed to set special attributes #334 (#370) 2025-01-09 17:01:48 +01:00
Heavenfighter
e590a32aa2 fix: re-publishing without images #371 (#372)
added detection of image-question
2025-01-09 17:00:51 +01:00
kleinanzeigen-bot-tu[bot]
7668026eda chore: Update setuptools 75.7.0 -> 75.8.0 (#369)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-01-09 17:00:15 +01:00
Heavenfighter
5829df66e4 fix setting shipping options #367 2025-01-09 16:59:38 +01:00
Heavenfighter
f3a7cf0150 feat: don't republish reserved ads. fixes #365 (#366) 2025-01-08 18:21:34 +01:00
github-actions[bot]
cd955a5506 chore: Update Python dependencies 2025-01-08 18:16:36 +01:00
sebthom
be78ec9736 fix: don't create a new release on every cron scheduled run 2025-01-01 16:52:07 +01:00
sebthom
2705dc7e43 refact: use colorama.just_fix_windows_console instead of colorama.init 2024-12-28 19:25:53 +01:00
sebthom
679d08502c chore: regenerate pdm.lock 2024-12-28 19:25:26 +01:00
sebthom
aec051826a chore: update project meta 2024-12-28 18:03:09 +01:00
sebthom
05f6ceb5b9 don't fail python dep update job if no updates were found 2024-12-28 17:50:25 +01:00
sebthom
e077f8d86d feat: improve colorized logging 2024-12-27 15:35:58 +01:00
sebthom
f90f848cba fix: improve online help 2024-12-27 15:33:45 +01:00
sebthom
47614887e7 fix: improve logging 2024-12-27 14:19:20 +01:00
sebthom
9841f6f48f ci: fix release build 2024-12-27 13:49:38 +01:00
sebthom
1e782beabc fix: update help text 2024-12-27 13:49:05 +01:00
sebthom
9d54a949e7 feat: add multi-language support 2024-12-27 13:04:30 +01:00
sebthom
0aa1975325 chore: Update Python dependencies 2024-12-27 12:54:36 +01:00
sebthom
7b579900c3 ci: update workflow config 2024-12-27 12:54:21 +01:00
sebthom
cde3250ab8 ci: update workflow config 2024-12-22 22:20:23 +01:00
sebthom
a738f0748d docs: add related OSS projects section 2024-12-22 22:19:44 +01:00
sebthom
8acaf7b25f chore: Update Python dependencies 2024-12-22 20:49:29 +01:00
provinzio
09f4d0f16f FIX login check has to be done case-insensitively 2024-12-13 18:40:56 +01:00
github-actions[bot]
f1ae6ff8de chore: Update Python dependencies 2024-12-12 22:34:18 +01:00
sebthom
97ed41d96e ci: update issue templates 2024-12-12 22:28:25 +01:00
sebthom
ab953111d4 ci: fix linux builds 2024-12-12 22:09:57 +01:00
Heavenfighter
9a826452f9 fix: No HTML element found using CSS selector (#354)
Fixes #351
Fixes #353
2024-12-08 18:46:29 +01:00
kleinanzeigen-bot-tu[bot]
e89e311043 chore: Update Python dependencies (#350)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-12-08 18:37:17 +01:00
sebthom
26f05b5506 fix: category value incomplete when downloading ads 2024-11-25 00:03:48 +01:00
sebthom
a83ee4883e refact: minor cleanup 2024-11-25 00:03:18 +01:00
sebthom
e8dcb78951 fix: using shipping type PICKUP fails #346 2024-11-24 21:12:43 +01:00
sebthom
f7ef3c2b2e fix: don't auto delete ads directly after publishing 2024-11-22 23:54:53 +01:00
sebthom
b259977198 feat: if a category is not found try to lookup fallback category 2024-11-22 14:27:32 +01:00
sebthom
50ac195229 feat: extend categories.yaml 2024-11-22 13:43:15 +01:00
sebthom
a876add5a7 feat: by default delete old ads after republishing #338 2024-11-22 12:41:34 +01:00
sebthom
f9fdf4d158 refact: update categories 2024-11-22 12:40:27 +01:00
sebthom
a419c48805 refact: remove redundant comments 2024-11-22 12:30:50 +01:00
sebthom
ee09bb40a2 refact: add comment 2024-11-22 00:12:50 +01:00
sebthom
01d78bb000 feat: support shipping for WANTED ads #349 2024-11-21 23:53:26 +01:00
sebthom
6a315c97ce feat: remove default prefix/suffix text from downloaded ads 2024-11-21 23:28:13 +01:00
sebthom
5086721082 feat: use YAML | block style for multi-line strings on ad download 2024-11-21 23:11:36 +01:00
sebthom
735e564c76 fix: save location #296 2024-11-21 22:53:49 +01:00
sebthom
86c3aeea85 fix: downloaded images have wrong file extension #348 2024-11-21 22:53:35 +01:00
sebthom
fe13131dee chore: update deps 2024-11-21 22:05:56 +01:00
sebthom
f6748de2b1 fix: add missing await keyword 2024-11-21 22:04:32 +01:00
sebthom
6e76b0ff4c build: rename "scan" script to "audit" 2024-11-21 22:04:15 +01:00
sebthom
1b326c1ce8 chore: upgrade to Python 3.13 and update deps 2024-11-15 13:31:29 +01:00
Julian Hackinger
4a3fb230f5 fix: double login required (#344) 2024-11-15 13:05:08 +01:00
sebthom
dc951d54e4 ci: remove deprecated parameter 2024-10-24 20:07:11 +02:00
github-actions[bot]
6518a1f890 chore: Update Python dependencies 2024-10-24 19:38:35 +02:00
sebthom
9b320c1d3c chore: update issue config 2024-10-08 21:41:29 +02:00
sebthom
ba6a40e373 chore: upgrade to Python 3.12.6 2024-09-16 12:10:05 +02:00
sebthom
6c5c1940e1 chore: Update Python dependencies 2024-09-16 11:56:34 +02:00
dependabot[bot]
7f9046a26d ci(deps): bump peter-evans/create-pull-request from 6 to 7
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 6 to 7.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](https://github.com/peter-evans/create-pull-request/compare/v6...v7)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-09 20:08:57 +02:00
Saghalt
b9e1f8c327 fix: ValueError when downloading ads without special_attributes (#330) 2024-09-02 20:55:21 +02:00
Saghalt
315400534b Disable search engine popup with chrome 2024-08-22 13:11:46 +02:00
sebthom
0491636666 fix: SSL: CERTIFICATE_VERIFY_FAILED when running compiled version 2024-08-05 13:50:43 +02:00
sebthom
a74c618b36 fix: ModuleNotFoundError: No module named 'backports' 2024-08-05 13:50:43 +02:00
sebthom
69de3d07f5 chore: Update Python dependencies 2024-08-05 13:43:38 +02:00
Jeppy
c1272626aa FIX id of web element to select special attribute changed 2024-07-23 12:14:19 +02:00
Jeppy
c967e901ac FIX select condition from new dialog instead 2024-07-23 12:14:19 +02:00
Jeppy
71eb632191 FIX extract special attributes from ad page
Format of special attribute changed to "key:value|key:value".
Instead of transforming the string to JSON, directly create a dictionary from belen_conf.
2024-07-23 11:42:41 +02:00
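The commit above describes the parsing change: special attributes now arrive as a `"key:value|key:value"` string instead of JSON, and are turned directly into a dictionary. A minimal sketch of that approach (the function name and input handling are illustrative assumptions, not the bot's actual code, which reads the string from the page's `belen_conf` object):

```python
def parse_special_attributes(raw: str) -> dict[str, str]:
    """Parse a "key:value|key:value" string into a dict (hypothetical helper)."""
    result: dict[str, str] = {}
    for pair in raw.split("|"):
        if not pair.strip():
            continue  # skip empty segments, e.g. from an empty input string
        # partition() splits on the first ":" only, so values may contain ":"
        key, _, value = pair.partition(":")
        result[key.strip()] = value.strip()
    return result
```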
sebthom
53f155f6c0 chore: use oldest supported Python version for dep updates 2024-07-19 17:13:03 +02:00
github-actions[bot]
39f9545d9b chore: Update Python dependencies 2024-07-07 20:08:38 +02:00
github-actions[bot]
effc91c269 chore: Update Python dependencies 2024-06-11 10:56:46 +02:00
Saghalt
eab9874bdb fix: special attributes cannot be parsed as JSON #312 2024-06-11 10:55:03 +02:00
github-actions[bot]
0f87e5573a chore: Update Python dependencies 2024-05-30 22:14:37 +02:00
sebthom
ef6b25fb46 Scan final build results using clamscan 2024-05-30 22:03:16 +02:00
sebthom
1e0990580d Log Github context 2024-05-30 21:46:15 +02:00
sebthom
9d0755c359 add MacOS ARM builds 2024-05-30 21:41:02 +02:00
Jeppy
4a8b6ecdf3 FIX selection of shipping options (#307) 2024-05-30 20:54:30 +02:00
Jeppy
929459a08d FIX selecting price type
selecting the wanted index doesn't trigger a change event which is necessary to update  internal variables regarding the price type
2024-05-30 20:27:25 +02:00
Jeppy
72283bf069 UPDATE wait for user interaction to solve captcha on publishing ad (closes Second-Hand-Friends/kleinanzeigen-bot#301) 2024-05-30 20:26:39 +02:00
Jeppy
b30867ca48 FIX extract sell directly from ad page
Web element with id `j-buy-now` does not exist anymore. Fetch the `payment-buttons-sidebar` instead and check the text for `Direkt kaufen`
2024-05-30 19:26:37 +02:00
Kjell Knudsen
ba73ebb393 fix navigation button selector 2024-05-11 15:49:03 +02:00
sebthom
822d3b7e7c upgrade dependencies
- setuptools 69.1.1 -> 69.5.1
- pytest-rerunfailures 13.0 -> 14.0
- autopep8 2.0.4 -> 2.1.0
- typing-extensions 4.10.0 -> 4.11.0
- pyright 1.1.353 -> 1.1.359
- pyinstaller 6.5.0 -> 6.6.0
- pyinstaller-hooks-contrib 2024.3 -> 2024.4
- nodriver 0.27rc3 -> 0.27rc4
2024-04-17 17:49:11 +02:00
sebthom
12974285ad start clamav before checkout 2024-04-04 19:00:38 +02:00
sebthom
657eadaa59 update workflow config 2024-04-04 14:24:01 +02:00
Maksim Bock
d1f50e9b16 fix broken link to categories in config_defaults.yaml 2024-04-03 21:46:49 +02:00
Tobias Faber
2c7d165b6e Fix download on given IDs list 2024-04-01 23:03:27 +02:00
dependabot[bot]
88d9e053cb ci(deps): bump toblux/start-clamd-github-action from 0.1 to 0.2
Bumps [toblux/start-clamd-github-action](https://github.com/toblux/start-clamd-github-action) from 0.1 to 0.2.
- [Release notes](https://github.com/toblux/start-clamd-github-action/releases)
- [Commits](https://github.com/toblux/start-clamd-github-action/compare/v0.1...v0.2)

---
updated-dependencies:
- dependency-name: toblux/start-clamd-github-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-01 19:58:07 +02:00
Moritz Graf
b3cc8ef5cd Fixing missing src directory in README (#288) 2024-04-01 18:09:50 +02:00
Tobias Faber
114afb6a73 fix: download of shipping info. Fixes #282 (#286) 2024-03-29 14:45:21 +01:00
Tobias Faber
db465af9b7 Fix VB Price with thousand separator 2024-03-29 13:39:09 +01:00
SphaeroX
5c8e00df52 fix: No HTML element found with ID 'my-manageads-adlist' (#284) 2024-03-28 19:45:42 +01:00
sebthom
46b901d0cc ci: remove unused token 2024-03-18 19:08:47 +01:00
github-actions[bot]
78c9b16058 chore: Update Python dependencies 2024-03-18 19:06:22 +01:00
dependabot[bot]
750f6a0ef2 ci(deps): bump geekyeggo/delete-artifact from 4 to 5
Bumps [geekyeggo/delete-artifact](https://github.com/geekyeggo/delete-artifact) from 4 to 5.
- [Release notes](https://github.com/geekyeggo/delete-artifact/releases)
- [Changelog](https://github.com/GeekyEggo/delete-artifact/blob/main/CHANGELOG.md)
- [Commits](https://github.com/geekyeggo/delete-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: geekyeggo/delete-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-18 19:06:12 +01:00
sebthom
ef3429435b fix: root CA certs missing in docker image 2024-03-16 22:09:02 +01:00
sebthom
7c982ad502 fix: don't hardcode republication_interval. Fixes #271 2024-03-14 12:51:19 +01:00
sebthom
a8290500e7 build kleinanzeigen-bot-windows-amd64-uncompressed.exe 2024-03-11 23:08:30 +01:00
dependabot[bot]
e75936da75 ci(deps): bump actions/checkout from 2 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-11 19:56:51 +01:00
sebthom
d5ae070bb3 chore: Update Python dependencies 2024-03-11 00:41:20 +01:00
sebthom
f943078d44 ci: configure clamd anti-virus/malware daemon 2024-03-11 00:13:50 +01:00
sebthom
61f362afb4 remove obsolete chrome-driver from docker image 2024-03-08 15:21:17 +01:00
sebthom
7133b26c37 update stale config 2024-03-08 13:00:14 +01:00
Samuel
d7fec9e4ce Fix: Crash on downloading ads with prices >=1000 Eur (#267)
Co-authored-by: Sebastian Thomschke <sebthom@users.noreply.github.com>
2024-03-08 12:06:47 +01:00
Sebastian Thomschke
e99f74bc58 Handle quotes in commit messages 2024-03-08 00:08:42 +01:00
sebthom
c9f12bfeea add "pdm debug" task 2024-03-07 23:21:50 +01:00
sebthom
e7c7ba90be support re-using already open browser window 2024-03-07 23:07:23 +01:00
sebthom
d1f33bb44a improve check if already logged in 2024-03-07 22:12:26 +01:00
sebthom
a5c1219faf update workflow config 2024-03-07 20:33:34 +01:00
sebthom
a441c5de73 replace selenium with nodriver 2024-03-07 20:33:23 +01:00
107 changed files with 30809 additions and 3055 deletions


@@ -1,3 +1,3 @@
 {
   "act": true
 }

.actrc — 2 lines changed

@@ -6,4 +6,4 @@
 -W .github/workflows/build.yml
 -j build
 --matrix os:ubuntu-latest
---matrix PYTHON_VERSION:3.12.1
+--matrix PYTHON_VERSION:3.14

.coderabbit.yaml (new file) — 187 lines

@@ -0,0 +1,187 @@
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
# CodeRabbit Configuration for Kleinanzeigen Bot
# Maintains project-specific rules for English code and translation system

# =============================================================================
# GLOBAL SETTINGS
# =============================================================================
language: "en"
tone_instructions: "Be strict about English-only code and translation system usage. Non-code may be in german. Focus on simple, maintainable solutions. Avoid unnecessary complexity and abstractions."
enable_free_tier: true

# =============================================================================
# REVIEWS
# =============================================================================
reviews:
  profile: "assertive"  # More feedback to catch complexity
  high_level_summary: true
  review_status: false
  commit_status: true
  changed_files_summary: true
  sequence_diagrams: true
  estimate_code_review_effort: true
  assess_linked_issues: true
  related_issues: true
  related_prs: true
  suggested_labels: false
  suggested_reviewers: true
  in_progress_fortune: false
  poem: false

  # Path filters to focus on important files
  path_filters:
    # Source code
    - "src/**/*.py"
    - "tests/**/*.py"
    - "scripts/**/*.py"
    # GitHub automation - workflows, dependabot, templates, etc.
    - ".github/**"
    # Root config files
    - "pyproject.toml"
    - "*.yaml"
    - "*.yml"
    - "**/*.md"
    # Exclude build/cache artifacts
    - "!**/__pycache__/**"
    - "!**/.pytest_cache/**"
    - "!**/.mypy_cache/**"
    - "!**/.ruff_cache/**"
    - "!dist/**"
    - "!build/**"
    - "!*.egg-info/**"
    # Exclude IDE-specific files
    - "!.vscode/**"
    - "!.idea/**"
    - "!.DS_Store"
    # Exclude temporary files
    - "!*.log"
    - "!*.tmp"
    - "!*.temp"
    # Exclude lock files (too noisy)
    - "!pdm.lock"

  # Path-specific instructions for different file types
  path_instructions:
    - path: "src/kleinanzeigen_bot/**/*.py"
      instructions: |
        CRITICAL RULES FOR KLEINANZEIGEN BOT:
        1. ALL code, comments, and text MUST be in English
        2. NEVER access live website in tests (bot detection risk)
        3. Use WebScrapingMixin for browser automation
        4. Handle TimeoutError for all web operations
        5. Use ensure() for critical validations
        6. Don't add features until explicitly needed
        7. Keep solutions simple and straightforward
        8. Use async/await for I/O operations
        9. Follow Pydantic model patterns
        10. Use proper error handling and logging
        11. Test business logic separately from web scraping
        12. Include SPDX license headers on all Python files
        13. Use type hints for all function parameters and return values
        14. Use structured logging with context and appropriate log levels.
        15. Log message strings should be plain English without `_()` (TranslatingLogger handles translation); wrap non-log user-facing strings with `_()` and add translations
        16. NEVER flag PEP 8 whitespace/spacing issues (autopep8 handles these automatically via pdm run format)
    - path: "tests/**/*.py"
      instructions: |
        TESTING RULES:
        1. NEVER access live website in tests (bot detection risk)
        2. Use @patch for web operations in tests
        3. Use test fixtures for browser automation
        4. Test Pydantic models without web scraping
        5. Mock all web operations in tests
        6. Use pytest markers: unit, integration, smoke
        7. Unit tests: fast, isolated, no external dependencies
        8. Integration tests: use mocks, test with external dependencies
        9. Smoke tests: critical path, no mocks, no browser (NOT E2E tests)
        10. All test code must be in English
        11. Test observable behavior, not implementation
        12. Use fakes/dummies instead of mocks in smoke tests
        13. Focus on minimal health checks, not full user workflows
        14. Include SPDX license headers
        15. Use descriptive test names in English
        16. NEVER flag PEP 8 whitespace/spacing issues (autopep8 handles these automatically via pdm run format)
    - path: "scripts/**/*.py"
      instructions: |
        SCRIPT RULES:
        1. All code must be in English
        2. Use proper error handling
        3. Follow project conventions
        4. Keep scripts simple and focused
        5. Use appropriate logging
        6. Include SPDX license headers
        7. Use type hints for all functions
    - path: "docs/**/*.md"
      instructions: |
        DOCUMENTATION RULES:
        1. All documentation must be in English
        2. Use clear, concise language
        3. Include practical examples
        4. Include troubleshooting information
        5. Follow markdown best practices

  # Auto review configuration
  auto_review:
    enabled: true
    auto_incremental_review: true
    drafts: false
    ignore_title_keywords: ["wip", "draft", "temp"]
    labels: ["!wip", "!draft"]  # Review all PRs except those with wip or draft labels

  # Tools configuration
  tools:
    ruff:
      enabled: true
    gitleaks:
      enabled: true
    semgrep:
      enabled: true
    markdownlint:
      enabled: true
    yamllint:
      enabled: true

  finishing_touches:
    docstrings:
      enabled: false
    unit_tests:
      enabled: false

# =============================================================================
# KNOWLEDGE BASE
# =============================================================================
knowledge_base:
  opt_out: false
  web_search:
    enabled: true
  code_guidelines:
    enabled: true
    filePatterns:
      - "**/.cursorrules"
      - "**/CLAUDE.md"
      - "**/GEMINI.md"
      - "**/.cursor/rules/*"
      - "**/.windsurfrules"
      - "**/.clinerules/*"
      - "**/.rules/*"
      - "**/AGENT.md"
      - "**/AGENTS.md"
      - "README.md"
      - "CONTRIBUTING.md"
      - "docs/**/*.md"
  learnings:
    scope: "auto"
  issues:
    scope: "auto"
  pull_requests:
    scope: "auto"


@@ -6,7 +6,23 @@ labels: ["bug"]
 body:
   - type: markdown
     attributes:
-      value: Thanks for taking the time to fill out this bug report!
+      value: |
+        Thank you for taking the time to submit a bug report!
+        This project is run by volunteers, and we depend on users like you to improve it.
+        Please try to investigate the issue yourself, and if possible submit a pull request with a fix.
+  - type: checkboxes
+    id: reproduce-latest
+    attributes:
+      label: 🔄 Tested on Latest Release
+      description: |
+        Only open issues for problems reproducible with the latest release:
+        https://github.com/Second-Hand-Friends/kleinanzeigen-bot/releases/tag/latest
+      options:
+        - label: I confirm that I can reproduce this issue on the latest version
+          required: true
   - type: textarea
     id: expected-behaviour
@@ -35,6 +51,18 @@ body:
     validations:
       required: true
+  - type: dropdown
+    id: operating-system
+    attributes:
+      label: 💻 What operating systems are you seeing the problem on?
+      multiple: true
+      options:
+        - Linux
+        - MacOS
+        - Windows
+    validations:
+      required: true
   - type: dropdown
     id: browsers
     attributes:
@@ -44,16 +72,6 @@ body:
       - Chrome
       - Microsoft Edge
-  - type: dropdown
-    id: operating-system
-    attributes:
-      label: 💻 What operating systems are you seeing the problem on? (if applicable)
-      multiple: true
-      options:
-        - Linux
-        - MacOS
-        - Windows
   - type: textarea
     id: logs
     attributes:


@@ -1,2 +1,6 @@
 # disable blank issue creation
 blank_issues_enabled: false
+contact_links:
+  - name: Community Support
+    url: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/discussions
+    about: Please ask and answer questions here.


@@ -6,7 +6,12 @@ labels: [enhancement]
 body:
   - type: markdown
     attributes:
-      value: Thanks for taking the time to fill out this enhancement request!
+      value: |
+        Thanks for taking the time to fill out this enhancement request!
+        This project is run by volunteers, and we depend on users like you to improve it.
+        Please consider implementing the enhancement yourself and submitting a pull request with your changes.
   - type: textarea
     id: problem


@@ -1,6 +1,27 @@
-*Issue #, if available:*
-
-*Description of changes:*
+## Description
+*Provide a concise summary of the changes introduced in this pull request.*
+- Link to the related issue(s): Issue #
+- Describe the motivation and context for this change.
+
+## 📋 Changes Summary
+Bullet-point key changes introduced.
+Mention any dependencies, configuration changes, or additional requirements introduced.
+
+### ⚙️ Type of Change
+Select the type(s) of change(s) included in this pull request:
+- [ ] 🐞 Bug fix (non-breaking change which fixes an issue)
+- [ ] ✨ New feature (adds new functionality without breaking existing usage)
+- [ ] 💥 Breaking change (changes that might break existing user setups, scripts, or configurations)
+
+## ✅ Checklist
+Before requesting a review, confirm the following:
+- [ ] I have reviewed my changes to ensure they meet the project's standards.
+- [ ] I have tested my changes and ensured that all tests pass (`pdm run test`).
+- [ ] I have formatted the code (`pdm run format`).
+- [ ] I have verified that linting passes (`pdm run lint`).
+- [ ] I have updated documentation where necessary.
 
 By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.


@@ -1,17 +1,20 @@
-# https://help.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
+# https://docs.github.com/en/code-security/dependabot/working-with-dependabot/dependabot-options-reference
 version: 2
 updates:
   - package-ecosystem: github-actions
     directory: /
     schedule:
       interval: weekly
      day: monday
-      time: "17:00"
+      time: "14:00"
     commit-message:
-      prefix: fix
-      prefix-development: chore
+      prefix: ci
+      prefix-development: ci
       include: scope
     labels:
-      - pinned
-      - dependencies
-      - gha
+      - dependencies
+      - gha
+      - pinned
+    groups:
+      all-actions:
+        patterns: ["*"]

.github/labeler.yml (new file) — 15 lines

@@ -0,0 +1,15 @@
# see https://github.com/srvaroa/labeler
version: 1
issues: False
labels:
  - label: "bug"
    title: "^fix(\\(.*\\))?:.*"
  - label: "dependencies"
    title: "^deps(\\(.*\\))?:.*"
  - label: "documentation"
    title: "^docs(\\(.*\\))?:.*"
  - label: "enhancement"
    title: "^(enh|feat)(\\(.*\\))?:.*"
  - label: "work-in-progress"
    title: "^WIP:.*"
    mergeable: false
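The `title` patterns in this labeler config are plain regexes matched against conventional-commit-style PR titles. A quick sketch of how they behave (the regexes are copied from the config above, once YAML's `\\(` unescapes to `\(`; the function and rule table are illustrative, not part of the project):

```python
import re

# title regexes as they appear in .github/labeler.yml, keyed by the label they apply
RULES = {
    "bug": r"^fix(\(.*\))?:.*",
    "dependencies": r"^deps(\(.*\))?:.*",
    "documentation": r"^docs(\(.*\))?:.*",
    "enhancement": r"^(enh|feat)(\(.*\))?:.*",
    "work-in-progress": r"^WIP:.*",
}

def labels_for(title: str) -> list[str]:
    """Return the labels whose title regex matches the given PR title."""
    return [label for label, pattern in RULES.items() if re.match(pattern, title)]
```

Note that the optional `(\(.*\))?` group is what lets both `fix: …` and scoped titles like `fix(login): …` receive the same label.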

.github/stale.yml (deleted) — 26 lines

@@ -1,26 +0,0 @@
# Configuration for probot-stale - https://github.com/probot/stale
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 120
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 14
# Issues with these labels will never be considered stale
exemptLabels:
- enhancement
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: wontfix
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
  This issue has been automatically marked as stale because it has not had
  recent activity. It will be closed in 7 days if no further activity occurs.
  If the issue is still valid, please add a respective comment to prevent this
  issue from being closed automatically. Thank you for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false


@@ -1,51 +1,72 @@
 # SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
 # SPDX-License-Identifier: AGPL-3.0-or-later
 # SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot
 #
-# https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions
+# https://docs.github.com/en/actions/reference/workflows-and-actions/workflow-syntax
 name: Build
-on:
+on:  # https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows
+  schedule:
+    # https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows#schedule
+    - cron: '0 15 1 * *'
   push:
-    branches-ignore:  # build all branches except:
-    - 'dependabot/**'     # prevent GHA triggered twice (once for commit to the branch and once for opening/syncing the PR)
-    - 'dependencies/pdm'  # prevent GHA triggered twice (once for commit to the branch and once for opening/syncing the PR)
-    tags-ignore:  # don't build tags
-    - '**'
+    branches: ['**']     # build all branches
+    tags-ignore: ['**']  # don't build tags
     paths-ignore:
     - '**/*.md'
+    - '.act*'
     - '.editorconfig'
     - '.git*'
     - '.github/*.yml'
     - '.github/ISSUE_TEMPLATE/*'
     - '.github/workflows/codeql-analysis.yml'
+    - '.github/workflows/publish-release.yml'
+    - '.github/workflows/stale.yml'
     - '.github/workflows/update-python-deps.yml'
+    - '.github/workflows/validate-pr.yml'
+    - 'codecov.yml'
   pull_request:
     paths-ignore:
     - '**/*.md'
+    - '.act*'
     - '.editorconfig'
     - '.git*'
     - '.github/*.yml'
     - '.github/ISSUE_TEMPLATE/*'
     - '.github/workflows/codeql-analysis.yml'
+    - '.github/workflows/publish-release.yml'
+    - '.github/workflows/stale.yml'
     - '.github/workflows/update-python-deps.yml'
+    - '.github/workflows/validate-pr.yml'
+    - 'codecov.yml'
   workflow_dispatch:
-    # https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/
+    # https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows#workflow_dispatch
 defaults:
   run:
     shell: bash
 jobs:
   ###########################################################
   build:
   ###########################################################
+    # Skip push runs for non-main/release branches in the main repo; allow forks to run on feature branches.
+    if: github.event_name != 'push' || github.ref_name == 'main' || github.ref_name == 'release' || github.repository != 'Second-Hand-Friends/kleinanzeigen-bot'
+    permissions:
+      packages: write
     strategy:
       fail-fast: false
       matrix:
         include:
-        - os: macos-latest
+        - os: macos-15-intel  # X86
+          PYTHON_VERSION: "3.10"
+          PUBLISH_RELEASE: false
+        - os: macos-latest    # ARM
           PYTHON_VERSION: "3.10"
           PUBLISH_RELEASE: false
         - os: ubuntu-latest
@@ -54,28 +75,43 @@ jobs:
         - os: windows-latest
           PYTHON_VERSION: "3.10"
           PUBLISH_RELEASE: false
-        - os: macos-latest
-          PYTHON_VERSION: "3.12.1"
+        - os: macos-15-intel  # X86
+          PYTHON_VERSION: "3.14"
+          PUBLISH_RELEASE: true
+        - os: macos-latest    # ARM
+          PYTHON_VERSION: "3.14"
           PUBLISH_RELEASE: true
         - os: ubuntu-latest
-          PYTHON_VERSION: "3.12.1"
+          PYTHON_VERSION: "3.14"
           PUBLISH_RELEASE: true
         - os: windows-latest
-          PYTHON_VERSION: "3.12.1"
+          PYTHON_VERSION: "3.14"
           PUBLISH_RELEASE: true
-    runs-on: ${{ matrix.os }}
+    runs-on: ${{ matrix.os }}  # https://github.com/actions/runner-images#available-images
+    timeout-minutes: 20
     steps:
-    - name: Git checkout
-      uses: actions/checkout@v4  # https://github.com/actions/checkout
+    - name: "Show: GitHub context"
+      env:
+        GITHUB_CONTEXT: ${{ toJSON(github) }}
+      run: printf '%s' "$GITHUB_CONTEXT" | python -m json.tool
+    - name: "Show: environment variables"
+      run: env | sort
     - name: Configure Fast APT Mirror
-      uses: vegardit/fast-apt-mirror.sh@v1
+      uses: vegardit/fast-apt-mirror.sh@29a5ef3401107220fc3c32a0c659b6a1211f9e0f  # v1.4.2
-    - name: Install Chromium Browser
+    - name: Git Checkout
+      uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd  # v6.0.0
+      # https://github.com/actions/checkout
+    - name: "Install: Chromium Browser"
       if: env.ACT == 'true' && startsWith(matrix.os, 'ubuntu')
       run: |
         if ! hash google-chrome &>/dev/null; then
@@ -85,11 +121,11 @@
         fi
-    - name: "Install Python and PDM"  # https://github.com/pdm-project/setup-pdm
-      uses: pdm-project/setup-pdm@v4
+    - name: "Install: Python and PDM"  # https://github.com/pdm-project/setup-pdm
+      uses: pdm-project/setup-pdm@94a823180e06fcde4ad29308721954a521c96ed0  # v4.4
       with:
         python-version: "${{ matrix.PYTHON_VERSION }}"
-        cache: true
+        cache: ${{ !startsWith(matrix.os, 'macos') }}  # https://github.com/pdm-project/setup-pdm/issues/55
     - name: "Install: Python dependencies"
@@ -102,23 +138,42 @@
         if [[ ! -e .venv ]]; then
           pdm venv create || true
         fi
-        pdm install -v
+        pdm sync --clean -v
     - name: Display project metadata
       run: pdm show
-    - name: Security scan
-      run: pdm run scan
+    - name: Check generated schemas and default docs config
+      if: matrix.os == 'ubuntu-latest' && matrix.PYTHON_VERSION == '3.14'
+      run: pdm run python scripts/check_generated_artifacts.py
-    - name: Check code style
-      run: pdm run lint
+    - name: Check with pip-audit
+      # until https://github.com/astral-sh/ruff/issues/8277
+      run:
+        pdm run pip-audit --progress-spinner off --skip-editable --verbose
+    - name: Check with ruff
+      run: pdm run ruff check
+    - name: Check with mypy
+      run: pdm run mypy
+    - name: Check with basedpyright
+      run: pdm run basedpyright
+    - name: Prepare split coverage artifacts
+      run: pdm run ci:coverage:prepare
     - name: Run unit tests
-      run: pdm run utest
+      run: pdm run ci:test:unit -vv
     - name: Run integration tests
@@ -126,15 +181,20 @@
         set -eux
         case "${{ matrix.os }}" in
           ubuntu-*)
             sudo apt-get install --no-install-recommends -y xvfb
-            xvfb-run pdm run itest
-            ;;
-          *) pdm run itest ;;
+            # Run tests INSIDE xvfb context
+            xvfb-run bash -c 'pdm run ci:test:integration -vv'
+            ;;
+          *) pdm run ci:test:integration -vv
+            ;;
         esac
+    - name: Run smoke tests
+      run: pdm run ci:test:smoke -vv
     - name: Run app from source
       run: |
         echo "
@@ -171,10 +231,16 @@
         /tmp/upx/upx.exe --version
     - name: Build self-contained executable
       run: |
         set -eux
+        if [[ "${{ runner.os }}" == "Windows" ]]; then
+          NO_UPX=1 pdm run compile
+          mv dist/kleinanzeigen-bot.exe dist/kleinanzeigen-bot-uncompressed.exe
+        fi
         pdm run compile
         ls -l dist
@@ -190,8 +256,8 @@
     - name: Upload self-contained executable
-      uses: actions/upload-artifact@v4
-      if: ${{ github.ref_name == 'main' && matrix.PUBLISH_RELEASE && !env.ACT }}
+      uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f  # v7.0.0
+      if: ((github.ref_name == 'main' || github.ref_name == 'release') && matrix.PUBLISH_RELEASE || github.event_name == 'workflow_dispatch') && !env.ACT
       with:
         name: artifacts-${{ matrix.os }}
         path: dist/kleinanzeigen-bot*
@@ -208,7 +274,7 @@
     - name: Publish Docker image
-      if: ${{ github.ref_name == 'main' && matrix.PUBLISH_RELEASE && startsWith(matrix.os, 'ubuntu') && !env.ACT }}
+      if: github.repository_owner == 'Second-Hand-Friends' && github.ref_name == 'main' && matrix.PUBLISH_RELEASE && startsWith(matrix.os, 'ubuntu') && !env.ACT
       run: |
         set -eux
@@ -219,24 +285,132 @@
           docker push ghcr.io/$image_name
+    - name: Collect coverage reports
+      uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f  # v7.0.0
+      if: (github.ref_name == 'main' || github.event_name == 'pull_request') && !env.ACT
+      with:
+        name: coverage-${{ matrix.os }}-py${{ matrix.PYTHON_VERSION }}
+        include-hidden-files: true
+        path: .temp/coverage-*.xml
+        if-no-files-found: error
+  ###########################################################
+  publish-coverage:
+  ###########################################################
+    needs: [build]
+    runs-on: ubuntu-latest
+    timeout-minutes: 5
+    if: (github.ref_name == 'main' || github.event_name == 'pull_request') && !github.event.act
+    permissions:
+      contents: read
+    steps:
+    - name: Git Checkout  # required to avoid https://docs.codecov.com/docs/error-reference#unusable-reports
+      uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd  # v6.0.0
+      # https://github.com/actions/checkout
+    - name: Download coverage reports
+      uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3  # v8.0.0
+      with:
+        pattern: coverage-*
+        path: coverage
+    - name: List coverage reports
+      run: find . -name coverage-*.xml
+    - name: Publish unit-test coverage
+      uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de  # v5.0.0
+      # https://github.com/codecov/codecov-action
+      with:
+        slug: ${{ github.repository }}
+        name: unit-coverage
+        flags: unit-tests
+        disable_search: true
+        files: coverage/**/coverage-unit.xml
+      env:
+        CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+    - name: Publish integration-test coverage
+      uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de  # v5.0.0
+      # https://github.com/codecov/codecov-action
+      with:
+        slug: ${{ github.repository }}
+        name: integration-coverage
+        flags: integration-tests
+        disable_search: true
+        files: coverage/**/coverage-integration.xml
+      env:
+        CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+    - name: Publish smoke-test coverage
+      uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de  # v5.0.0
+      # https://github.com/codecov/codecov-action
+      with:
+        slug: ${{ github.repository }}
+        name: smoke-coverage
+        flags: smoke-tests
+        disable_search: true
+        files: coverage/**/coverage-smoke.xml
+      env:
+        CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
   ###########################################################
   publish-release:
   ###########################################################
+    needs: [build, publish-coverage]
     runs-on: ubuntu-latest
-    needs:
-    - build
-    if: ${{ github.ref_name == 'main' && !github.event.act }}
-    concurrency: publish-latest-release  # https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idconcurrency
+    timeout-minutes: 5
+    permissions:
+      contents: write  # to delete/create GitHub releases
+      packages: write  # to delete untagged docker images
+    # run on 'main' and 'release' branch when:
+    #   build succeeded, AND
+    #   publish-coverage succeeded OR was skipped
+    if: >
+      always()
+      && needs.build.result == 'success'
+      && (needs.publish-coverage.result == 'success' || needs.publish-coverage.result == 'skipped')
+      && (github.ref_name == 'main' || github.ref_name == 'release')
+      && !github.event.act
+    concurrency: publish-${{ github.ref_name }}-release  # https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idconcurrency
     steps:
-    - name: Git checkout
+    - name: "Show: GitHub context"
+      env:
+        GITHUB_CONTEXT: ${{ toJSON(github) }}
+      run: echo $GITHUB_CONTEXT
+    - name: "Show: environment variables"
+      run: env | sort
+    - name: Configure Fast APT Mirror
+      uses: vegardit/fast-apt-mirror.sh@29a5ef3401107220fc3c32a0c659b6a1211f9e0f  # v1.4.2
+    - name: Git Checkout
       # only required by "gh release create" to prevent "fatal: Not a git repository"
-      uses: actions/checkout@v4  # https://github.com/actions/checkout
+      uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd  # v6.0.0
+      # https://github.com/actions/checkout
+      with:
+        fetch-depth: 0
     - name: Delete untagged docker image
       continue-on-error: true
-      uses: actions/delete-package-versions@v5
+      uses: actions/delete-package-versions@e5bc658cc4c965c472efe991f8beea3981499c55  # v5.0.0
       with:
         token: ${{ github.token }}
         delete-only-untagged-versions: true
@@ -245,43 +419,172 @@ jobs:
      - name: Download build artifacts
        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8.0.0

      - name: Rename build artifacts
        run: |
          mv artifacts-macos-15-intel/kleinanzeigen-bot kleinanzeigen-bot-darwin-amd64
          mv artifacts-macos-latest/kleinanzeigen-bot kleinanzeigen-bot-darwin-arm64
          mv artifacts-ubuntu-latest/kleinanzeigen-bot kleinanzeigen-bot-linux-amd64
          mv artifacts-windows-latest/kleinanzeigen-bot-uncompressed.exe kleinanzeigen-bot-windows-amd64-uncompressed.exe
          mv artifacts-windows-latest/kleinanzeigen-bot.exe kleinanzeigen-bot-windows-amd64.exe

      - name: Install ClamAV
        run: |
          sudo apt-get update
          sudo apt-get install -y clamav
          sudo systemctl stop clamav-freshclam.service
          sudo freshclam

      - name: Scan build artifacts
        run: clamscan kleinanzeigen-*

      - name: "Determine release name"
        id: release
        if: github.event_name != 'schedule'
        run: |
          case "$GITHUB_REF_NAME" in
            main)
              echo "name=preview" >>"$GITHUB_OUTPUT"
              ;;
            release)
              echo "name=latest" >>"$GITHUB_OUTPUT"
              ;;
          esac

      - name: "Generate release notes"
        if: steps.release.outputs.name && steps.release.outputs.name != ''
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          RELEASE_NAME: ${{ steps.release.outputs.name }}
          LEGAL_NOTICE: |
            ---

            #### ⚠️ Rechtlicher Hinweis
            <p>Die Verwendung dieses Programms kann unter Umständen gegen die zum jeweiligen Zeitpunkt bei kleinanzeigen.de geltenden Nutzungsbedingungen verstoßen.
            Es liegt in Ihrer Verantwortung, die rechtliche Zulässigkeit der Nutzung dieses Programms zu prüfen.
            Die Entwickler übernehmen keinerlei Haftung für mögliche Schäden oder rechtliche Konsequenzen.
            Die Nutzung erfolgt auf eigenes Risiko. Jede rechtswidrige Verwendung ist untersagt.</p>

            #### ⚠️ Legal notice
            <p>The use of this program could violate the terms of service of kleinanzeigen.de valid at the time of use.
            It is your responsibility to ensure the legal compliance of its use.
            The developers assume no liability for any damages or legal consequences.
            Use is at your own risk. Any unlawful use is strictly prohibited.</p>
        run: |
          set -euo pipefail
          # We reuse the moving "latest"/"preview" tags for releases. GitHub's generate-notes compares
          # tag_name -> previous_tag_name. If we pass the moving tag as tag_name before it moves, the
          # comparison is old -> old (empty notes). We avoid this by using a fake tag_name (not created)
          # and anchoring previous_tag_name to the current moving tag. This yields old -> new notes
          # without creating or pushing any tags (important: pushes can be blocked for workflow files).
          if ! gh release view "$RELEASE_NAME" --json tagName --jq '.tagName' >/dev/null 2>&1; then
            echo "ERROR: Failed to query existing '$RELEASE_NAME' release; cannot generate release notes." >&2
            exit 1
          fi
          NOTES_TAG="${RELEASE_NAME}-notes-${GITHUB_RUN_ID}"
          echo "Generating notes: tag_name=${NOTES_TAG}, previous_tag_name=${RELEASE_NAME}, target_commitish=${GITHUB_SHA}"
          # Prefer GitHub's generate-notes API so we get PR links and @mentions
          gh api -X POST "repos/${GITHUB_REPOSITORY}/releases/generate-notes" \
            -f tag_name="$NOTES_TAG" \
            -f target_commitish="$GITHUB_SHA" \
            -f previous_tag_name="$RELEASE_NAME" \
            --jq '.body' > release-notes.md
          if ! grep -q '[^[:space:]]' release-notes.md; then
            echo "ERROR: GitHub generate-notes returned an empty body." >&2
            exit 1
          fi
          # Remove the "Full Changelog" line to avoid broken links from the fake tag_name.
          sed -E -i.bak '/^\*\*Full Changelog\*\*:/d' release-notes.md
          rm -f release-notes.md.bak
          printf "\n%s\n" "$LEGAL_NOTICE" >> release-notes.md

      - name: "Delete previous '${{ steps.release.outputs.name }}' release"
        if: steps.release.outputs.name && steps.release.outputs.name != ''
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          RELEASE_NAME: ${{ steps.release.outputs.name }}
        # https://cli.github.com/manual/gh_release_delete
        run: |
          GH_DEBUG=1 gh release delete "$RELEASE_NAME" --yes --cleanup-tag || true

      - name: "Create '${{ steps.release.outputs.name }}' Release"
        if: steps.release.outputs.name && steps.release.outputs.name != ''
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          RELEASE_NAME: ${{ steps.release.outputs.name }}
        # https://cli.github.com/manual/gh_release_create
        run: |
          GH_DEBUG=1 gh release create "$RELEASE_NAME" \
            --title "$RELEASE_NAME" \
            ${{ steps.release.outputs.name == 'latest' && '--latest' || '' }} \
            ${{ steps.release.outputs.name == 'preview' && '--prerelease' || '' }} \
            --notes-file release-notes.md \
            --target "${{ github.sha }}" \
            kleinanzeigen-bot-darwin-amd64 \
            kleinanzeigen-bot-darwin-arm64 \
            kleinanzeigen-bot-linux-amd64 \
            kleinanzeigen-bot-windows-amd64.exe \
            kleinanzeigen-bot-windows-amd64-uncompressed.exe

      - name: "Delete intermediate build artifacts"
        uses: geekyeggo/delete-artifact@f275313e70c08f6120db482d7a6b98377786765b # v5.0.0
        # https://github.com/GeekyEggo/delete-artifact/
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          name: "*"
          failOnError: false
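The branch-to-release-name mapping in the "Determine release name" step, and the throwaway notes tag derived from it, can be exercised locally. A minimal sketch (the helper function `release_name_for` is illustrative and not part of the workflow):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mirror of the case statement in the "Determine release name" step:
# 'main' publishes a moving 'preview' release, 'release' a moving 'latest' one.
release_name_for() {
  case "$1" in
    main)    echo "preview" ;;
    release) echo "latest" ;;
    *)       echo "" ;;  # other refs publish nothing
  esac
}

RELEASE_NAME="$(release_name_for release)"
GITHUB_RUN_ID=123456789
# The throwaway tag passed to the generate-notes API; it is never actually created,
# so the moving "latest"/"preview" tag stays put while notes still span old -> new.
NOTES_TAG="${RELEASE_NAME}-notes-${GITHUB_RUN_ID}"
echo "$NOTES_TAG"  # latest-notes-123456789
```

Because the run ID is unique per workflow run, repeated runs never collide on the fake tag name.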
  ###########################################################
  dependabot-pr-auto-merge:
  ###########################################################
    needs: build
    if: github.event_name == 'pull_request' && github.actor == 'dependabot[bot]'
    runs-on: ubuntu-latest
    timeout-minutes: 5

    permissions:
      contents: write
      pull-requests: write

    steps:
      - name: Merge Dependabot PR
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR_URL: ${{ github.event.pull_request.html_url }}
        run: gh pr merge --auto --rebase "$PR_URL"

  ###########################################################
  pdm-pr-auto-merge:
  ###########################################################
    needs: build
    if: github.event_name == 'pull_request' && github.actor == 'kleinanzeigen-bot-tu[bot]' && github.head_ref == 'dependencies/pdm'
    runs-on: ubuntu-latest
    timeout-minutes: 5

    permissions:
      contents: write
      pull-requests: write

    steps:
      - name: Merge PDM dependency-update PR
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR_URL: ${{ github.event.pull_request.html_url }}
        run: gh pr merge --auto --rebase "$PR_URL"


@@ -1,31 +1,40 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot
#
# https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning
name: "CodeQL"

on:  # https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows
  schedule:
    # https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows#schedule
    - cron: '10 10 * * 1'  # Mondays 10:10 UTC
  push:
    branches: ['main', 'release']  # run only on protected branches to avoid duplicate PR runs
    tags-ignore: ['**']  # don't build tags
    paths-ignore:
      - '**/*.md'
      - '.act*'
      - '.editorconfig'
      - '.git*'
      - 'codecov.yml'
  pull_request:
    paths-ignore:
      - '**/*.md'
      - '.act*'
      - '.editorconfig'
      - '.git*'
      - 'codecov.yml'
  workflow_dispatch:
    # https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows#workflow_dispatch

defaults:
  run:
    shell: bash

env:
  PYTHON_VERSION: "3.14"

jobs:
@@ -33,23 +42,40 @@ jobs:
  analyze:
  ###########################################################
    runs-on: ubuntu-latest
    timeout-minutes: 10

    permissions:
      # required for all workflows
      security-events: write
      # required to fetch internal or private CodeQL packs
      packages: read
      # only required for workflows in private repositories
      actions: read
      contents: read

    steps:
      - name: "Show: GitHub context"
        env:
          GITHUB_CONTEXT: ${{ toJSON(github) }}
        run: printf '%s' "$GITHUB_CONTEXT" | python -m json.tool

      - name: "Show: environment variables"
        run: env | sort

      - name: Git Checkout
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.0
        # https://github.com/actions/checkout

      - name: "Install: Python and PDM"  # https://github.com/pdm-project/setup-pdm
        uses: pdm-project/setup-pdm@94a823180e06fcde4ad29308721954a521c96ed0 # v4.4
        with:
          python-version: "${{ env.PYTHON_VERSION }}"
          cache: true

      - name: "Install: Python dependencies"
@@ -59,15 +85,22 @@ jobs:
          python --version
          python -m pip install --upgrade pip
          pip install --upgrade pdm
          if [[ ! -e .venv ]]; then
            pdm venv create || true
          fi
          pdm sync --clean -v

      - name: Initialize CodeQL
        uses: github/codeql-action/init@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
        # https://github.com/github/codeql-action/blob/main/init/action.yml
        with:
          languages: actions,python
          # https://github.com/github/codeql-action#build-modes
          build-mode: none
          # https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning#using-queries-in-ql-packs
          queries: security-and-quality

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
        # https://github.com/github/codeql-action

.github/workflows/publish-release.yml

@@ -0,0 +1,65 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot
#
# https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions
name: Publish Release

on:
  workflow_dispatch:
    # https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows

defaults:
  run:
    shell: bash

jobs:
  ###########################################################
  publish-release:
  ###########################################################
    runs-on: ubuntu-latest
    timeout-minutes: 5

    permissions:
      contents: write

    steps:
      - name: "Show: GitHub context"
        env:
          GITHUB_CONTEXT: ${{ toJSON(github) }}
        run: echo $GITHUB_CONTEXT

      - name: "Show: environment variables"
        run: env | sort

      - name: Generate GitHub Access Token
        uses: tibdex/github-app-token@3beb63f4bd073e61482598c45c71c1019b59b73a # v2.1.0
        # https://github.com/tibdex/github-app-token
        id: generate_token
        # see https://github.com/peter-evans/create-pull-request/blob/main/docs/concepts-guidelines.md#authenticating-with-github-app-generated-tokens
        with:
          # see https://github.com/organizations/Second-Hand-Friends/settings/apps/kleinanzeigen-bot-tu
          app_id: ${{ secrets.DEPS_UPDATER_APP_ID }}
          private_key: ${{ secrets.DEPS_UPDATER_PRIVATE_KEY }}

      - name: Git Checkout
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.0
        # https://github.com/actions/checkout
        with:
          token: ${{ steps.generate_token.outputs.token }}
          ref: main
          fetch-depth: 0

      - name: Push main to release branch
        run: |
          set -eux
          # Push current main state to release branch to trigger release creation
          git push origin HEAD:release

.github/workflows/stale.yml

@@ -0,0 +1,55 @@
# https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions
name: Stale issues

on:
  schedule:
    - cron: '0 15 1,15 * *'
  workflow_dispatch:
    # https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/

permissions:
  issues: write
  pull-requests: write

jobs:
  stale:
    runs-on: ubuntu-latest
    timeout-minutes: 10

    steps:
      - name: Git checkout
        uses: actions/checkout@v6.0.2 # https://github.com/actions/checkout

      - name: Run stale action
        uses: actions/stale@v10 # https://github.com/actions/stale
        with:
          days-before-stale: 90
          days-before-close: 14
          stale-issue-message: >
            This issue has been automatically marked as stale because it has not had
            recent activity. It will be closed in 14 days if no further activity occurs.
            If the issue is still valid, please add a respective comment to prevent this
            issue from being closed automatically. Thank you for your contributions.
          stale-issue-label: stale
          close-issue-label: wontfix
          exempt-issue-labels: |
            enhancement
            pinned
            security

      - name: Run stale action (for enhancements)
        uses: actions/stale@v10 # https://github.com/actions/stale
        with:
          days-before-stale: 360
          days-before-close: 14
          stale-issue-message: >
            This issue has been automatically marked as stale because it has not had
            recent activity. It will be closed in 14 days if no further activity occurs.
            If the issue is still valid, please add a respective comment to prevent this
            issue from being closed automatically. Thank you for your contributions.
          stale-issue-label: stale
          close-issue-label: wontfix
          only-labels: enhancement
          exempt-issue-labels: |
            pinned
            security


@@ -2,13 +2,13 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
#
# https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions
name: Update Python Dependencies

on:
  schedule:
    # https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows
    - cron: '0 10 * * *'  # daily at 10 a.m.
  workflow_dispatch:
    # https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/
@@ -17,7 +17,11 @@ defaults:
    shell: bash

env:
  PYTHON_VERSION: "3.10"

permissions:
  contents: write
  pull-requests: write

jobs:
@@ -25,10 +29,22 @@ jobs:
  update-python-deps:
  ###########################################################
    runs-on: ubuntu-latest
    timeout-minutes: 10

    steps:
      - name: "Show: GitHub context"
        env:
          GITHUB_CONTEXT: ${{ toJSON(github) }}
        run: echo $GITHUB_CONTEXT

      - name: "Show: environment variables"
        run: env | sort

      - name: Generate GitHub Access Token
        uses: tibdex/github-app-token@3beb63f4bd073e61482598c45c71c1019b59b73a # v2.1.0
        # https://github.com/tibdex/github-app-token
        id: generate_token
        # see https://github.com/peter-evans/create-pull-request/blob/main/docs/concepts-guidelines.md#authenticating-with-github-app-generated-tokens
        with:
@@ -37,54 +53,68 @@ jobs:
          private_key: ${{ secrets.DEPS_UPDATER_PRIVATE_KEY }}

      - name: Git Checkout
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.0
        # https://github.com/actions/checkout
        with:
          token: ${{ steps.generate_token.outputs.token }}

      - name: "Install: Python and PDM"  # https://github.com/pdm-project/setup-pdm
        uses: pdm-project/setup-pdm@94a823180e06fcde4ad29308721954a521c96ed0 # v4.4
        with:
          python-version: "${{ env.PYTHON_VERSION }}"
          cache: true

      - name: "Install: Python dependencies"
        run: |
          set -eux
          python --version
          python -m pip install --upgrade pip
          pip install --upgrade pdm
          if [[ ! -e .venv ]]; then
            pdm venv create || true
          fi
          pdm sync --clean -v

      - name: Update Python dependencies
        id: update_deps
        run: |
          set -euo pipefail
          set -x
          exec 5>&1
          updates=$(pdm update --update-all 2>&1 | tee /dev/fd/5)
          if git diff --exit-code pdm.lock; then
            echo "updates=" >> "$GITHUB_OUTPUT"
          else
            updates="$(echo "$updates" | grep Update | grep -v kleinanzeigen-bot || true)"
            if [[ $(wc -l <<< "$updates") -eq 1 ]]; then
              echo "title=$(echo "$updates" | head -n 1 | sed 's/ successful//')" >> "${GITHUB_OUTPUT}"
            else
              echo "title=Update Python dependencies" >> "${GITHUB_OUTPUT}"
            fi
            # https://github.com/orgs/community/discussions/26288#discussioncomment-3876281
            delimiter="$(openssl rand -hex 8)"
            echo "updates<<${delimiter}" >> "${GITHUB_OUTPUT}"
            echo "$updates" >> "${GITHUB_OUTPUT}"
            echo "${delimiter}" >> "${GITHUB_OUTPUT}"
          fi

      - name: Create PR
        uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v7.0.5
        # https://github.com/peter-evans/create-pull-request
        if: "${{ steps.update_deps.outputs.updates != '' }}"
        with:
          title: "chore: ${{ steps.update_deps.outputs.title }}"
          author: "github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>"
          committer: "github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>"
          commit-message: "chore: ${{ steps.update_deps.outputs.title }}"
          body: ${{ steps.update_deps.outputs.updates }}
          add-paths: pdm.lock
          branch: dependencies/pdm
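The "Update Python dependencies" step above writes a multiline value to `$GITHUB_OUTPUT` using a random heredoc-style delimiter (per the linked community discussion). The technique can be tried outside Actions by pointing `GITHUB_OUTPUT` at a temporary file; the package names below are made-up sample data:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the output file the Actions runner provides via $GITHUB_OUTPUT.
GITHUB_OUTPUT="$(mktemp)"

# Sample multiline value (illustrative package names, not real update output).
updates=$'Update requests 2.31.0 -> 2.32.0\nUpdate ruff 0.4.0 -> 0.5.0'

# A random delimiter ensures the value itself cannot terminate the block early.
delimiter="$(openssl rand -hex 8)"
{
  echo "updates<<${delimiter}"
  echo "$updates"
  echo "${delimiter}"
} >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```

The plain `key=value` form only supports single-line values; the `key<<DELIMITER` form is GitHub's documented syntax for multiline step outputs.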

.github/workflows/validate-pr-title.yml

@@ -0,0 +1,49 @@
# https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions
name: "Validate PR Title"

on:
  pull_request_target:
    types:
      - opened
      - edited
      - synchronize
      - reopened

jobs:
  build:
    permissions:
      contents: read
      pull-requests: write
    runs-on: ubuntu-latest

    steps:
      - name: "Validate semantic PR title"
        uses: amannn/action-semantic-pull-request@48f256284bd46cdaab1048c3721360e808335d50 # v6.0.0
        # https://github.com/amannn/action-semantic-pull-request
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          # https://mazer.dev/en/git/best-practices/git-semantic-commits/
          # https://github.com/commitizen/conventional-commit-types/blob/master/index.json
          types: |
            build
            ci
            chore
            docs
            fix
            enh
            feat
            refact
            revert
            perf
            style
            test
          scopes: |
            deps
            i18n
          requireScope: false

      - name: "Label PR"
        uses: srvaroa/labeler@bf262763a8a8e191f5847873aecc0f29df84f957 # v1.14.0
        # https://github.com/srvaroa/labeler
        env:
          GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

.gitignore

@@ -1,8 +1,13 @@
# Local work folder that is not checked in
_LOCAL/

.*
!.act-event.json
!.actrc
!.gitattribute
!.gitignore
!.github/
!.markdownlint-cli2.jsonc

# kleinanzeigen_bot
/config.yaml
@@ -12,34 +17,14 @@ _LOCAL/
downloaded-ads

# python
__pycache__
/dist

# IntelliJ
/*.iml
/*.ipr
/*.iws

# Vim
*.swo
*.swp

.markdownlint-cli2.jsonc

@@ -0,0 +1,11 @@
{
  "$schema": "https://raw.githubusercontent.com/DavidAnson/markdownlint-cli2/main/schema/markdownlint-cli2-config-schema.json",
  "config": {
    "MD013": false,
    "MD033": false
  },
  "ignores": [
    "CODE_OF_CONDUCT.md",
    "data/"
  ]
}


@@ -1,132 +1,89 @@
# Contributor Covenant Code of Conduct # Contributor Covenant 3.0 Code of Conduct
## Our Pledge ## Our Pledge
We as members, contributors, and leaders pledge to make participation in our We pledge to make our community welcoming, safe, and equitable for all.
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, We are committed to fostering an environment that respects and promotes the dignity, rights, and contributions of all individuals, regardless of characteristics including race, ethnicity, caste, color, age, physical characteristics, neurodiversity, disability, sex or gender, gender identity or expression, sexual orientation, language, philosophy or religion, national or social origin, socio-economic position, level of education, or other status. The same privileges of participation are extended to everyone who participates in good faith and in accordance with this Covenant.
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our ## Encouraged Behaviors
community include:
* Demonstrating empathy and kindness toward other people While acknowledging differences in social norms, we all strive to meet our community's expectations for positive behavior. We also understand that our words and actions may be interpreted differently than we intend based on culture, background, or native language.
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include: With these considerations in mind, we agree to behave mindfully toward each other and act in ways that center our shared values, including:
* The use of sexualized language or imagery, and sexual attention or advances of 1. Respecting the **purpose of our community**, our activities, and our ways of gathering.
any kind 2. Engaging **kindly and honestly** with others.
* Trolling, insulting or derogatory comments, and personal or political attacks 3. Respecting **different viewpoints** and experiences.
* Public or private harassment 4. **Taking responsibility** for our actions and contributions.
* Publishing others' private information, such as a physical or email address, 5. Gracefully giving and accepting **constructive feedback**.
without their explicit permission 6. Committing to **repairing harm** when it occurs.
* Other conduct which could reasonably be considered inappropriate in a 7. Behaving in other ways that promote and sustain the **well-being of our community**.
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of ## Restricted Behaviors
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive, We agree to restrict the following behaviors in our community. Instances, threats, and promotion of these behaviors are violations of this Code of Conduct.
or harmful.
1. **Harassment.** Violating explicitly expressed boundaries or engaging in unnecessary personal attention after any clear request to stop.
2. **Character attacks.** Making insulting, demeaning, or pejorative comments directed at a community member or group of people.
3. **Stereotyping or discrimination.** Characterizing anyones personality or behavior on the basis of immutable identities or traits.
4. **Sexualization.** Behaving in a way that would generally be considered inappropriately intimate in the context or purpose of the community.
5. **Violating confidentiality**. Sharing or acting on someone's personal or private information without their permission.
6. **Endangerment.** Causing, encouraging, or threatening violence or other harm toward any person or group.
7. Behaving in other ways that **threaten the well-being** of our community.
### Other Restrictions
1. **Misleading identity.** Impersonating someone else for any reason, or pretending to be someone else to evade enforcement actions.
2. **Failing to credit sources.** Not properly crediting the sources of content you contribute.
3. **Promotional materials**. Sharing marketing or other commercial content in a way that is outside the norms of the community.
4. **Irresponsible communication.** Failing to responsibly present content which includes, links or describes any other restricted behaviors.
## Reporting an Issue
Tensions can occur between community members even when they are trying their best to collaborate. Not every conflict represents a code of conduct violation, and this Code of Conduct reinforces encouraged behaviors and norms that can help avoid conflicts and minimize harm.
When an incident does occur, it is important to report it promptly. To report a possible violation, open an issue at https://github.com/Second-Hand-Friends/kleinanzeigen-bot/issues
Community Moderators take reports of violations seriously and will make every effort to respond in a timely manner. They will investigate all reports of code of conduct violations, reviewing messages, logs, and recordings, or interviewing witnesses and other participants. Community Moderators will keep investigation and enforcement actions as transparent as possible while prioritizing safety and confidentiality. In order to honor these values, enforcement actions are carried out in private with the involved parties, but communicating to the whole community may be part of a mutually agreed upon resolution.
## Addressing and Repairing Harm
If an investigation by the Community Moderators finds that this Code of Conduct has been violated, the following enforcement ladder may be used to determine how best to repair harm, based on the incident's impact on the individuals involved and the community as a whole. Depending on the severity of a violation, lower rungs on the ladder may be skipped.
1) Warning
   1) Event: A violation involving a single incident or series of incidents.
   2) Consequence: A private, written warning from the Community Moderators.
   3) Repair: Examples of repair include a private written apology, acknowledgement of responsibility, and seeking clarification on expectations.
2) Temporarily Limited Activities
   1) Event: A repeated incidence of a violation that previously resulted in a warning, or the first incidence of a more serious violation.
   2) Consequence: A private, written warning with a time-limited cooldown period designed to underscore the seriousness of the situation and give the community members involved time to process the incident. The cooldown period may be limited to particular communication channels or interactions with particular community members.
   3) Repair: Examples of repair may include making an apology, using the cooldown period to reflect on actions and impact, and being thoughtful about re-entering community spaces after the period is over.
3) Temporary Suspension
   1) Event: A pattern of repeated violation which the Community Moderators have tried to address with warnings, or a single serious violation.
   2) Consequence: A private, written warning with conditions for return from suspension. In general, temporary suspensions give the person being suspended time to reflect upon their behavior and possible corrective actions.
   3) Repair: Examples of repair include respecting the spirit of the suspension, meeting the specified conditions for return, and being thoughtful about how to reintegrate with the community when the suspension is lifted.
4) Permanent Ban
   1) Event: A pattern of repeated code of conduct violations that other steps on the ladder have failed to resolve, or a violation so serious that the Community Moderators determine there is no way to keep the community safe with this person as a member.
   2) Consequence: Access to all community spaces, tools, and communication channels is removed. In general, permanent bans should be rarely used, should have strong reasoning behind them, and should only be resorted to if working through other remedies has failed to change the behavior.
   3) Repair: There is no possible repair in cases of this severity.
This enforcement ladder is intended as a guideline. It does not limit the ability of Community Moderators to use their discretion and judgment, in keeping with the best interests of our community.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public or other spaces. Examples of representing our community include using an official email address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
https://github.com/Second-Hand-Friends/kleinanzeigen-bot/issues
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the Contributor Covenant, version 3.0, permanently available at [https://www.contributor-covenant.org/version/3/0/](https://www.contributor-covenant.org/version/3/0/).
Contributor Covenant is stewarded by the Organization for Ethical Source and licensed under CC BY-SA 4.0. To view a copy of this license, visit [https://creativecommons.org/licenses/by-sa/4.0/](https://creativecommons.org/licenses/by-sa/4.0/).
For answers to common questions about Contributor Covenant, see the FAQ at [https://www.contributor-covenant.org/faq](https://www.contributor-covenant.org/faq). Translations are provided at [https://www.contributor-covenant.org/translations](https://www.contributor-covenant.org/translations). Additional enforcement and community guideline resources can be found at [https://www.contributor-covenant.org/resources](https://www.contributor-covenant.org/resources). The enforcement ladder was inspired by the work of [Mozilla's code of conduct team](https://github.com/mozilla/inclusion).

Thanks for your interest in contributing to this project! Whether it's a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.
We want to make contributing as easy and transparent as possible. Contributions via [pull requests](#pull-request-requirements) are much appreciated.
Please read through this document before submitting any contributions to ensure your contribution goes to the correct code repository and we have all the necessary information to effectively respond to your request.
## Table of Contents
- [Development Setup](#development-setup)
- [Development Notes](#development-notes)
- [Development Workflow](#development-workflow)
- [Testing Requirements](#testing-requirements)
- [Code Quality Standards](#code-quality-standards)
- [Bug Reports](#bug-reports)
- [Feature Requests](#feature-requests)
- [Pull Request Requirements](#pull-request-requirements)
- [Performance Considerations](#performance-considerations)
- [Security and Best Practices](#security-and-best-practices)
- [Licensing](#licensing)
- [Internationalization (i18n) and Translations](#internationalization-i18n-and-translations)
## Development Setup
### Prerequisites
- Python 3.10 or higher
- PDM for dependency management
- Git
### Local Setup
1. Fork and clone the repository
1. Install dependencies: `pdm install`
1. Run tests to verify setup: `pdm run test`
## Development Notes
This section provides quick reference commands for common development tasks. See 'Testing Requirements' below for more details on running and organizing tests.
- Format source code: `pdm run format`
- Run tests: `pdm run test`
- Run syntax checks: `pdm run lint`
- Linting issues found by ruff can be auto-fixed using `pdm run lint:fix`
- Derive JSON schema files from Pydantic data model: `pdm run generate-schemas`
- Create platform-specific executable: `pdm run compile`
- Application bootstrap works like this:
```text
pdm run app
|-> executes 'python -m kleinanzeigen_bot'
|-> executes 'kleinanzeigen_bot/__main__.py'
|-> executes main() function of 'kleinanzeigen_bot/__init__.py'
|-> executes KleinanzeigenBot().run()
```
## Development Workflow
### Before Submitting
1. **Format your code**: Ensure your code is auto-formatted
```bash
pdm run format
```
1. **Lint your code**: Check for linting errors and warnings
```bash
pdm run lint
```
1. **Run tests**: Ensure all tests pass locally
```bash
pdm run test
```
1. **Check code quality**: Verify your code follows project standards
- Type hints are complete
- Docstrings are present
- SPDX headers are included
- Imports are properly organized
1. **Test your changes**: Add appropriate tests for new functionality
- Add smoke tests for critical paths
- Add unit tests for new components
- Add integration tests for external dependencies
### Commit Messages
Use clear, descriptive commit messages that explain:
- What was changed
- Why it was changed
- Any breaking changes or important notes
Example:
```text
feat: add smoke test for bot startup
- Add test_bot_starts_without_crashing to verify core workflow
- Use DummyBrowser to avoid real browser dependencies
- Follows existing smoke test patterns in tests/smoke/
```
## Testing Requirements
This project uses a comprehensive testing strategy with three test types:
### Test Types
- **Unit tests** (`tests/unit/`): Isolated component tests with mocks. Run first.
- **Integration tests** (`tests/integration/`): Tests with real external dependencies. Run after unit tests.
- **Smoke tests** (`tests/smoke/`): Minimal, post-deployment health checks that verify the most essential workflows (e.g., app starts, config loads, login page reachable). Run after integration tests. Smoke tests are not end-to-end (E2E) tests and should not cover full user workflows.
### Running Tests
```bash
# Canonical unified run (quiet by default, coverage enabled)
pdm run test
pdm run test -v
pdm run test -vv
# Run specific test types
pdm run utest # Unit tests only
pdm run itest # Integration tests only
pdm run smoke # Smoke tests only
```
### Adding New Tests
1. **Determine test type** based on what you're testing:
- **Smoke tests**: Minimal, critical health checks (not full user workflows)
- **Unit tests**: Individual components, isolated functionality
- **Integration tests**: External dependencies, real network calls
1. **Place in correct directory**:
- `tests/smoke/` for smoke tests
- `tests/unit/` for unit tests
- `tests/integration/` for integration tests
1. **Add proper markers**:
```python
@pytest.mark.smoke # For smoke tests
@pytest.mark.itest # For integration tests
@pytest.mark.asyncio # For async tests
```
1. **Use existing fixtures** when possible (see `tests/conftest.py`)
For detailed testing guidelines, see [docs/TESTING.md](docs/TESTING.md).
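As a sketch of how these pieces fit together, a minimal smoke test might look like this (the config keys are illustrative, not the project's real schema):

```python
import pytest


@pytest.mark.smoke
def test_default_config_has_required_sections() -> None:
    """Minimal health check: the defaults expose the expected top-level keys."""
    # Hypothetical stand-in for the real config loader; the actual project
    # loads its YAML configuration through its own utilities.
    config = {"login": {}, "ad_defaults": {}, "publishing": {}}
    assert "login" in config
    assert "ad_defaults" in config
```

Note how the test stays fast and isolated: no browser, no network, just the most essential invariant.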
## Code Quality Standards
### File Headers
All Python files must start with SPDX license headers:
```python
# SPDX-FileCopyrightText: © <your name> and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
```
### Import Organization
- Use absolute imports for project modules: `from kleinanzeigen_bot import KleinanzeigenBot`
- Use relative imports for test utilities: `from tests.conftest import SmokeKleinanzeigenBot`
- Group imports: standard library, third-party, local (with blank lines between groups)
### Type Hints
- Always use type hints for function parameters and return values
- Use `Any` from `typing` for complex types
- Use `Final` for constants
- Use `cast()` when type checker needs help
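A small illustration of these conventions (the function and constant below are invented for this example, not part of the codebase):

```python
from typing import Any, Final, cast

DEFAULT_TIMEOUT: Final[float] = 10.0  # Final marks this as a constant


def get_timeout(config: dict[str, Any], key: str) -> float:
    """Look up a timeout value, falling back to the default."""
    value = config.get(key, DEFAULT_TIMEOUT)
    # The dict values are typed as Any, so cast() tells the type checker
    # what we expect at runtime:
    return cast(float, value)
```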
### Documentation
#### Docstrings
- Use docstrings for **complex functions and classes that need explanation**
- Include examples in docstrings for complex functions (see `utils/misc.py` for examples)
#### Comments
- **Use comments to explain your code logic and reasoning**
- Comment on complex algorithms, business logic, and non-obvious decisions
- Explain "why" not just "what" - the reasoning behind implementation choices
- Use comments for edge cases, workarounds, and platform-specific code
#### Module Documentation
- Add module docstrings for packages and complex modules
- Document the purpose and contents of each module
#### Model Documentation
- Use `Field(description="...")` for Pydantic model fields to document their purpose
- Include examples in field descriptions for complex configurations
- Document validation rules and constraints
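For instance, a field description might look like this (assuming Pydantic is installed; the model and field names here are illustrative, not the project's real ad schema):

```python
from pydantic import BaseModel, Field


class AdConfigSketch(BaseModel):
    """Illustrative model; not the project's real configuration schema."""
    republication_interval: int = Field(
        default = 7,
        description = "Days to wait before republishing an ad, e.g. 7 for weekly republication",
    )
```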
#### Logging
- Use structured logging with `loggers.get_logger()`
- Include context in log messages to help with debugging
- Use appropriate log levels (DEBUG, INFO, WARNING, ERROR)
- Log important state changes and decision points
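For example, using the standard library logger as a stand-in for `loggers.get_logger()` (the function and message below are illustrative):

```python
import logging

# Stand-in for the project's loggers.get_logger(); the real logger also
# auto-translates messages via TranslatingLogger.
LOG = logging.getLogger("kleinanzeigen_bot.example")


def publish_ad(ad_id: str, *, dry_run: bool) -> None:
    # Include identifying context (ad id, mode) so a single log line is
    # enough to reconstruct what happened while debugging
    LOG.info("Publishing ad [%s] (dry_run=%s)", ad_id, dry_run)
```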
#### Timeout configuration
- The default timeout (`timeouts.default`) already wraps all standard DOM helpers (`web_find`, `web_click`, etc.) via `WebScrapingMixin._timeout/_effective_timeout`. Use it unless a workflow clearly needs a different SLA.
- Reserve `timeouts.quick_dom` for transient overlays (shipping dialogs, payment prompts, toast banners) that should render almost instantly; call `self._timeout("quick_dom")` in those spots to keep the UI responsive.
- For single selectors that occasionally need more headroom, pass an inline override instead of creating a new config key, e.g. `custom = self._timeout(override = 12.5); await self.web_find(..., timeout = custom)`.
- Use `_timeout()` when you just need the raw configured value (with optional override); use `_effective_timeout()` when you rely on the global multiplier and retry backoff for a given attempt (e.g. inside `_run_with_timeout_retries`).
- Add a new timeout key only when a recurring workflow has its own timing profile (pagination, captcha detection, publishing confirmations, Chrome probes, etc.). Whenever you add one, extend `TimeoutConfig`, document it in the sample `timeouts:` block in `docs/CONFIGURATION.md`, and explain it in `docs/BROWSER_TROUBLESHOOTING.md`.
- Encourage users to raise `timeouts.multiplier` when everything is slow, and override existing keys in `config.yaml` before introducing new ones. This keeps the configuration surface minimal.
#### Examples
```python
import re
from datetime import timedelta


def parse_duration(text: str) -> timedelta:
    """
    Parses a human-readable duration string into a datetime.timedelta.

    Supported units:
    - d: days
    - h: hours
    - m: minutes
    - s: seconds

    Examples:
    >>> parse_duration("1h 30m")
    datetime.timedelta(seconds=5400)
    """
    # Use regex to find all duration parts
    pattern = re.compile(r"(\d+)\s*([dhms])")
    parts = pattern.findall(text.lower())
    # Build timedelta from parsed parts
    kwargs: dict[str, int] = {}
    for value, unit in parts:
        if unit == "d":
            kwargs["days"] = kwargs.get("days", 0) + int(value)
        elif unit == "h":
            kwargs["hours"] = kwargs.get("hours", 0) + int(value)
        elif unit == "m":
            kwargs["minutes"] = kwargs.get("minutes", 0) + int(value)
        elif unit == "s":
            kwargs["seconds"] = kwargs.get("seconds", 0) + int(value)
    return timedelta(**kwargs)
```
### Error Handling
- Use specific exception types when possible
- Include meaningful error messages
- Use `pytest.fail()` with descriptive messages in tests
- Use `pyright: ignore[reportAttributeAccessIssue]` for known type checker issues
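A short sketch of the first two points (the function, path handling, and error text are illustrative, not the project's real code):

```python
from pathlib import Path


def read_config_text(path: Path) -> str:
    """Read a config file, failing with a specific, actionable error."""
    if not path.is_file():
        # Specific exception type plus a meaningful message, instead of a
        # bare Exception with no context
        raise FileNotFoundError(f"Config file not found: {path} - run 'create-config' to generate one")
    return path.read_text(encoding = "utf-8")
```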
## Reporting Bugs/Feature Requests
We use GitHub issues to track bugs and feature requests. Please ensure your description is clear and has sufficient instructions to be able to reproduce the issue.
### Bug Reports
When reporting a bug, please ensure you:
- Confirm the issue is reproducible on the latest release
- Clearly describe the expected and actual behavior
- Provide detailed steps to reproduce the issue
- Include relevant log output if available
- Specify your operating system and browser (if applicable)
- Agree to the project's Code of Conduct
This helps maintainers quickly triage and address issues.
### Feature Requests
Include:
- Clear description of the desired feature
- Use case or problem it solves
- Any implementation ideas or considerations
## Pull Request Requirements
Before submitting a pull request, please ensure you:
1. **Work from the latest source on the main branch**
1. **Create a feature branch** for your changes: `git checkout -b feature/your-feature-name`
1. **Format your code**: `pdm run format`
1. **Lint your code**: `pdm run lint`
1. **Run all tests**: `pdm run test`
1. **Check code quality**: Type hints, docstrings, SPDX headers, import organization
1. **Add appropriate tests** for new functionality (smoke/unit/integration as needed)
1. **Write clear, descriptive commit messages**
1. **Provide a concise summary and motivation for the change in the PR**
1. **List all key changes and dependencies**
1. **Select the correct type(s) of change** (bug fix, feature, breaking change)
1. **Complete the checklist in the PR template**
1. **Confirm your contribution can be used under the project license**
See the [Pull Request template](.github/PULL_REQUEST_TEMPLATE.md) for the full checklist and required fields.
To submit a pull request:
- Fork our repository
- Push your feature branch to your fork
- Open a pull request on GitHub, answering any default questions in the interface
GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
## Performance Considerations
- **Smoke tests** should be fast (< 1 second each)
- **Unit tests** should be isolated and fast
- **Integration tests** can be slower but should be minimal
- Use fakes/dummies to avoid real network calls in tests
## Security and Best Practices
- Never commit real credentials in tests
- Use temporary files and directories for test data
- Clean up resources in test teardown
- Use environment variables for configuration
- Follow the principle of least privilege in test setup
## Licensing
See the [LICENSE.txt](LICENSE.txt) file for our project's licensing. All source files must include SPDX license headers as described above. We will ask you to confirm the licensing of your contribution.
## Internationalization (i18n) and Translations
- All user-facing output (log messages, print statements, CLI help, etc.) must be written in **English**.
- For every user-facing message, a **German translation** must be added to `src/kleinanzeigen_bot/resources/translations.de.yaml`.
- Log messages are auto-translated by `TranslatingLogger`; do not wrap `LOG.*`/`logger.*` message strings with `_()`.
- Non-log user-facing strings (e.g., `print`, `ainput`, exceptions, validation messages) should use `_()`.
- Use the translation system for all output—**never hardcode German or other languages** in the code.
- If you add or change a user-facing message, update the translation file and ensure that translation completeness tests pass (`tests/unit/test_translations.py`).
- Review the translation guidelines and patterns in the codebase for correct usage.

# kleinanzeigen-bot
[![Build Status](https://github.com/Second-Hand-Friends/kleinanzeigen-bot/actions/workflows/build.yml/badge.svg)](https://github.com/Second-Hand-Friends/kleinanzeigen-bot/actions/workflows/build.yml)
[![License](https://img.shields.io/github/license/Second-Hand-Friends/kleinanzeigen-bot.svg?color=blue)](LICENSE.txt)
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v3.0%20adopted-ff69b4.svg)](CODE_OF_CONDUCT.md)
[![codecov](https://codecov.io/github/Second-Hand-Friends/kleinanzeigen-bot/graph/badge.svg?token=SKLDTVWHVK)](https://codecov.io/github/Second-Hand-Friends/kleinanzeigen-bot)
<!--[![Maintainability](https://qlty.sh/badges/69ff94b8-90e1-4096-91ed-3bcecf0b0597/maintainability.svg)](https://qlty.sh/gh/Second-Hand-Friends/projects/kleinanzeigen-bot)-->

**Feedback and high-quality pull requests are highly welcome!**
1. [About](#about)
1. [Installation](#installation)
1. [Usage](#usage)
1. [Configuration](#config)
   1. [Main configuration](#main-config)
   1. [Ad configuration](#ad-config)
   1. [Using an existing browser window](#existing-browser)
   1. [Browser Connection Issues](#browser-connection-issues)
1. [Development Notes](#development)
1. [Related Open-Source Projects](#related)
1. [License](#license)
## <a name="about"></a>About

**kleinanzeigen-bot** is a command-line application to **publish, update, delete, and republish listings** on kleinanzeigen.de.

### Key Features

- **Automated Publishing**: Publish new listings from YAML/JSON configuration files
- **Smart Republishing**: Automatically republish listings at configurable intervals to keep them at the top of search results
- **Bulk Management**: Update or delete multiple listings at once
- **Download Listings**: Download existing listings from your profile to local configuration files
- **Extend Listings**: Extend ads close to expiry to keep watchers/savers and preserve the monthly ad quota
- **Browser Automation**: Uses Chromium-based browsers (Chrome, Edge, Chromium) for reliable automation
- **Flexible Configuration**: Configure defaults once, override per listing as needed
### ⚠️ Legal Disclaimer
The use of this program could violate the terms of service of kleinanzeigen.de applicable at the time of use.
It is your responsibility to ensure the legal compliance of its use.
The developers assume no liability for any damages or legal consequences.
Use is at your own risk. Any unlawful use is strictly prohibited.
### ⚠️ Rechtliche Hinweise
Die Verwendung dieses Programms kann unter Umständen gegen die zum jeweiligen Zeitpunkt bei kleinanzeigen.de geltenden Nutzungsbedingungen verstoßen.
Es liegt in Ihrer Verantwortung, die rechtliche Zulässigkeit der Nutzung dieses Programms zu prüfen.
Die Entwickler übernehmen keinerlei Haftung für mögliche Schäden oder rechtliche Konsequenzen.
Die Nutzung erfolgt auf eigenes Risiko. Jede rechtswidrige Verwendung ist untersagt.
## <a name="installation"></a>Installation

### Installation using pre-compiled exe

1. The following components need to be installed:
   1. [Chromium](https://www.chromium.org/getting-involved/download-chromium), [Google Chrome](https://www.google.com/chrome/), or Chromium-based [Microsoft Edge](https://www.microsoft.com/edge) browser
1. Open a command/terminal window
1. Download and run the app by entering the following commands:
   1. On Windows:

      ```batch
      curl -L https://github.com/Second-Hand-Friends/kleinanzeigen-bot/releases/download/latest/kleinanzeigen-bot-windows-amd64.exe -o kleinanzeigen-bot.exe
      kleinanzeigen-bot --help
      ```
   1. On Linux:

      ```shell
      curl -L https://github.com/Second-Hand-Friends/kleinanzeigen-bot/releases/download/latest/kleinanzeigen-bot-linux-amd64 -o kleinanzeigen-bot
      chmod 755 kleinanzeigen-bot
      ./kleinanzeigen-bot --help
      ```
   1. On macOS:

      ```shell
      curl -L https://github.com/Second-Hand-Friends/kleinanzeigen-bot/releases/download/latest/kleinanzeigen-bot-darwin-amd64 -o kleinanzeigen-bot
      chmod 755 kleinanzeigen-bot
      ./kleinanzeigen-bot --help
      ```
### Installation using Docker

1. The following components need to be installed:
   1. [Docker](https://www.docker.com/)
   1. [Bash](https://www.gnu.org/software/bash/) (on Windows e.g. via [Cygwin](https://www.cygwin.com/), [MSys2](https://www.msys2.org/) or git)
   1. [X11 - X Window System](https://en.wikipedia.org/wiki/X_Window_System) display server (on Windows e.g. [Portable-X-Server](https://github.com/P-St/Portable-X-Server/releases/latest))

**Running the docker image:**

1. Ensure the X11 Server is running
1. Run the docker image:
### Installation from source

1. The following components need to be installed:
   1. [Chromium](https://www.chromium.org/getting-involved/download-chromium), [Google Chrome](https://www.google.com/chrome/), or Chromium-based [Microsoft Edge](https://www.microsoft.com/edge) browser
   1. [Python](https://www.python.org/) **3.10** or newer
   1. [pip](https://pypi.org/project/pip/)
   1. [git client](https://git-scm.com/downloads)
1. Open a command/terminal window
1. Clone the repo using

   ```bash
   git clone https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
   ```
1. Change into the directory:

   ```bash
   cd kleinanzeigen-bot
   ```
1. Install the Python dependencies using:

   ```bash
   pip install pdm
   pdm install
   ```
1. Run the app:

   ```bash
   pdm run app --help
   ```

### Installation from source using Docker

1. The following components need to be installed:
   1. [Docker](https://www.docker.com/)
   1. [git client](https://git-scm.com/downloads)
   1. [Bash](https://www.gnu.org/software/bash/) (on Windows e.g. via [Cygwin](https://www.cygwin.com/), [MSys2](https://www.msys2.org/) or git)
   1. [X11 - X Window System](https://en.wikipedia.org/wiki/X_Window_System) display server (on Windows e.g. [Portable-X-Server](https://github.com/P-St/Portable-X-Server/releases/latest))
1. Clone the repo using

   ```bash
   git clone https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
   ```
1. Execute `bash build-image.sh`
1. Ensure the image is built:

   ```text
   $ docker image ls
   REPOSITORY                              TAG     IMAGE ID      CREATED       SIZE
   second-hand-friends/kleinanzeigen-bot   latest  c31fd256eeea  1 minute ago  590MB
   ```

**Running the docker image:**

1. Ensure the X11 Server is running
1. Run the docker image:
   --help
   ```
## <a name="usage"></a>Usage

```console
Usage: kleinanzeigen-bot COMMAND [OPTIONS]

Commands:
  publish - (re-)publishes ads
  verify - verifies the configuration files
  delete - deletes ads
  update - updates published ads
  download - downloads one or multiple ads
  extend - extends active ads that expire soon (keeps watchers/savers and does not count towards the monthly ad quota)
  update-check - checks for available updates
  update-content-hash - recalculates each ad's content_hash based on the current ad_defaults;
                        use this after changing config.yaml/ad_defaults to avoid every ad being marked "changed" and republished
  create-config - creates a new default configuration file if one does not exist
  diagnose - diagnoses browser connection issues and shows troubleshooting information
  --
  help - displays this help (default command)
  version - displays the application version

Options:
  --ads=all|due|new|changed|<id(s)> (publish) - specifies which ads to (re-)publish (DEFAULT: due)
      Possible values:
      * all: (re-)publish all ads ignoring republication_interval
      * due: publish all new ads and republish ads according to the republication_interval
      * new: only publish new ads (i.e. ads that have no id in the config file)
      * changed: only publish ads that have been modified since last publication
      * <id(s)>: provide one or several ads by ID to (re-)publish, like e.g. "--ads=1,2,3" ignoring republication_interval
      * Combinations: You can combine multiple selectors with commas, e.g. "--ads=changed,due" to publish both changed and due ads
  --ads=all|new|<id(s)> (download) - specifies which ads to download (DEFAULT: new)
      Possible values:
      * all: downloads all ads from your profile
      * new: downloads ads from your profile that are not locally saved yet
      * <id(s)>: provide one or several ads by ID to download, like e.g. "--ads=1,2,3"
  --ads=all|<id(s)> (extend) - specifies which ads to extend (DEFAULT: all)
      Possible values:
      * all: extend all eligible ads in your profile
      * <id(s)>: provide one or several ads by ID to extend, like e.g. "--ads=1,2,3"
      * Note: kleinanzeigen.de only allows extending ads within 8 days of expiry; ads outside this window are skipped.
  --ads=changed|<id(s)> (update) - specifies which ads to update (DEFAULT: changed)
      Possible values:
      * changed: only update ads that have been modified since last publication
      * <id(s)>: provide one or several ads by ID to update, like e.g. "--ads=1,2,3"
  --force - alias for '--ads=all'
  --keep-old - don't delete old ads on republication
  --config=<PATH> - path to the config YAML or JSON file (does not implicitly change workspace mode)
  --workspace-mode=portable|xdg - overrides workspace mode for this run
  --logfile=<PATH> - path to the logfile (DEFAULT: depends on active workspace mode)
  --lang=en|de - display language (DEFAULT: system language if supported, otherwise English)
  -v, --verbose - enables verbose output - only useful when troubleshooting issues
```
> **Note:** The output of `kleinanzeigen-bot help` is always the most up-to-date reference for available commands and options.
Limitation of `download`: It's only possible to extract the cheapest given shipping option.
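The 8-day extend window mentioned in the help output above can be sketched as a simple eligibility check (illustrative only; the function name is hypothetical, not the bot's actual API):

```python
from datetime import datetime, timedelta

def extend_eligible(expires_at: datetime, now: datetime) -> bool:
    # kleinanzeigen.de only allows extending within 8 days of expiry;
    # already-expired ads are not eligible either.
    return timedelta(0) <= expires_at - now <= timedelta(days=8)

now = datetime(2026, 3, 1, 12, 0)
print(extend_eligible(now + timedelta(days=3), now))  # True
print(extend_eligible(now + timedelta(days=9), now))  # False (outside the window)
```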
## <a name="config"></a>Configuration

All configuration files can be in YAML or JSON format.

### Installation modes (portable vs. user directories)

On first run, the app may ask which installation mode to use. In non-interactive environments (CI/headless), it defaults to portable mode and will not prompt.

Path resolution rules:

- Runtime files are mode-dependent write locations (for example, logfile, update state, browser profile/cache, diagnostics, and downloaded ads).
- `--config` selects only the config file; it does not silently switch workspace mode.
- `--workspace-mode=portable`: runtime files are placed in the same directory as the active config file (or the current working directory if no `--config` is supplied).
- `--workspace-mode=xdg`: runtime files use OS-standard user directories.
- `--config` without `--workspace-mode`: mode is inferred from existing footprints; on ambiguity/unknown, the command fails with guidance (for example: `Could not infer workspace mode for --config ...`) and asks you to rerun with `--workspace-mode=portable` or `--workspace-mode=xdg`.
Examples:
- `kleinanzeigen-bot --config /sync/dropbox/config1.yaml verify` (no `--workspace-mode`): mode is inferred from detected footprints; if both portable and user-directories footprints are found (or none are found), the command fails and lists the found paths.
- `kleinanzeigen-bot --workspace-mode=portable --config /sync/dropbox/config1.yaml verify`: runtime files are rooted at `/sync/dropbox/` (for example `/sync/dropbox/.temp/` and `/sync/dropbox/downloaded-ads/`).
- `kleinanzeigen-bot --workspace-mode=xdg --config /sync/dropbox/config1.yaml verify`: config is read from `/sync/dropbox/config1.yaml`, while runtime files stay in user directories (on Linux: `~/.config/kleinanzeigen-bot/`, `~/.local/state/kleinanzeigen-bot/`, `~/.cache/kleinanzeigen-bot/`).
1. **Portable mode (recommended for most users, especially on Windows):**
- Stores config, logs, downloads, and state in the current working directory
- No admin permissions required
- Easy backup/migration; works from USB drives
1. **User directories mode (advanced users / multi-user setups):**
- Stores files in OS-standard locations
- Cleaner directory structure; better separation from working directory
- Requires proper permissions for user data directories
**OS notes (brief):**
- **Windows:** User directories mode uses AppData (Roaming/Local); portable keeps everything alongside the `.exe`.
- **Linux:** User directories mode uses `~/.config/kleinanzeigen-bot/config.yaml`, `~/.local/state/kleinanzeigen-bot/`, and `~/.cache/kleinanzeigen-bot/`; portable uses `./config.yaml`, `./.temp/`, and `./downloaded-ads/`.
- **macOS:** User directories mode uses `~/Library/Application Support/kleinanzeigen-bot/config.yaml` (config), `~/Library/Application Support/kleinanzeigen-bot/` (state/runtime), and `~/Library/Caches/kleinanzeigen-bot/` (cache/diagnostics); portable stays in the current working directory.
If you have footprints from both modes (portable + XDG), pass an explicit mode (for example `--workspace-mode=portable`) and then clean up unused files. See [Configuration: Installation Modes](docs/CONFIGURATION.md#installation-modes).
### <a name="main-config"></a>1) Main configuration ⚙️
The main configuration file (`config.yaml`) is **required** to run the bot. It contains your login credentials and controls all bot behavior.
**Quick start:**
```bash
# Generate a config file with all defaults
kleinanzeigen-bot create-config
# Or specify a custom location
kleinanzeigen-bot --config /path/to/config.yaml publish
```
**Minimal config.yaml:**
```yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/config.schema.json
login:
  username: "your_username"
  password: "your_password"
```
📖 **[Complete Configuration Reference →](docs/CONFIGURATION.md)**

Full documentation including timeout tuning, browser settings, ad defaults, diagnostics, and all available options.

### <a name="ad-config"></a>2) Ad configuration 📝

Each ad is defined in a separate YAML/JSON file (default pattern: `ad_*.yaml`). These files specify the title, description, price, category, images, and other ad-specific settings.
**Quick example (`ad_laptop.yaml`):**
```yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/ad.schema.json
active: true
title: "Gaming Laptop - RTX 3060"
description: |
  Powerful gaming laptop in excellent condition.
  Includes original box and charger.
category: "Elektronik > Notebooks"
price: 450
price_type: NEGOTIABLE
images:
  - "laptop/*.jpg" # Relative to ad file location (or use absolute paths); glob patterns supported
```
📖 **[Complete Ad Configuration Reference →](docs/AD_CONFIGURATION.md)**
Full documentation including automatic price reduction, shipping options, category IDs, and special attributes.
### <a name="existing-browser"></a>3) Using an existing browser window (Optional)
By default, a new browser process is launched. To reuse a manually launched browser window/process, you can enable remote debugging. This is useful for debugging or when you want to keep your browser session open.
For detailed instructions on setting up remote debugging with Chrome 136+ security requirements, see [Browser Troubleshooting - Using an Existing Browser Window](docs/BROWSER_TROUBLESHOOTING.md#using-an-existing-browser-window).
### <a name="browser-connection-issues"></a>Browser Connection Issues
If you encounter browser connection problems, the bot includes a diagnostic command to help identify issues:
**For binary users:**
```bash
kleinanzeigen-bot diagnose
```
**For source users:**
```bash
pdm run app diagnose
```
This command will check your browser setup and provide troubleshooting information. For detailed solutions to common browser connection issues, see the [Browser Connection Troubleshooting Guide](docs/BROWSER_TROUBLESHOOTING.md).
## <a name="development"></a>Development Notes

> Please read [CONTRIBUTING.md](CONTRIBUTING.md) before contributing code. Thank you!
## <a name="related"></a>Related Open-Source projects
- [DanielWTE/ebay-kleinanzeigen-api](https://github.com/DanielWTE/ebay-kleinanzeigen-api) (Python) API interface to get random listings from kleinanzeigen.de
- [f-rolf/ebaykleinanzeiger](https://github.com/f-rolf/ebaykleinanzeiger) (Python) Discord bot that watches search results
- [r-unruh/kleinanzeigen-filter](https://github.com/r-unruh/kleinanzeigen-filter) (JavaScript) Chrome extension that filters out unwanted results from searches on kleinanzeigen.de
- [simonsagstetter/Feinanzeigen](https://github.com/simonsagstetter/feinanzeigen) (JavaScript) Chrome extension that improves search on kleinanzeigen.de
- [Superschnizel/Kleinanzeigen-Telegram-Bot](https://github.com/Superschnizel/Kleinanzeigen-Telegram-Bot) (Python) Telegram bot to scrape kleinanzeigen.de
- [tillvogt/KleinanzeigenScraper](https://github.com/tillvogt/KleinanzeigenScraper) (Python) Webscraper which stores scraped info from kleinanzeigen.de in an SQL database
- [TLINDEN/Kleingebäck](https://github.com/TLINDEN/kleingebaeck) (Go) kleinanzeigen.de Backup
## <a name="license"></a>License

All files in this repository are released under the [GNU Affero General Public License v3.0 or later](LICENSE.txt).

Individual files contain the following tag instead of the full license text:

```text
SPDX-License-Identifier: AGPL-3.0-or-later
```

This enables machine processing of license information based on the SPDX License Identifiers that are available here: <https://spdx.org/licenses/>.
codecov.yml (new file)
@@ -0,0 +1,46 @@
# https://docs.codecov.com/docs/codecovyml-reference
# https://json.schemastore.org/codecov.json
codecov:
  branch: main
  require_ci_to_pass: true
  notify:
    wait_for_ci: true

coverage:
  precision: 2
  round: down
  range: "70...100" # https://docs.codecov.com/docs/codecovyml-reference#coveragerange
  status:
    # Combined project coverage check (all flags together)
    project:
      default:
        target: 70% # Minimum 70% absolute coverage required
        threshold: 1.5% # Allow up to 1.5% coverage drop
        informational: false # Block PRs that fail this check
        # No flags specified = combines all flags (unit, integration, smoke)
    # Patch coverage: check coverage on changed lines only
    patch:
      default:
        target: 80% # Require 80% coverage on new/changed code
        threshold: 0% # Don't allow any drop in patch coverage
        informational: false # Block PRs that fail this check

# PR Comment Configuration
comment:
  layout: "header, diff, flags, files, footer" # Show comprehensive breakdown
  behavior: default # Update existing comment
  require_changes: false # Always post comment
  require_base: false # Post even without base report
  require_head: true # Only post if head report exists
  hide_project_coverage: false # Show both project and patch

# Flag configuration for visibility (not for status checks)
flags:
  unit-tests:
    carryforward: true # Reuse previous results if not run
  integration-tests:
    carryforward: true
  smoke-tests:
    carryforward: true
@@ -23,9 +23,15 @@ RUN <<EOF
apt-get update
echo "#################################################"
echo "Installing root CAs..."
echo "#################################################"
apt-get install --no-install-recommends -y ca-certificates
update-ca-certificates
echo "#################################################"
echo "Installing Chromium..."
echo "#################################################"
apt-get install --no-install-recommends -y chromium
apt-get clean autoclean
apt-get autoremove --purge -y
@@ -42,10 +48,11 @@ EOF
######################
# https://hub.docker.com/_/python/tags?name=3-slim
FROM python:3.14-slim AS build-image
ARG DEBIAN_FRONTEND=noninteractive
ARG LC_ALL=C
ARG GIT_COMMIT_HASH
SHELL ["/bin/bash", "-euo", "pipefail", "-c"]
docs/AD_CONFIGURATION.md (new file)
@@ -0,0 +1,330 @@
# Ad Configuration Reference
Complete reference for ad YAML files in kleinanzeigen-bot.
## File Format
Each ad is described in a separate JSON or YAML file with the default `ad_` prefix (for example, `ad_laptop.yaml`). You can customize the prefix via the `ad_files` pattern in `config.yaml`.
Examples below use YAML, but JSON uses the same keys and structure.
Parameter values specified in the `ad_defaults` section of `config.yaml` don't need to be specified again in the ad configuration file.
## Quick Start
Generate sample ad files using the download command:
```bash
# Download all ads from your profile
kleinanzeigen-bot download --ads=all
# Download only new ads (not locally saved yet)
kleinanzeigen-bot download --ads=new
# Download specific ads by ID
kleinanzeigen-bot download --ads=1,2,3
```
For full JSON schema with IDE autocompletion support, see:
- [schemas/ad.schema.json](https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/ad.schema.json)
📖 **[Complete Main Configuration Reference →](CONFIGURATION.md)**
Full documentation for `config.yaml` including all options, timeouts, browser settings, update checks, and ad_defaults.
## Configuration Structure
### Basic Ad Properties
Description values can be multiline. See <https://yaml-multiline.info/> for YAML syntax examples.
```yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/ad.schema.json
active: true
type: OFFER
title: "Your Ad Title"
description: |
Your ad description here.
Supports multiple lines.
```
### Description Prefix and Suffix
You can add prefix and suffix text to your ad descriptions in two ways:
#### New Format (Recommended)
In your `config.yaml` file you can specify a `description_prefix` and `description_suffix` under the `ad_defaults` section:
```yaml
ad_defaults:
  description_prefix: "Prefix text"
  description_suffix: "Suffix text"
```
#### Legacy Format
In your ad configuration file you can specify a `description_prefix` and `description_suffix`:
```yaml
description_prefix: "Prefix text"
description_suffix: "Suffix text"
```
#### Precedence
The ad-level setting takes precedence over the `config.yaml` default. If you specify both, the ad-level setting is used. We recommend using the `config.yaml` defaults, as they are more flexible and easier to manage.
### Category
Built-in category name, custom category name from `config.yaml`, or category ID.
```yaml
# Built-in category name (see default list at
# https://github.com/Second-Hand-Friends/kleinanzeigen-bot/blob/main/src/kleinanzeigen_bot/resources/categories.yaml)
category: "Elektronik > Notebooks"
# Custom category name (defined in config.yaml)
category: "Verschenken & Tauschen > Tauschen"
# Category ID
category: 161/278
```
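The three forms above are tried in a documented order. A minimal lookup sketch, assuming simple name-to-ID dictionaries (the function name and mapping shape are hypothetical, not the bot's actual data structures):

```python
def resolve_category(value, custom_categories, builtin_categories):
    # Lookup order per the docs: custom mapping from config.yaml,
    # then built-in names; otherwise treat the value as a raw category ID.
    if value in custom_categories:
        return custom_categories[value]
    if value in builtin_categories:
        return builtin_categories[value]
    return value

builtin = {"Elektronik > Notebooks": "161/278"}        # sample built-in mapping
custom = {"Autoteile": "210/223/sonstige_autoteile"}   # sample config.yaml mapping
print(resolve_category("Elektronik > Notebooks", custom, builtin))  # 161/278
```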
### Price and Price Type
```yaml
price: # Price in euros; decimals allowed but will be rounded to nearest whole euro on processing
# (prefer whole euros for predictability)
price_type: # one of: FIXED, NEGOTIABLE, GIVE_AWAY (default: NEGOTIABLE)
```
### Automatic Price Reduction
When `auto_price_reduction.enabled` is set to `true`, the bot lowers the configured `price` every time the ad is reposted.
**Important:** Price reductions only apply when using the `publish` command (which deletes the old ad and creates a new one). Using the `update` command to modify ad content does NOT trigger price reductions or increment `repost_count`.
`repost_count` is tracked for every ad (and persisted inside the corresponding `ad_*.yaml`) so reductions continue across runs.
`min_price` is required whenever `enabled` is `true` and must be less than or equal to `price`; this makes an explicit floor (including `0`) mandatory. If `min_price` equals the current price, the bot will log a warning and perform no reduction.
**Note:** `repost_count` and price reduction counters are only incremented and persisted after a successful publish. Failed publish attempts do not advance the counters.
When automatic price reduction is enabled, each `publish` run logs one clear INFO message per ad summarizing the outcome—whether the price was reduced, kept, or the reduction was delayed (and why). The `verify` command also previews these outcomes for all configured ads so you can validate your pricing configuration without triggering a publish cycle. Ads without `auto_price_reduction` configured are silently skipped at default log level.
If you run with `-v` / `--verbose`, the bot additionally logs structured decision details (repost counts, cycle state, day delay, reference timestamps) and the full cycle-by-cycle calculation trace (base price, reduction value, rounded step result, and floor clamp).
```yaml
auto_price_reduction:
  enabled: # true or false to enable automatic price reduction on reposts (default: false)
  strategy: # "PERCENTAGE" or "FIXED" (required when enabled is true)
  amount: # Reduction amount; interpreted as percent for PERCENTAGE or currency units for FIXED
          # (prefer whole euros for predictability)
  min_price: # Required when enabled is true; minimum price floor
             # (use 0 for no lower bound, prefer whole euros for predictability)
  delay_reposts: # Number of reposts to wait before first reduction (default: 0)
  delay_days: # Number of days to wait after publication before reductions (default: 0)
```
**Note:** All prices are rounded to whole euros after each reduction step.
#### PERCENTAGE Strategy Example
```yaml
price: 150
price_type: FIXED
auto_price_reduction:
  enabled: true
  strategy: PERCENTAGE
  amount: 10
  min_price: 90
  delay_reposts: 0
  delay_days: 0
```
This posts the ad at 150 € the first time, then 135 € (10%), 122 € (10%), 110 € (10%), 99 € (10%), and stops decreasing at 90 €.
**Note:** The bot applies commercial rounding (ROUND_HALF_UP) to full euros after each reduction step. For example, 121.5 rounds to 122, and 109.8 rounds to 110. This step-wise rounding affects the final price progression, especially for percentage-based reductions.
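Under these rounding rules, the price progression can be reproduced with a short sketch (the helper name is hypothetical and this is not the bot's actual code, but it follows the documented step-wise ROUND_HALF_UP behavior):

```python
from decimal import Decimal, ROUND_HALF_UP

def reduction_sequence(price, strategy, amount, min_price, steps):
    # Sketch of the documented reduction rules: apply the strategy,
    # round to whole euros (commercial rounding), then clamp to min_price.
    prices = [price]
    current = Decimal(price)
    for _ in range(steps):
        if strategy == "PERCENTAGE":
            current = current * (Decimal(100) - Decimal(amount)) / Decimal(100)
        else:  # FIXED
            current = current - Decimal(amount)
        current = current.quantize(Decimal("1"), rounding=ROUND_HALF_UP)
        if current < Decimal(min_price):
            current = Decimal(min_price)  # never go below the floor
        prices.append(int(current))
    return prices

print(reduction_sequence(150, "PERCENTAGE", 10, 90, 5))
# [150, 135, 122, 110, 99, 90]
```

The same helper reproduces the FIXED example with `strategy="FIXED", amount=15`.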
#### FIXED Strategy Example
```yaml
price: 150
price_type: FIXED
auto_price_reduction:
  enabled: true
  strategy: FIXED
  amount: 15
  min_price: 90
  delay_reposts: 0
  delay_days: 0
```
This posts the ad at 150 € the first time, then 135 € (15 €), 120 € (15 €), 105 € (15 €), and stops decreasing at 90 €.
#### Note on `delay_days` Behavior
The `delay_days` parameter counts complete 24-hour periods (whole days) since the ad was published. For example, if `delay_days: 7` and the ad was published 6 days and 23 hours ago, the reduction will not yet apply. This ensures predictable behavior and avoids partial-day ambiguity.
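The whole-day counting above amounts to flooring the elapsed time to full 24-hour periods, as in this small sketch (the function name is illustrative, not the bot's API):

```python
from datetime import datetime, timedelta

def delay_days_satisfied(published_at: datetime, now: datetime, delay_days: int) -> bool:
    # Count only complete 24-hour periods since publication.
    full_days = (now - published_at) // timedelta(days=1)
    return full_days >= delay_days

published = datetime(2026, 3, 1, 12, 0)
# 6 days and 23 hours later: not yet eligible with delay_days=7
print(delay_days_satisfied(published, published + timedelta(days=6, hours=23), 7))  # False
print(delay_days_satisfied(published, published + timedelta(days=7), 7))            # True
```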
Combined timeline example: with `republication_interval: 3`, `delay_reposts: 1`, and `delay_days: 2`, the first reduction is typically applied on the third publish cycle (around day 8 in a steady schedule, because due ads are republished after more than 3 full days):
- day 0: first publish, no reduction
- day 4: second publish, still waiting for repost delay
- day 8: third publish, first reduction can apply
Set `auto_price_reduction.enabled: false` (or omit the entire `auto_price_reduction` section) to keep the existing behavior—prices stay fixed and `repost_count` only acts as tracked metadata for future changes.
You can configure `auto_price_reduction` once under `ad_defaults` in `config.yaml`. The `min_price` can be set there or overridden per ad file as needed.
### Special Attributes
Special attributes are category-specific key/value pairs. Use the download command to inspect existing ads in your category and reuse the keys you see under `special_attributes`.
```yaml
special_attributes:
  # Example for rental properties
  # haus_mieten.zimmer_d: "3" # Number of rooms
```
### Shipping Configuration
```yaml
shipping_type: # one of: PICKUP, SHIPPING, NOT_APPLICABLE (default: SHIPPING)
shipping_costs: # e.g. 2.95 (for individual postage, keep shipping_type SHIPPING and leave shipping_options empty)
# Specify shipping options / packages
# It is possible to select multiple packages, but only from one size (S, M, L)!
# Possible package types for size S:
# - DHL_2
# - Hermes_Päckchen
# - Hermes_S
# Possible package types for size M:
# - DHL_5
# - Hermes_M
# Possible package types for size L:
# - DHL_10
# - DHL_20
# - DHL_31,5
# - Hermes_L
shipping_options: []
# Example (size S only):
# shipping_options:
#   - DHL_2
#   - Hermes_Päckchen
sell_directly: # true or false, requires shipping_type SHIPPING to take effect (default: false)
```
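The one-size-group rule can be checked mechanically. A validation sketch, assuming the size groups listed in the comments above (the function and constant names are hypothetical, not the bot's actual validation code):

```python
SIZE_GROUPS = {
    "S": {"DHL_2", "Hermes_Päckchen", "Hermes_S"},
    "M": {"DHL_5", "Hermes_M"},
    "L": {"DHL_10", "DHL_20", "DHL_31,5", "Hermes_L"},
}

def shipping_options_valid(options):
    # All selected packages must come from at most one size group (S, M, L).
    sizes = {size for size, group in SIZE_GROUPS.items() if set(options) & group}
    return len(sizes) <= 1

print(shipping_options_valid(["DHL_2", "Hermes_S"]))  # True (both size S)
print(shipping_options_valid(["DHL_2", "Hermes_M"]))  # False (mixes S and M)
```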
**Shipping types:**
- `PICKUP` - Buyer picks up the item
- `SHIPPING` - Item is shipped (requires shipping costs or options)
- `NOT_APPLICABLE` - Shipping not applicable for this item
**Sell Directly:**
When `sell_directly: true`, buyers can purchase the item directly through the platform without contacting the seller first. This feature only works when `shipping_type: SHIPPING`.
### Images
List of wildcard patterns to select images. If relative paths are specified, they are relative to this ad configuration file.
```yaml
images:
  # - laptop_*.{jpg,png}
```
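Resolving globs relative to the ad file can be sketched like this (an illustrative helper, not the bot's implementation; the real matching and ordering may differ):

```python
from pathlib import Path

def resolve_images(ad_file, patterns):
    # Relative patterns are resolved from the ad file's directory.
    base = Path(ad_file).parent
    matches = []
    for pattern in patterns:
        matches.extend(sorted(base.glob(pattern)))
    return matches
```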
### Contact Information
Contact details for the ad. These override defaults from `config.yaml`.
```yaml
contact:
  name:
  street:
  zipcode:
  phone: "" # IMPORTANT: surround phone number with quotes to prevent removal of leading zeros
```
### Republication Interval
How often the ad should be republished (in days). Overrides `ad_defaults.republication_interval` from `config.yaml`.
```yaml
republication_interval: # every X days the ad should be re-published (default: 7)
```
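The "due" check used by `publish --ads=due` follows from this interval: an ad is republished once more than the configured number of full days has passed. A sketch under that assumption (the function name is hypothetical):

```python
from datetime import datetime, timedelta

def is_due(updated_on, republication_interval, now):
    # A new ad (never published) is always due; otherwise republish
    # once more than `republication_interval` full days have passed.
    if updated_on is None:
        return True
    return now - updated_on > timedelta(days=republication_interval)

last = datetime(2026, 3, 1)
print(is_due(last, 3, last + timedelta(days=3)))           # False (exactly 3 days)
print(is_due(last, 3, last + timedelta(days=3, hours=1)))  # True
```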
### Auto-Managed Fields
The following fields are automatically managed by the bot. Do not manually edit these unless you know what you're doing.
```yaml
id: # The ID assigned by kleinanzeigen.de
created_on: # ISO timestamp when the ad was first published
updated_on: # ISO timestamp when the ad was last published
content_hash: # Hash of the ad content, used to detect changes
repost_count: # How often the ad has been (re)published; used for automatic price reductions
```
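The role of `content_hash` can be illustrated with a simplified sketch: hash the effective ad content (defaults merged with ad-level values, auto-managed fields excluded) so that edits are detectable while republication bookkeeping is ignored. The bot's real field selection and canonicalization may differ:

```python
import hashlib
import json

AUTO_MANAGED = {"id", "created_on", "updated_on", "content_hash", "repost_count"}

def content_hash(ad: dict, ad_defaults: dict) -> str:
    # Merge defaults with ad-level values, drop auto-managed fields,
    # and hash a canonical (sorted-key) JSON rendering.
    effective = {**ad_defaults, **{k: v for k, v in ad.items() if k not in AUTO_MANAGED}}
    canonical = json.dumps(effective, sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

This also shows why changing `ad_defaults` in `config.yaml` alters every hash, which is what the `update-content-hash` command is there to reconcile.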
## Complete Example
```yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/refs/heads/main/schemas/ad.schema.json
active: true
type: OFFER
title: "Example Ad Title"
description: |
This is a multi-line description.
You can add as much detail as you want here.
The bot will preserve line breaks and formatting.
description_prefix: "For sale: " # Optional ad-level override; defaults can live in config.yaml
description_suffix: " Please message if interested!" # Optional ad-level override
category: "Elektronik > Notebooks"
price: 150
price_type: FIXED
auto_price_reduction:
enabled: true
strategy: PERCENTAGE
amount: 10
min_price: 90
delay_reposts: 0
delay_days: 0
shipping_type: SHIPPING
shipping_costs: 4.95
sell_directly: true
images:
- "images/laptop_*.jpg"
contact:
name: "John Doe"
street: "Main Street 123"
zipcode: "12345"
phone: "0123456789"
republication_interval: 7
```
## Best Practices
1. **Use meaningful filenames**: Name your ad files descriptively, e.g., `ad_laptop_hp_15.yaml`
1. **Set defaults in config.yaml**: Put common values in `ad_defaults` to avoid repetition
1. **Test before bulk publishing**: Use `--ads=changed` or `--ads=new` to test changes before republishing all ads
1. **Back up your ad files**: Keep them in version control if you want to track changes
1. **Use price reductions carefully**: Set appropriate `min_price` to avoid underpricing
1. **Check shipping options**: Ensure your shipping options match the actual package size and cost
## Troubleshooting
- **Schema validation errors**: Run `kleinanzeigen-bot verify` (binary) or `pdm run app verify` (source) to see which fields fail validation.
- **Price reduction not applying**: Confirm `auto_price_reduction.enabled` is `true`, `min_price` is set, and you are using `publish` (not `update`). Run `kleinanzeigen-bot verify` to preview outcomes, or add `-v` for detailed decision data including repost/day-delay state. Remember ad-level values override `ad_defaults`.
- **Shipping configuration issues**: Use `shipping_type: SHIPPING` when setting `shipping_costs` or `shipping_options`, and pick options from a single size group (S/M/L).
- **Category not found**: Verify the category name or ID and check any custom mappings in `config.yaml`.
- **File naming/prefix mismatch**: Ensure ad files match your `ad_files` glob and prefix (default `ad_`).
- **Image path resolution**: Relative paths are resolved from the ad file location; use absolute paths and check file permissions if images are not found.

# Browser Connection Troubleshooting Guide
This guide helps you resolve common browser connection issues with the kleinanzeigen-bot.
## ⚠️ Important: Chrome 136+ Security Changes (March 2025)
**If you're using Chrome 136 or later and remote debugging stopped working, this is likely the cause.**
Google implemented security changes in Chrome 136 that require `--user-data-dir` to be specified when using `--remote-debugging-port`. This prevents attackers from accessing the default Chrome profile and stealing cookies/credentials.
### Quick Fix
```bash
# Start Chrome with custom user data directory
chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-debug-profile
```
### In your config.yaml
```yaml
browser:
  arguments:
    - --remote-debugging-port=9222
    - --user-data-dir=/tmp/chrome-debug-profile # Required for Chrome 136+
  user_data_dir: "/tmp/chrome-debug-profile" # Must match the argument above
```
**The bot will automatically detect Chrome 136+ and provide clear error messages if your configuration is missing the required `--user-data-dir` setting.**
For more details, see [Chrome 136+ Security Changes](#5-chrome-136-security-changes-march-2025) below.
## Quick Diagnosis
Run the diagnostic command to automatically check your setup:
**For binary users:**
```bash
kleinanzeigen-bot diagnose
```
**For source users:**
```bash
pdm run app diagnose
```
This will check:
- Browser binary availability and permissions
- User data directory permissions
- Remote debugging port status
- Running browser processes
- Platform-specific issues
- **Chrome/Edge version detection and configuration validation**
**Automatic Chrome 136+ Validation:**
The bot automatically detects Chrome/Edge 136+ and validates your configuration. If you're using Chrome 136+ with remote debugging but missing the required `--user-data-dir` setting, you'll see clear error messages like:
```console
Chrome 136+ configuration validation failed: Chrome 136+ requires --user-data-dir
Please update your configuration to include --user-data-dir for remote debugging
```
The bot will also provide specific instructions on how to fix your configuration.
### Issue: Slow page loads or recurring TimeoutError
**Symptoms:**
- `_extract_category_from_ad_page` fails intermittently due to breadcrumb lookups timing out
- Captcha/SMS/GDPR prompts appear right after a timeout
- Requests to GitHub's API fail sporadically with timeout errors
**Solutions:**
1. Increase `timeouts.multiplier` in `config.yaml` (e.g., `2.0` doubles every timeout consistently).
1. Override specific keys under `timeouts` (e.g., `pagination_initial: 20.0`) if only a single selector is problematic.
1. For slow email verification prompts, raise `timeouts.email_verification`.
1. Keep `retry_enabled` on so that DOM lookups are retried with exponential backoff.
1. Attach `timing_data.json` when opening issues so maintainers can tune defaults from real-world timing evidence.
   - It is written automatically during runs when `diagnostics.timing_collection` is enabled (default: `true`, see `CONFIGURATION.md`).
   - Portable mode path: `./.temp/timing/timing_data.json`
   - User directories mode path: `~/.cache/kleinanzeigen-bot/timing/timing_data.json` (Linux), `~/Library/Caches/kleinanzeigen-bot/timing/timing_data.json` (macOS), or `%LOCALAPPDATA%\kleinanzeigen-bot\timing\timing_data.json` (Windows)
   - Which path applies depends on your installation mode: portable mode writes next to your config/current directory, user directories mode writes to OS-standard user paths. Check which path exists on your system, or see `CONFIGURATION.md#installation-modes` for mode selection details.
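Putting these tips together, a typical tuning override in `config.yaml` might look like this (the values are illustrative, not recommendations):

```yaml
timeouts:
  multiplier: 2.0           # doubles every timeout consistently
  pagination_initial: 20.0  # or override only the problematic key
  email_verification: 8.0   # raise for slow email verification prompts
  retry_enabled: true       # keep retry/backoff for DOM lookups (default)
```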
### Issue: Bot fails to detect existing login session
**Symptoms:**
- Bot re-logins despite being already authenticated
- Intermittent (50/50) login detection behavior
- More common with profiles unused for 20+ days
**How login detection works:**
The bot uses a **DOM-based check** as the primary method (to minimize bot-like behavior), with a fallback to a server-side request when that check is inconclusive:
1. **DOM check (preferred - stealthy)**: Checks for user profile elements in the page
   - Looks for `.mr-medium` element containing username
   - Falls back to `#user-email` ID
   - Uses the `login_detection` timeout (default: 10.0 seconds; the effective wait includes retry/backoff)
   - Minimizes bot detection by avoiding JSON API requests that normal users wouldn't trigger
2. **Auth probe fallback (more reliable)**: Sends a GET request to `{root_url}/m-meine-anzeigen-verwalten.json?sort=DEFAULT`
   - Returns `LOGGED_IN` if the response is HTTP 200 with valid JSON containing `"ads"` key
   - Returns `LOGGED_OUT` if the response is HTTP 401/403 or the HTML contains login markers
   - Returns `UNKNOWN` on timeouts, assertion failures, or unexpected response bodies
   - Only used when the DOM check is inconclusive (UNKNOWN or timed out)
3. **Diagnostics capture**: If the state remains `UNKNOWN` and `diagnostics.capture_on.login_detection` is enabled
   - Captures a screenshot and HTML dump for troubleshooting
   - Pauses for manual inspection if `diagnostics.pause_on_login_detection_failure` is enabled and running in an interactive terminal
**What `login_detection` controls:**
- Maximum time (seconds) to wait for user profile DOM elements when checking if already logged in
- Default: `10.0` seconds (effective timeout with retry/backoff)
- Used at startup before attempting login
- Note: With DOM-first order, this timeout applies to the primary DOM check path
**When to increase `login_detection`:**
- Frequent unnecessary re-logins despite being authenticated
- Slow or unstable network connection
- Using browser profiles that haven't been active for weeks
> **⚠️ PII Warning:** HTML dumps captured by diagnostics may contain your account email or other personally identifiable information. Review files in the diagnostics output directory before sharing them publicly.
**Example:**
```yaml
timeouts:
  login_detection: 15.0 # For slower networks or old sessions

# Enable diagnostics when troubleshooting login detection issues
diagnostics:
  capture_on:
    login_detection: true # Capture artifacts on UNKNOWN state
  pause_on_login_detection_failure: true # Pause for inspection (interactive only)
  output_dir: "./diagnostics" # Custom output directory (optional)
```
## Common Issues and Solutions
### Issue 1: "Failed to connect to browser" with "root" error
**Symptoms:**
- Error message mentions "One of the causes could be when you are running as root"
- Connection fails when using existing browser profiles
**Causes:**
1. Running the application as root user
1. Browser profile is locked or in use by another process
1. Insufficient permissions to access the browser profile
1. Browser is not properly started with remote debugging enabled
**Solutions:**
#### 1. Don't run as root
```bash
# ❌ Don't do this
sudo pdm run app publish
# ✅ Do this instead
pdm run app publish
```
#### 2. Close all browser instances
```bash
# On Linux/macOS
pkill -f chrome
pkill -f chromium
pkill -f msedge
# On Windows
taskkill /f /im chrome.exe
taskkill /f /im msedge.exe
```
#### 3. Remove user_data_dir temporarily
Edit your `config.yaml` and comment out or remove the `user_data_dir` line:
```yaml
browser:
  # user_data_dir: C:\Users\user\AppData\Local\Microsoft\Edge\User Data # Comment this out
  profile_name: "Default"
```
#### 4. Start browser manually with remote debugging
```bash
# For Chrome (macOS)
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-debug-profile
# For Chrome (Linux)
google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-debug-profile
# For Chrome (Windows)
"C:\Program Files\Google\Chrome\Application\chrome.exe" --remote-debugging-port=9222 --user-data-dir=C:\temp\chrome-debug-profile
# For Edge (macOS)
/Applications/Microsoft\ Edge.app/Contents/MacOS/Microsoft\ Edge --remote-debugging-port=9222 --user-data-dir=/tmp/edge-debug-profile
# For Edge (Linux/Windows)
msedge --remote-debugging-port=9222 --user-data-dir=/tmp/edge-debug-profile
# For Chromium (Linux)
chromium --remote-debugging-port=9222 --user-data-dir=/tmp/chromium-debug-profile
```
Then in your `config.yaml`:
```yaml
browser:
  arguments:
    - --remote-debugging-port=9222
    - --user-data-dir=/tmp/chrome-debug-profile # Must match the command line
  user_data_dir: "/tmp/chrome-debug-profile" # Must match the argument above
```
#### ⚠️ IMPORTANT: Chrome 136+ Security Requirement
Starting with Chrome 136 (March 2025), Google has implemented security changes that require `--user-data-dir` to be specified when using `--remote-debugging-port`. This prevents attackers from accessing the default Chrome profile and stealing cookies/credentials. See [Chrome's security announcement](https://developer.chrome.com/blog/remote-debugging-port?hl=de) for more details.
### Issue 2: "Browser process not reachable at 127.0.0.1:9222"
**Symptoms:**
- Port check fails when trying to connect to existing browser
- Browser appears to be running but connection fails
**Causes:**
1. Browser not started with remote debugging port
1. Port is blocked by firewall
1. Browser crashed or closed
1. Timing issue - browser not fully started
1. Browser update changed remote debugging behavior
1. Existing Chrome instance conflicts with new debugging session
1. **Chrome 136+ security requirement not met** (most common cause since March 2025)
**Solutions:**
#### 1. Verify browser is started with remote debugging
Make sure your browser is started with the correct flag:
```bash
# Check if browser is running with remote debugging
netstat -an | grep 9222 # Linux/macOS
netstat -an | findstr 9222 # Windows
```
#### 2. Start browser manually first
```bash
# Start browser with remote debugging
chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-debug
# Then run the bot
kleinanzeigen-bot publish # For binary users
# or
pdm run app publish # For source users
```
#### 3. macOS-specific: Chrome started but connection fails
If you're on macOS and Chrome is started with remote debugging but the bot still can't connect:
#### ⚠️ IMPORTANT: macOS Security Requirement
This is a Chrome/macOS security issue that requires a dedicated user data directory.
```bash
# Method 1: Use the full path to Chrome with dedicated user data directory
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
  --remote-debugging-port=9222 \
  --user-data-dir=/tmp/chrome-debug-profile \
  --disable-dev-shm-usage

# Method 2: Use open command with proper arguments
open -a "Google Chrome" --args \
  --remote-debugging-port=9222 \
  --user-data-dir=/tmp/chrome-debug-profile \
  --disable-dev-shm-usage
# Method 3: Check if Chrome is actually listening on the port
lsof -i :9222
curl http://localhost:9222/json/version
```
**⚠️ CRITICAL: You must also configure the same user data directory in your config.yaml:**
```yaml
browser:
  arguments:
    - --remote-debugging-port=9222
    - --user-data-dir=/tmp/chrome-debug-profile
    - --disable-dev-shm-usage
  user_data_dir: "/tmp/chrome-debug-profile"
```
**Common macOS issues:**
- Chrome/macOS security restrictions require a dedicated user data directory
- The `--user-data-dir` flag is **mandatory** for remote debugging on macOS
- Use `--disable-dev-shm-usage` to avoid shared memory issues
- The user data directory must match between manual Chrome startup and config.yaml
#### 4. Browser update issues
If it worked before but stopped working after a browser update:
```bash
# Check your browser version
# macOS
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --version
# Linux
google-chrome --version
# Windows
"C:\Program Files\Google\Chrome\Application\chrome.exe" --version
# Close all browser instances first
pkill -f "Google Chrome" # macOS/Linux
# or
taskkill /f /im chrome.exe # Windows
# Start fresh with proper flags (see macOS-specific section above for details)
```
**After browser updates:**
- Chrome may have changed how remote debugging works
- Security restrictions may have been updated
- Try using a fresh user data directory to avoid conflicts
- Ensure you're using the latest version of the bot
#### 5. Chrome 136+ Security Changes (March 2025)
If you're using Chrome 136 or later and remote debugging stopped working:
**The Problem:**
Google implemented security changes in Chrome 136 that prevent `--remote-debugging-port` from working with the default user data directory. This was done to protect users from cookie theft attacks.
**The Solution:**
You must now specify a custom `--user-data-dir` when using remote debugging:
```bash
# ❌ This will NOT work with Chrome 136+
chrome --remote-debugging-port=9222
# ✅ This WILL work with Chrome 136+
chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-debug-profile
```
**In your config.yaml:**
```yaml
browser:
  arguments:
    - --remote-debugging-port=9222
    - --user-data-dir=/tmp/chrome-debug-profile # Required for Chrome 136+
  user_data_dir: "/tmp/chrome-debug-profile" # Must match the argument above
```
**Why this change was made:**
- Prevents attackers from accessing the default Chrome profile
- Protects cookies and login credentials
- Uses a different encryption key for the custom profile
- Makes debugging more secure
**For more information:**
- [Chrome's security announcement](https://developer.chrome.com/blog/remote-debugging-port?hl=de)
- [GitHub issue discussion](https://github.com/Second-Hand-Friends/kleinanzeigen-bot/issues/604)
#### 6. Check firewall settings
- Windows: Check Windows Defender Firewall
- macOS: Check System Preferences > Security & Privacy > Firewall
- Linux: Check iptables or ufw settings
#### 7. Use different port
Try a different port in case 9222 is blocked (and start the browser manually with the same `--remote-debugging-port` value):
```yaml
browser:
  arguments:
    - --remote-debugging-port=9223
```
### Issue 3: Profile directory issues
**Symptoms:**
- Errors about profile directory not found
- Permission denied errors
- Profile locked errors
**Solutions:**
#### 1. Use temporary profile
```yaml
browser:
  user_data_dir: "/tmp/chrome-temp" # Linux/macOS
  # user_data_dir: "C:\\temp\\chrome-temp" # Windows
  profile_name: "Default"
```
#### 2. Check profile permissions
```bash
# Linux/macOS
ls -la ~/.config/google-chrome/
chmod 755 ~/.config/google-chrome/
# Windows
# Check folder permissions in Properties > Security
```
#### 3. Remove profile temporarily
```yaml
browser:
  # user_data_dir: "" # Comment out or remove
  # profile_name: "" # Comment out or remove
  use_private_window: true
```
### Issue 4: Platform-specific issues
#### Windows
- **Antivirus software**: Add browser executable to exclusions
- **Windows Defender**: Add folder to exclusions
- **UAC**: Run as administrator if needed (but not recommended)
#### macOS
- **Gatekeeper**: Allow browser in System Preferences > Security & Privacy
- **SIP**: System Integrity Protection might block some operations
- **Permissions**: Grant full disk access to terminal/IDE
#### Linux
- **Sandbox**: Add `--no-sandbox` to browser arguments
- **Root user**: Never run as root, use regular user
- **Display**: Ensure X11 or Wayland is properly configured
## Configuration Examples
### Basic working configuration
```yaml
browser:
  arguments:
    - --disable-dev-shm-usage
    - --no-sandbox
  use_private_window: true
```
### Using existing browser
```yaml
browser:
  arguments:
    - --remote-debugging-port=9222
    - --user-data-dir=/tmp/chrome-debug-profile # Required for Chrome 136+
  user_data_dir: "/tmp/chrome-debug-profile" # Must match the argument above
  binary_location: "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"
```
### Using existing browser on macOS (REQUIRED configuration)
```yaml
browser:
  arguments:
    - --remote-debugging-port=9222
    - --user-data-dir=/tmp/chrome-debug-profile
    - --disable-dev-shm-usage
  user_data_dir: "/tmp/chrome-debug-profile"
  binary_location: "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
```
### Using specific profile
```yaml
browser:
  user_data_dir: "C:\\Users\\username\\AppData\\Local\\Google\\Chrome\\User Data"
  profile_name: "Profile 1"
  arguments:
    - --disable-dev-shm-usage
```
## Advanced Troubleshooting
### Check browser compatibility
```bash
# Test if browser can be started manually
# macOS
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --version
/Applications/Microsoft\ Edge.app/Contents/MacOS/Microsoft\ Edge --version
# Linux
google-chrome --version
msedge --version
chromium --version
# Windows
"C:\Program Files\Google\Chrome\Application\chrome.exe" --version
msedge --version
```
### Monitor browser processes
```bash
# Linux/macOS
ps aux | grep chrome
lsof -i :9222
# Windows
tasklist | findstr chrome
netstat -an | findstr 9222
```
### Debug with verbose logging
```bash
kleinanzeigen-bot -v publish # For binary users
# or
pdm run app -v publish # For source users
```
### Test browser connection manually
```bash
# Test if port is accessible
curl http://localhost:9222/json/version
```
## Using an Existing Browser Window
By default a new browser process will be launched. To reuse a manually launched browser window/process, follow these steps:
1. Manually launch your browser from the command line with the `--remote-debugging-port=<NUMBER>` flag.
You are free to choose an unused port number between 1025 and 65535, for example:
- `chrome --remote-debugging-port=9222`
- `chromium --remote-debugging-port=9222`
- `msedge --remote-debugging-port=9222`
This runs the browser in debug mode which allows it to be remote controlled by the bot.
**⚠️ IMPORTANT: Chrome 136+ Security Requirement**
Starting with Chrome 136 (March 2025), Google has implemented security changes that require `--user-data-dir` to be specified when using `--remote-debugging-port`. This prevents attackers from accessing the default Chrome profile and stealing cookies/credentials.
**You must now use:**
```bash
chrome --remote-debugging-port=9222 --user-data-dir=/path/to/custom/directory
```
**And in your config.yaml:**
```yaml
browser:
  arguments:
    - --remote-debugging-port=9222
    - --user-data-dir=/path/to/custom/directory
  user_data_dir: "/path/to/custom/directory"
```
**The bot will automatically detect Chrome 136+ and validate your configuration. If validation fails, you'll see clear error messages with specific instructions on how to fix your configuration.**
1. In your config.yaml specify the same flags as browser arguments, for example:
```yaml
browser:
  arguments:
    - --remote-debugging-port=9222
    - --user-data-dir=/tmp/chrome-debug-profile # Required for Chrome 136+
  user_data_dir: "/tmp/chrome-debug-profile" # Must match the argument above
```
1. When you now publish ads, the manually launched browser will be reused.
> NOTE: If an existing browser is used all other settings configured under `browser` in your config.yaml file will be ignored
> because they are only used to programmatically configure/launch a dedicated browser instance.
>
> **Security Note:** This change was implemented by Google to protect users from cookie theft attacks. The custom user data directory uses a different encryption key than the default profile, making it more secure for debugging purposes.
## Getting Help
If you're still experiencing issues:
1. Run the diagnostic command: `kleinanzeigen-bot diagnose` (binary) or `pdm run app diagnose` (source)
1. Check the log file for detailed error messages
1. Try the solutions above step by step
1. Create an issue on GitHub with:
   - Output from the diagnose command
   - Your `config.yaml` (remove sensitive information)
   - Error messages from the log file
   - Operating system and browser version
## Prevention
To avoid browser connection issues:
1. **Don't run as root** - Always use a regular user account
1. **Close other browser instances** - Ensure no other browser processes are running
1. **Use temporary profiles** - Avoid conflicts with existing browser sessions
1. **Keep browser updated** - Use the latest stable version
1. **Check permissions** - Ensure proper file and folder permissions
1. **Monitor system resources** - Ensure sufficient memory and disk space

# Configuration Reference
Complete reference for `config.yaml`, the main configuration file for kleinanzeigen-bot.
## Quick Start
To generate a default configuration file with all current defaults:
```bash
kleinanzeigen-bot create-config
```
For full JSON schema with IDE autocompletion support, see:
- [schemas/config.schema.json](https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/config.schema.json)
A reference snapshot of default values is available at [docs/config.default.yaml](https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/docs/config.default.yaml).
To enable IDE autocompletion in `config.yaml`, add this at the top of the file:
```yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/config.schema.json
```
For ad files, use the ad schema instead:
```yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/ad.schema.json
```
## Minimal Configuration Example
Here's the smallest viable `config.yaml` to get started. Only the **login** section is required—everything else uses sensible defaults:
```yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/config.schema.json

# REQUIRED: Your kleinanzeigen.de credentials
login:
  username: "your_username"
  password: "your_password"

# OPTIONAL: Where to find your ad files (default pattern shown)
# ad_files:
#   - "./**/ad_*.{json,yml,yaml}"

# OPTIONAL: Default values for all ads
# ad_defaults:
#   price_type: NEGOTIABLE
#   shipping_type: SHIPPING
#   republication_interval: 7
```
Run `kleinanzeigen-bot create-config` to generate a complete configuration with all available options and their default values.
The `ad_files` setting controls where the bot looks for your ad YAML files (default pattern: `./**/ad_*.{json,yml,yaml}`). The `ad_defaults` section lets you set default values that apply to all ads—things like price type, shipping options, and republication interval.
📖 **[Complete Ad Configuration Reference →](AD_CONFIGURATION.md)**
Full documentation for ad YAML files including automatic price reduction, description prefix/suffix, shipping options, category IDs, and special attributes.
## File Location
The bot looks for `config.yaml` in the current directory by default. You can specify a different location using `--config`:
```bash
kleinanzeigen-bot --config /path/to/config.yaml publish
```
`--config` selects the configuration file only. Workspace behavior is controlled by installation mode (`portable` or `xdg`) and can be overridden via `--workspace-mode=portable|xdg` (see [Installation Modes](#installation-modes)).
Valid file extensions: `.json`, `.yaml`, `.yml`
## Configuration Structure
### ad_files
Glob (wildcard) patterns to select ad configuration files. If relative paths are specified, they are relative to this configuration file.
```yaml
ad_files:
  - "./**/ad_*.{json,yml,yaml}"
```
### ad_defaults
Default values for ads that can be overridden in each ad configuration file.
```yaml
ad_defaults:
  active: true
  type: OFFER # one of: OFFER, WANTED
  description_prefix: ""
  description_suffix: ""
  price_type: NEGOTIABLE # one of: FIXED, NEGOTIABLE, GIVE_AWAY, NOT_APPLICABLE
  shipping_type: SHIPPING # one of: PICKUP, SHIPPING, NOT_APPLICABLE
  # NOTE: shipping_costs and shipping_options must be configured per-ad, not as defaults
  sell_directly: false # requires shipping_type SHIPPING to take effect
  contact:
    name: ""
    street: ""
    zipcode: ""
    phone: "" # IMPORTANT: surround phone number with quotes to prevent removal of leading zeros
  republication_interval: 7 # every X days ads should be re-published
```
- `ad_defaults.republication_interval` controls when ads become due for republishing.
- Automatic price reductions (including `delay_reposts` and `delay_days`) are evaluated only during `publish` runs.
- Reductions do not run in the background between runs, and `update` does not evaluate or apply reductions.
- When auto price reduction is enabled, each `publish` run logs the reduction decision.
- `-v/--verbose` adds a detailed reduction calculation trace.
- For full behavior and examples (including timeline examples), see [AD_CONFIGURATION.md](./AD_CONFIGURATION.md).
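As an illustration of how a `PERCENTAGE` reduction with a `min_price` floor behaves across publish runs, here is a simplified model (the bot's actual rules, including repost and day delays, are documented in AD_CONFIGURATION.md):

```python
def next_price(current: float, strategy: str, amount: float, min_price: float) -> float:
    """One reduction step: percentage or fixed amount, never dropping below min_price."""
    if strategy == "PERCENTAGE":
        reduced = current * (1 - amount / 100)
    else:  # FIXED: subtract a fixed amount
        reduced = current - amount
    return round(max(reduced, min_price), 2)

# strategy: PERCENTAGE, amount: 10, min_price: 90 -- starting at 150
price = 150.0
for run in range(1, 6):
    price = next_price(price, "PERCENTAGE", 10, 90)
    print(f"publish run {run}: {price}")
```

Note how the price converges on `min_price` instead of dropping past it; this is why setting a sensible floor matters.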
> **Tip:** For current defaults of all timeout and diagnostic settings, run `kleinanzeigen-bot create-config` or see the [JSON schema](https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/config.schema.json).
### categories
Additional name to category ID mappings. See the default list at:
[https://github.com/Second-Hand-Friends/kleinanzeigen-bot/blob/main/src/kleinanzeigen_bot/resources/categories.yaml](https://github.com/Second-Hand-Friends/kleinanzeigen-bot/blob/main/src/kleinanzeigen_bot/resources/categories.yaml)
```yaml
categories:
  Verschenken & Tauschen > Tauschen: 272/273
  Verschenken & Tauschen > Verleihen: 272/274
  Verschenken & Tauschen > Verschenken: 272/192
```
### timeouts
Timeout tuning for various browser operations. Adjust these if you experience slow page loads or recurring timeouts.
```yaml
timeouts:
  multiplier: 1.0 # Scale all timeouts (e.g. 2.0 for slower networks)
  default: 5.0 # Base timeout for web_find/web_click/etc.
  page_load: 15.0 # Timeout for web_open page loads
  captcha_detection: 2.0 # Timeout for captcha iframe detection
  sms_verification: 4.0 # Timeout for SMS verification banners
  email_verification: 4.0 # Timeout for email verification prompts
  gdpr_prompt: 10.0 # Timeout when handling GDPR dialogs
  login_detection: 10.0 # Timeout for DOM-based login detection (primary method)
  publishing_result: 300.0 # Timeout for publishing status checks
  publishing_confirmation: 20.0 # Timeout for publish confirmation redirect
  image_upload: 30.0 # Timeout for image upload and server-side processing
  pagination_initial: 10.0 # Timeout for first pagination lookup
  pagination_follow_up: 5.0 # Timeout for subsequent pagination clicks
  quick_dom: 2.0 # Generic short DOM timeout (shipping dialogs, etc.)
  update_check: 10.0 # Timeout for GitHub update requests
  chrome_remote_probe: 2.0 # Timeout for local remote-debugging probes
  chrome_remote_debugging: 5.0 # Timeout for remote debugging API calls
  chrome_binary_detection: 10.0 # Timeout for chrome --version subprocess
  retry_enabled: true # Enables DOM retry/backoff when timeouts occur
  retry_max_attempts: 2
  retry_backoff_factor: 1.5
```
**Timeout tuning tips:**
- Slow networks or sluggish remote browsers often just need a higher `timeouts.multiplier`
- For truly problematic selectors, override specific keys directly under `timeouts`
- Keep `retry_enabled` on so DOM lookups are retried with exponential backoff
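As a mental model of how `multiplier`, `retry_max_attempts`, and `retry_backoff_factor` interact, consider this sketch (illustrative only; the bot's internal scheduling may differ):

```python
def attempt_timeouts(base: float, multiplier: float = 1.0,
                     max_attempts: int = 2, backoff_factor: float = 1.5) -> list[float]:
    """Per-attempt timeouts: the base value scaled by the global multiplier,
    then grown by the backoff factor on each retry."""
    scaled = base * multiplier
    return [round(scaled * backoff_factor ** i, 2) for i in range(max_attempts)]

# default timeout 5.0 with multiplier 2.0: first attempt waits 10s, the retry 15s
print(attempt_timeouts(5.0, multiplier=2.0))
```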
For more details on timeout configuration and troubleshooting, see [Browser Troubleshooting](./BROWSER_TROUBLESHOOTING.md).
### download
Download configuration for the `download` command.
```yaml
download:
  include_all_matching_shipping_options: false # if true, all shipping options matching the package size will be included
  excluded_shipping_options: [] # list of shipping options to exclude, e.g. ['DHL_2', 'DHL_5']
  folder_name_max_length: 100 # maximum length for folder names when downloading ads (default: 100)
  rename_existing_folders: false # if true, rename existing folders without titles to include titles (default: false)
```
### publishing
Publishing configuration.
```yaml
publishing:
  delete_old_ads: "AFTER_PUBLISH" # one of: AFTER_PUBLISH, BEFORE_PUBLISH, NEVER
  delete_old_ads_by_title: true # only works if delete_old_ads is set to BEFORE_PUBLISH
```
### captcha
Captcha handling configuration. Enable automatic restart to avoid manual confirmation after captchas.
```yaml
captcha:
  auto_restart: true # If true, the bot aborts when a Captcha appears and retries publishing later
                     # If false (default), the Captcha must be solved manually to continue
  restart_delay: 1h 30m # Time to wait before retrying after a Captcha was encountered (default: 6h)
```
### browser
Browser configuration. These settings control how the bot launches and connects to Chromium-based browsers.
```yaml
browser:
  # See: https://peter.sh/experiments/chromium-command-line-switches/
  arguments:
    # Example arguments
    - --disable-dev-shm-usage
    - --no-sandbox
    # - --headless
    # - --start-maximized
  binary_location: # path to custom browser executable, if not specified will be looked up on PATH
  extensions: [] # a list of .crx extension files to be loaded
  use_private_window: true
  user_data_dir: "" # see https://github.com/chromium/chromium/blob/main/docs/user_data_dir.md
  profile_name: ""
```
**Common browser arguments:**
- `--disable-dev-shm-usage` - Avoids shared memory issues in Docker environments
- `--no-sandbox` - Required when running as root (not recommended)
- `--headless` - Run browser in headless mode (no GUI)
- `--start-maximized` - Start browser maximized
For detailed browser connection troubleshooting, including Chrome 136+ security requirements and remote debugging setup, see [Browser Troubleshooting](./BROWSER_TROUBLESHOOTING.md).
### update_check
Update check configuration to automatically check for newer versions on GitHub.
```yaml
update_check:
  enabled: true # Enable/disable update checks
  channel: latest # One of: latest, preview
  interval: 7d # Check interval (e.g. 7d for 7 days)
```
**Interval format:**
- `s`: seconds, `m`: minutes, `h`: hours, `d`: days
- Examples: `7d` (7 days), `12h` (12 hours), `30d` (30 days)
- Validation: minimum 1 day, maximum 30 days
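A duration string such as `7d` or `12h` (the same format `captcha.restart_delay` accepts, e.g. `1h 30m`) can be parsed with a few lines of Python (a hypothetical helper for illustration, not the bot's internal parser):

```python
import re

_UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def parse_duration(text: str) -> int:
    """Parse strings like '7d', '12h', or '1h 30m' into total seconds."""
    total = 0
    for number, unit in re.findall(r"(\d+)\s*([smhd])", text):
        total += int(number) * _UNIT_SECONDS[unit]
    return total

print(parse_duration("7d"))     # seconds in 7 days
print(parse_duration("1h 30m"))
```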
**Channels:**
- `latest`: Only final releases
- `preview`: Includes pre-releases
### login
Login credentials.
```yaml
login:
  username: ""
  password: ""
```
> **Security Note:** Never commit your credentials to version control. Keep your `config.yaml` secure and exclude it from git if it contains sensitive information.
### diagnostics
Diagnostics configuration for troubleshooting login detection issues and publish failures.
```yaml
diagnostics:
  capture_on:
    login_detection: false # Capture screenshot + HTML when login state is UNKNOWN
    publish: false # Capture screenshot + HTML + JSON on each failed publish attempt (timeouts/protocol errors)
  capture_log_copy: false # Copy entire bot log file when diagnostics are captured (may duplicate log content)
  pause_on_login_detection_failure: false # Pause for manual inspection (interactive only)
  timing_collection: true # Collect timeout timing data locally for troubleshooting and tuning
  output_dir: "" # Custom output directory (see "Output locations (default)" below)
```
**Migration Note:**
Old diagnostics keys have been renamed/moved. Update configs and CI/automation accordingly:
- `login_detection_capture``capture_on.login_detection`
- `publish_error_capture``capture_on.publish`
`capture_log_copy` is a new top-level flag. It may copy the same log multiple times during a single run if multiple diagnostic events are triggered.
**Login Detection Behavior:**
The bot uses a layered approach to detect login state, prioritizing stealth over reliability:
1. **DOM check (primary method - preferred for stealth)**: Checks for user profile elements
- Looks for `.mr-medium` element containing username
- Falls back to `#user-email` ID
- Uses `login_detection` timeout (default: 10.0 seconds)
- Minimizes bot-like behavior by avoiding JSON API requests
2. **Auth probe fallback (more reliable)**: Sends a GET request to `{root_url}/m-meine-anzeigen-verwalten.json?sort=DEFAULT`
- Returns `LOGGED_IN` if response is HTTP 200 with valid JSON containing `"ads"` key
- Returns `LOGGED_OUT` if response is HTTP 401/403 or HTML contains login markers
- Returns `UNKNOWN` on timeouts, assertion failures, or unexpected response bodies
- Only used when DOM check is inconclusive (UNKNOWN or timed out)
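The two layers compose as follows. This is an illustrative sketch with invented function and parameter names, not the bot's actual API:

```python
from enum import Enum, auto

class LoginState(Enum):
    LOGGED_IN = auto()
    LOGGED_OUT = auto()
    UNKNOWN = auto()

def detect_login_state(dom_check, auth_probe):
    """Stealth-first layering: use the DOM check when it is conclusive,
    otherwise fall back to the more reliable (but more bot-like) auth probe."""
    state = dom_check()  # e.g. look for `.mr-medium` / `#user-email`
    if state is not LoginState.UNKNOWN:
        return state
    return auth_probe()  # e.g. GET .../m-meine-anzeigen-verwalten.json

# With an inconclusive DOM check, the probe result decides:
state = detect_login_state(lambda: LoginState.UNKNOWN, lambda: LoginState.LOGGED_OUT)
```

The probe is only invoked on the `UNKNOWN` path, so a conclusive DOM check never triggers the JSON request.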
**Optional diagnostics:**
- Enable `capture_on.login_detection` to capture screenshots and HTML dumps when state is `UNKNOWN`
- Enable `capture_on.publish` to capture screenshots, HTML dumps, and JSON payloads for each failed publish attempt (e.g., attempts 1-3).
- Enable `capture_log_copy` to copy the entire bot log file when a diagnostic event triggers (e.g., `capture_on.publish` or `capture_on.login_detection`):
- If multiple diagnostics trigger in the same run, the log will be copied multiple times
- Review or redact artifacts before sharing publicly
- Enable `pause_on_login_detection_failure` to pause the bot for manual inspection in interactive sessions. This requires `capture_on.login_detection=true`; if this is not enabled, the runtime will fail startup with a validation error.
- Use custom `output_dir` to specify where artifacts are saved
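Putting the options above together, a fully enabled diagnostics block might look like this (the `output_dir` value is just an example path):

```yaml
diagnostics:
  capture_on:
    login_detection: true
    publish: true
  capture_log_copy: true
  pause_on_login_detection_failure: true  # valid only because capture_on.login_detection is true
  output_dir: "./diagnostics-artifacts"
```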
**Output locations (default):**
- **Portable mode + `--config /path/to/config.yaml`**: `/path/to/.temp/diagnostics/` (portable runtime files are placed next to the selected config file)
- **Portable mode without `--config`**: `./.temp/diagnostics/` (current working directory)
- **User directories mode**: `~/.cache/kleinanzeigen-bot/diagnostics/` (Linux), `~/Library/Caches/kleinanzeigen-bot/diagnostics/` (macOS), or `%LOCALAPPDATA%\kleinanzeigen-bot\Cache\diagnostics\` (Windows)
- **Custom**: Path resolved relative to your `config.yaml` if `output_dir` is specified
**Timing collection output (default):**
- **Portable mode**: `./.temp/timing/timing_data.json`
- **User directories mode**: `~/.cache/kleinanzeigen-bot/timing/timing_data.json` (Linux) or `~/Library/Caches/kleinanzeigen-bot/timing/timing_data.json` (macOS)
- Data is grouped by run/session and retained for 30 days via automatic cleanup during each data write
Example structure:
```json
[
  {
    "session_id": "abc12345",
    "command": "publish",
    "started_at": "2026-02-07T10:00:00+01:00",
    "ended_at": "2026-02-07T10:04:30+01:00",
    "records": [
      {
        "operation_key": "default",
        "operation_type": "web_find",
        "effective_timeout_sec": 5.0,
        "actual_duration_sec": 1.2,
        "attempt_index": 0,
        "success": true
      }
    ]
  }
]
```
How to read it quickly:
- Group by `command` and `session_id` first to compare slow vs fast runs
- Look for high `actual_duration_sec` values near `effective_timeout_sec` and repeated `success: false` entries
- `attempt_index` is zero-based (`0` first attempt, `1` first retry)
- Use `operation_key` + `operation_type` to identify which timeout bucket (`default`, `page_load`, etc.) needs tuning
- For deeper timeout tuning workflow, see [Browser Troubleshooting](./BROWSER_TROUBLESHOOTING.md)
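The triage above can be scripted. A minimal sketch using inline data in the documented format (the 80% threshold is an arbitrary illustration, not a bot setting):

```python
import json
from collections import defaultdict

# Inline sample mirroring the documented timing_data.json structure.
raw = json.loads("""
[
  {"session_id": "abc12345", "command": "publish",
   "records": [
     {"operation_key": "default", "operation_type": "web_find",
      "effective_timeout_sec": 5.0, "actual_duration_sec": 4.9,
      "attempt_index": 1, "success": false},
     {"operation_key": "page_load", "operation_type": "web_open",
      "effective_timeout_sec": 15.0, "actual_duration_sec": 1.2,
      "attempt_index": 0, "success": true}
   ]}
]
""")

# Flag records that failed or used more than 80% of their timeout budget.
suspects = defaultdict(list)
for session in raw:
    for rec in session["records"]:
        near_budget = rec["actual_duration_sec"] > 0.8 * rec["effective_timeout_sec"]
        if near_budget or not rec["success"]:
            suspects[(session["command"], rec["operation_key"])].append(rec)

for (command, bucket), recs in sorted(suspects.items()):
    print(f"{command}/{bucket}: {len(recs)} slow or failed record(s)")
```

Here the `default` bucket is the tuning candidate: one record failed after nearly exhausting its 5-second budget, while `page_load` stayed well inside its limit.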
> **⚠️ PII Warning:** HTML dumps, JSON payloads, timing data JSON files (for example `timing_data.json`), and log copies may contain PII. Typical examples include account email, ad titles/descriptions, contact info, and prices. Log copies are produced by `capture_log_copy` when diagnostics capture runs, such as `capture_on.publish` or `capture_on.login_detection`. Review or redact these artifacts before sharing them publicly.
## Installation Modes
On first run, when the `--workspace-mode` flag is not provided, the app may ask which installation mode to use. In non-interactive environments, it defaults to portable mode.
1. **Portable mode (recommended for most users, especially on Windows):**
- Stores config, logs, downloads, and state in the current directory
- No admin permissions required
- Easy backup/migration; works from USB drives
2. **User directories mode (advanced users / multi-user setups):**
- Stores files in OS-standard locations
- Cleaner directory structure; better separation from working directory
- Requires proper permissions for user data directories
**OS notes:**
- **Windows:** User directories mode uses AppData (Roaming/Local); portable keeps everything beside the `.exe`.
- **Linux:** User directories mode uses `~/.config/kleinanzeigen-bot/config.yaml`, `~/.local/state/kleinanzeigen-bot/`, and `~/.cache/kleinanzeigen-bot/`; portable stays in the current working directory (for example `./config.yaml`, `./.temp/`, `./downloaded-ads/`).
- **macOS:** User directories mode uses `~/Library/Application Support/kleinanzeigen-bot/config.yaml` (config), `~/Library/Application Support/kleinanzeigen-bot/` (state/runtime), and `~/Library/Caches/kleinanzeigen-bot/` (cache/diagnostics); portable stays in the current directory.
### Mixed footprint cleanup
If both portable and XDG footprints exist, `--config` without `--workspace-mode` is intentionally rejected to avoid silent behavior changes.
A footprint is the set of files/directories the bot creates for one mode (configuration file, runtime state/cache directories, and `downloaded-ads`).
Use one explicit run to choose a mode:
```bash
kleinanzeigen-bot --workspace-mode=portable --config /path/to/config.yaml verify
```
or
```bash
kleinanzeigen-bot --workspace-mode=xdg --config /path/to/config.yaml verify
```
Then remove the unused footprint directories/files to make auto-detection unambiguous for future runs.
- Remove **portable footprint** items in your working location: `config.yaml`, `.temp/` (Windows: `.temp\`), and `downloaded-ads/` (Windows: `downloaded-ads\`). Back up or move `config.yaml` to your desired location before deleting it.
- Remove **user directories footprint** items:
Linux: `~/.config/kleinanzeigen-bot/`, `~/.local/state/kleinanzeigen-bot/`, `~/.cache/kleinanzeigen-bot/`.
macOS: `~/Library/Application Support/kleinanzeigen-bot/`, `~/Library/Caches/kleinanzeigen-bot/`.
Windows: `%APPDATA%\kleinanzeigen-bot\`, `%LOCALAPPDATA%\kleinanzeigen-bot\`, `%LOCALAPPDATA%\kleinanzeigen-bot\Cache\`.
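As a sanity check before deleting anything, the footprint paths above can be probed with a small dry-run script (an illustrative helper, not part of the bot; Linux paths shown):

```python
from pathlib import Path

# Footprint locations from the docs (Linux shown; adjust for macOS/Windows).
PORTABLE = [Path("config.yaml"), Path(".temp"), Path("downloaded-ads")]
USER_DIRS = [
    Path.home() / ".config/kleinanzeigen-bot",
    Path.home() / ".local/state/kleinanzeigen-bot",
    Path.home() / ".cache/kleinanzeigen-bot",
]

def report(paths):
    """Dry run: report which footprint items exist; deletes nothing."""
    return {str(p): p.exists() for p in paths}

print("portable footprint:", report(PORTABLE))
print("user-directories footprint:", report(USER_DIRS))
```

If both reports show existing items, you have a mixed footprint and should remove one side as described above.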
## Getting Current Defaults
To see all current default values, run:
```bash
kleinanzeigen-bot create-config
```
This generates a config file with `exclude_none=True`, giving you all the non-None defaults.
For the complete machine-readable reference, see the [JSON schema](https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/config.schema.json).

`docs/INDEX.md` (new file):
# Documentation Index
This directory contains detailed documentation for kleinanzeigen-bot users and contributors.
## User Documentation
- [Configuration](./CONFIGURATION.md) - Complete reference for `config.yaml`, including all configuration options, timeouts, browser settings, and update check configuration.
- [Ad Configuration](./AD_CONFIGURATION.md) - Complete reference for ad YAML files, including automatic price reduction, description prefix/suffix, and shipping options.
- [Browser Troubleshooting](./BROWSER_TROUBLESHOOTING.md) - Troubleshooting guide for browser connection issues, including Chrome 136+ security requirements, remote debugging setup, and common solutions.
## Contributor Documentation
Contributor documentation is located in the main repository:
- [CONTRIBUTING.md](../CONTRIBUTING.md) - Development setup, workflow, code quality standards, testing requirements, and contribution guidelines.
- [TESTING.md](./TESTING.md) - Detailed testing strategy, test types (unit/integration/smoke), and execution instructions for contributors.
## Getting Started
New users should start with the [README](../README.md), then refer to these documents for detailed configuration and troubleshooting information.
### Quick Start (3 steps)
1. Install and run the app from the [README](../README.md).
2. Generate `config.yaml` with `kleinanzeigen-bot create-config` and review defaults in [Configuration](./CONFIGURATION.md).
3. Verify your setup with `kleinanzeigen-bot verify`, then publish with `kleinanzeigen-bot publish`.
### Common Troubleshooting Tips
- Browser connection issues: confirm remote debugging settings and Chrome 136+ requirements in [Browser Troubleshooting](./BROWSER_TROUBLESHOOTING.md).

`docs/TESTING.md` (new file):
# TESTING.md
## Test Strategy and Types
This project uses a layered testing approach, with a focus on reliability and fast feedback. The test types are:
- **Unit tests**: Isolated, fast tests targeting the smallest testable units (functions, classes) in isolation. Run first.
- **Integration tests**: Tests that verify the interaction between components or with real external dependencies. Run after unit tests.
- **Smoke tests**: Minimal set of critical checks, run after a successful build and (optionally) after deployment. Their goal is to verify that the most essential workflows (e.g., app starts, config loads, login page reachable) work and that the system is stable enough for deeper testing. Smoke tests are not end-to-end (E2E) tests and should not cover full user workflows.
### Principles
- **Test observable behavior, not internal implementation**
- **Avoid mocks** in smoke tests; use custom fake components (e.g., dummy browser/page objects)
- **Write tests that verify outcomes**, not method call sequences
- **Keep tests simple and maintainable**
### Fakes vs. Mocks
- **Fakes**: Lightweight, custom classes that simulate real dependencies (e.g., DummyBrowser, DummyPage)
- **Mocks**: Avoided in smoke tests; no patching, MagicMock, or side_effect trees
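A fake in this sense is just a tiny hand-written class. The sketch below is illustrative; the real `DummyBrowser`/`DummyPage` in `tests/conftest.py` may have a different interface:

```python
import asyncio

class DummyPage:
    """Fake browser page: records navigations and serves canned HTML."""
    def __init__(self, html: str = "<html></html>"):
        self.html = html
        self.visited: list[str] = []

    async def goto(self, url: str) -> str:
        self.visited.append(url)
        return self.html

class DummyBrowser:
    """Fake browser: hands out a DummyPage instead of launching Chrome."""
    def __init__(self, page: DummyPage):
        self._page = page

    async def new_page(self) -> DummyPage:
        return self._page

async def demo() -> str:
    page = await DummyBrowser(DummyPage("<html>ok</html>")).new_page()
    return await page.goto("https://example.test/login")

# A smoke test asserts on outcomes (returned HTML, visited URLs),
# never on which methods were called in which order.
html = asyncio.run(demo())
```

Because the fakes keep real state (`visited`), tests can verify observable behavior without any patching or `MagicMock`.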
### Example Smoke Tests
- Minimal checks that the application starts and does not crash
- Verifying that a config file can be loaded without error
- Checking that a login page is reachable (but not performing a full login workflow)
### Why This Approach?
- Lower maintenance burden
- Contributors can understand and extend tests
- Quick CI feedback on whether the bot still runs at all
## Smoke Test Marking and Execution
### Marking Smoke Tests
- All smoke tests **must** be marked with `@pytest.mark.smoke`.
- Place smoke tests in `tests/smoke/` for discoverability.
- Example:
```python
import pytest

@pytest.mark.smoke
@pytest.mark.asyncio
async def test_bot_starts(smoke_bot):
    ...
```
### Running Tests
- **Canonical unified command:**
- `pdm run test` runs all tests in one invocation.
- Output is quiet by default.
- Coverage is enabled by default with `--cov-report=term-missing`.
- **Verbosity controls:**
- `pdm run test -v` enables verbose pytest output and durations.
- `pdm run test -vv` keeps pytest's second verbosity level and durations.
- **Split runs (targeted/stable):**
- `pdm run utest` runs only unit tests.
- `pdm run itest` runs only integration tests and stays serial (`-n 0`) for browser stability.
- `pdm run smoke` runs only smoke tests.
- Split runs also include coverage by default.
### Coverage
- Local and CI-facing public commands (`test`, `utest`, `itest`, `smoke`) always enable coverage.
- Default local report output remains `term-missing`.
- CI still uploads split XML coverage files (unit/integration/smoke) to Codecov using internal `ci:*` runner commands.
### Parallel Execution and Slow-Test Tracking
- `test`, `utest`, and `smoke` run with `-n auto`.
- `itest` runs with `-n 0` by design to avoid flaky browser parallelism.
- Verbose runs (`-v`, `-vv`, `-vvv`) report the slowest 25 tests (`--durations=25 --durations-min=0.5`), while quiet/default runs omit durations.
- Long-running scenarios are tagged with `@pytest.mark.slow` (smoke CLI checks and browser integrations). Keep them in CI, but skip locally via `pytest -m "not slow"` when you only need a quick signal.
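Tagging and deselecting slow tests looks like this (illustrative test names; the body of the slow test stands in for a long browser scenario):

```python
import pytest

@pytest.mark.slow
def test_full_publish_roundtrip():
    # Imagine a multi-minute browser scenario here.
    assert True

def test_quick_config_parse():
    # Fast check that still runs under `pytest -m "not slow"`.
    assert {"active": True}["active"] is True
```

Running `pytest -m "not slow"` deselects the first test and keeps the second, giving the quick local signal described above.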
### CI Test Order
- Split suites run in this order: unit, integration, smoke.
- Internal commands (`ci:coverage:prepare`, `ci:test:unit`, `ci:test:integration`, `ci:test:smoke`) are backed by `scripts/run_tests.py`.
- Coverage for each group is uploaded separately to Codecov (with flags: `unit-tests`, `integration-tests`, `smoke-tests`).
- This ensures that foundational failures are caught early and that test types are clearly separated.
### Adding New Smoke Tests
- Add new tests to `tests/smoke/` and mark them with `@pytest.mark.smoke`.
- Use fakes/dummies for browser and page dependencies (see `tests/conftest.py`).
- Focus on minimal, critical health checks, not full user workflows.
### Why This Structure?
- **Fast feedback:** Unit and integration tests catch most issues before running smoke tests.
- **Separation:** Unit, integration, and smoke tests are not polluted by each other.
- **Coverage clarity:** You can see which code paths are covered by each test type in Codecov.
See also: `pyproject.toml` for test script definitions and `.github/workflows/build.yml` for CI setup.
For contributor workflow, setup, and submission expectations, see `CONTRIBUTING.md`.
## Why Offer Both Unified and Split Runs?
### Unified Runs (Default)
- **Single summary:** See all failing tests in one run while developing locally.
- **Coverage included:** The default `pdm run test` command reports coverage without needing a second command.
- **Lower command overhead:** One pytest startup for the whole suite.
### Split Runs (CI and Targeted Debugging)
- **Fail-fast flow in CI:** Unit, integration, and smoke runs are executed in sequence for faster failure feedback.
- **Stable browser integrations:** `pdm run itest` keeps serial execution with `-n 0`.
- **Separate coverage uploads:** CI still uses per-group coverage files/flags for Codecov.
### Trade-off
- Unified default uses `-n auto`; this can increase integration-test flakiness compared to serial integration runs.
- When integration-test stability is a concern, run `pdm run itest` directly.

`docs/config.default.yaml` (new file):
# yaml-language-server: $schema=https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/main/schemas/config.schema.json
# glob (wildcard) patterns to select ad configuration files
# if relative paths are specified, then they are relative to this configuration file
ad_files:
  - ./**/ad_*.{json,yml,yaml}
# ################################################################################
# Default values for ads, can be overwritten in each ad configuration file
ad_defaults:
  # whether the ad should be published (false = skip this ad)
  active: true
  # type of the ad listing
  # Examples (choose one):
  # • OFFER
  # • WANTED
  type: OFFER
  # text to prepend to each ad (optional)
  description_prefix: ''
  # text to append to each ad (optional)
  description_suffix: ''
  # pricing strategy for the listing
  # Examples (choose one):
  # • FIXED
  # • NEGOTIABLE
  # • GIVE_AWAY
  # • NOT_APPLICABLE
  price_type: NEGOTIABLE
  # automatic price reduction configuration for reposted ads
  auto_price_reduction:
    # automatically lower the price of reposted ads
    enabled: false
    # reduction strategy (required when enabled: true). PERCENTAGE = % of price, FIXED = absolute amount
    # Examples (choose one):
    # • PERCENTAGE
    # • FIXED
    strategy:
    # reduction amount (required when enabled: true). For PERCENTAGE: use percent value (e.g., 10 = 10%). For FIXED: use currency amount
    # Examples (choose one):
    # • 10.0
    # • 5.0
    # • 20.0
    amount:
    # minimum price floor (required when enabled: true). Use 0 for no minimum
    # Examples (choose one):
    # • 1.0
    # • 5.0
    # • 10.0
    min_price:
    # number of reposts to wait before applying the first automatic price reduction
    delay_reposts: 0
    # number of days to wait after publication before applying automatic price reductions
    delay_days: 0
  # shipping method for the item
  # Examples (choose one):
  # • PICKUP
  # • SHIPPING
  # • NOT_APPLICABLE
  shipping_type: SHIPPING
  # enable direct purchase option (only works when shipping_type is SHIPPING)
  sell_directly: false
  # default image glob patterns (optional). Leave empty for no default images
  # Example usage:
  #   images:
  #     - "images/*.jpg"
  #     - "photos/*.{png,jpg}"
  images: []
  # default contact information for ads
  contact:
    # contact name displayed on the ad
    name: ''
    # street address for the listing
    street: ''
    # postal/ZIP code for the listing location
    zipcode: ''
    # city or locality of the listing (can include multiple districts)
    # Example: Sample Town - District One
    location: ''
    # phone number for contact - only available for commercial accounts, personal accounts no longer support this
    # Example: "01234 567890"
    phone: ''
  # number of days between automatic republication of ads
  republication_interval: 7
# ################################################################################
# additional name to category ID mappings (optional). Leave as {} if not needed.
# See the full list at: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/blob/main/src/kleinanzeigen_bot/resources/categories.yaml
# To add: use the format 'Category > Subcategory': 'ID'
# Examples (choose one):
# • "Elektronik > Notebooks": "161/278"
# • "Jobs > Praktika": "102/125"
categories: {}
# ################################################################################
download:
  # if true, all shipping options matching the package size will be included
  include_all_matching_shipping_options: false
  # shipping options to exclude (optional). Leave as [] to include all. Add items like 'DHL_2' to exclude specific carriers
  # Example usage:
  #   excluded_shipping_options:
  #     - "DHL_2"
  #     - "DHL_5"
  #     - "Hermes"
  excluded_shipping_options: []
  # maximum length for folder names when downloading ads (default: 100)
  folder_name_max_length: 100
  # if true, rename existing folders without titles to include titles (default: false)
  rename_existing_folders: false
# ################################################################################
publishing:
  # when to delete old versions of republished ads
  # Examples (choose one):
  # • BEFORE_PUBLISH
  # • AFTER_PUBLISH
  # • NEVER
  delete_old_ads: AFTER_PUBLISH
  # match old ads by title when deleting (only works with BEFORE_PUBLISH)
  delete_old_ads_by_title: true
# ################################################################################
# Browser configuration
browser:
  # additional Chromium command line switches (optional). Leave as [] for default behavior.
  # See https://peter.sh/experiments/chromium-command-line-switches/
  # Common: --headless (no GUI), --disable-dev-shm-usage (Docker fix), --user-data-dir=/path
  # Example usage:
  #   arguments:
  #     - "--headless"
  #     - "--disable-dev-shm-usage"
  #     - "--user-data-dir=/path/to/profile"
  arguments: []
  # path to custom browser executable (optional). Leave empty to use system default
  binary_location: ''
  # Chrome extensions to load (optional). Leave as [] for no extensions. Add .crx file paths relative to config file
  # Example usage:
  #   extensions:
  #     - "extensions/adblock.crx"
  #     - "/absolute/path/to/extension.crx"
  extensions: []
  # open browser in private/incognito mode (recommended to avoid cookie conflicts)
  use_private_window: true
  # custom browser profile directory (optional). Leave empty for auto-configured default
  user_data_dir: ''
  # browser profile name (optional). Leave empty for default profile
  # Example: "Profile 1"
  profile_name: ''
# ################################################################################
# Login credentials
login:
  # kleinanzeigen.de login email or username
  username: changeme
  # kleinanzeigen.de login password
  password: changeme
# ################################################################################
captcha:
  # if true, abort when captcha is detected and auto-retry after restart_delay (if false, wait for manual solving)
  auto_restart: false
  # duration to wait before retrying after captcha detection (e.g., 1h30m, 6h, 30m)
  # Examples (choose one):
  # • 6h
  # • 1h30m
  # • 30m
  restart_delay: 6h
# ################################################################################
# Update check configuration
update_check:
  # whether to check for updates on startup
  enabled: true
  # which release channel to check (latest = stable, preview = prereleases)
  # Examples (choose one):
  # • latest
  # • preview
  channel: latest
  # how often to check for updates (e.g., 7d, 1d).
  # If invalid, too short (<1d), or too long (>30d), uses defaults: 1d for 'preview' channel, 7d for 'latest' channel
  # Examples (choose one):
  # • 7d
  # • 1d
  # • 14d
  interval: 7d
# ################################################################################
# Centralized timeout configuration.
timeouts:
  # Global multiplier applied to all timeout values.
  multiplier: 1.0
  # Baseline timeout for DOM interactions.
  default: 5.0
  # Page load timeout for web_open.
  page_load: 15.0
  # Timeout for captcha iframe detection.
  captcha_detection: 2.0
  # Timeout for SMS verification prompts.
  sms_verification: 4.0
  # Timeout for email verification prompts.
  email_verification: 4.0
  # Timeout for GDPR/consent dialogs.
  gdpr_prompt: 10.0
  # Timeout for detecting existing login session via DOM elements.
  login_detection: 10.0
  # Timeout for publishing result checks.
  publishing_result: 300.0
  # Timeout for publish confirmation redirect.
  publishing_confirmation: 20.0
  # Timeout for image upload and server-side processing.
  image_upload: 30.0
  # Timeout for initial pagination lookup.
  pagination_initial: 10.0
  # Timeout for subsequent pagination navigation.
  pagination_follow_up: 5.0
  # Generic short timeout for transient UI.
  quick_dom: 2.0
  # Timeout for GitHub update checks.
  update_check: 10.0
  # Timeout for local remote-debugging probes.
  chrome_remote_probe: 2.0
  # Timeout for remote debugging API calls.
  chrome_remote_debugging: 5.0
  # Timeout for chrome --version subprocesses.
  chrome_binary_detection: 10.0
  # Enable built-in retry/backoff for DOM operations.
  retry_enabled: true
  # Max retry attempts when retry is enabled.
  retry_max_attempts: 2
  # Exponential factor applied per retry attempt.
  retry_backoff_factor: 1.5
# ################################################################################
# diagnostics capture configuration for troubleshooting
diagnostics:
  # Enable diagnostics capture for specific operations.
  capture_on:
    # Capture screenshot and HTML when login state detection fails
    login_detection: false
    # Capture screenshot, HTML, and JSON on publish failures
    publish: false
  # If true, copy the entire bot log file when diagnostics are captured (may duplicate log content).
  capture_log_copy: false
  # If true, pause (interactive runs only) after capturing login detection diagnostics so that the user can inspect the browser. Requires capture_on.login_detection to be enabled.
  pause_on_login_detection_failure: false
  # Optional output directory for diagnostics artifacts. If omitted, a safe default is used based on installation mode.
  output_dir:
  # If true, collect local timeout timing data and write it to diagnostics JSON for troubleshooting and tuning.
  timing_collection: true

`pdm.lock` (generated): diff suppressed because it is too large.

`pyinstaller.spec`:
```diff
@@ -10,7 +10,6 @@ from PyInstaller.utils.hooks import collect_data_files
 datas = [
     *collect_data_files("kleinanzeigen_bot"),  # embeds *.yaml files
-    *collect_data_files("selenium_stealth"),  # embeds *.js files
     # required to get version info via 'importlib.metadata.version(__package__)'
     # but we use https://backend.pdm-project.org/metadata/#writing-dynamic-version-to-file
@@ -20,32 +19,26 @@ datas = [
 excluded_modules = [
     "_aix_support",
     "argparse",
-    "backports",
     "bz2",
-    "cryptography.hazmat",
-    "distutils",
-    "doctest",
     "ftplib",
     "lzma",
-    "pep517",
-    "pdb",
-    "pip",
-    "pydoc",
-    "pydoc_data",
-    "optparse",
+    "mypy",  # wrongly included dev-dep
+    "rich",  # wrongly included dev-dep (transitive dep of pip-audit)
     "setuptools",
-    "six",
+    "smtplib",
     "statistics",
-    "test",
-    "unittest",
-    "xml.sax"
+    "toml",  # wrongly included dev-dep (transitive dep of pip-audit)
+    "tomllib",
+    "tracemalloc",
+    "xml.sax",
+    "xmlrpc"
 ]

 from sys import platform
 if platform != "darwin":
     excluded_modules.append("_osx_support")

-# https://github.com/pyinstaller/pyinstaller/blob/f563dce1e83fd5ec72a20dffd2ac24be3e647150/PyInstaller/building/build_main.py#L320
+# https://github.com/pyinstaller/pyinstaller/blob/adceeab4c2901fba853b29f9ae2db7bb67667030/PyInstaller/building/build_main.py#L399
 analysis = Analysis(
     ['src/kleinanzeigen_bot/__main__.py'],
     # pathex = [],
@@ -60,25 +53,26 @@ analysis = Analysis(
     # win_no_prefer_redirets = False,  # Deprecated
     # win_private_assemblies = False,  # Deprecated
     # noarchive = False,
-    # module_collection_mode = None
+    # module_collection_mode = None,
+    # optimize = -1
 )

-# https://github.com/pyinstaller/pyinstaller/blob/f563dce1e83fd5ec72a20dffd2ac24be3e647150/PyInstaller/building/api.py#L51
+# https://github.com/pyinstaller/pyinstaller/blob/adceeab4c2901fba853b29f9ae2db7bb67667030/PyInstaller/building/api.py#L52
 pyz = PYZ(
     analysis.pure,  # tocs
     analysis.zipped_data,
     # name = None
 )

-import shutil
+import os, shutil

-# https://github.com/pyinstaller/pyinstaller/blob/f563dce1e83fd5ec72a20dffd2ac24be3e647150/PyInstaller/building/api.py#L338
+# https://github.com/pyinstaller/pyinstaller/blob/adceeab4c2901fba853b29f9ae2db7bb67667030/PyInstaller/building/api.py#L363
 exe = EXE(pyz,
     analysis.scripts,
     analysis.binaries,
     analysis.datas,
     # bootloader_ignore_signals = False,
-    # console = True,
+    console = True,
     # hide_console = None,
     # disable_windowed_traceback = False,
     # debug = False,
@@ -95,7 +89,7 @@ exe = EXE(pyz,
     # contents_directory = "_internal",
     # using strip on windows results in "ImportError: Can't connect to HTTPS URL because the SSL module is not available."
     strip = not platform.startswith("win") and shutil.which("strip") is not None,
-    upx = shutil.which("upx") is not None,
+    upx = shutil.which("upx") is not None and not os.getenv("NO_UPX"),
     upx_exclude = [],
     runtime_tmpdir = None,
 )
```
View File

@@ -5,7 +5,7 @@
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/ # SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
# #
[build-system] # https://backend.pdm-project.org/ [build-system] # https://backend.pdm-project.org/
requires = ["pdm-backend"] requires = ["pdm-backend"]
build-backend = "pdm.backend" build-backend = "pdm.backend"
@@ -15,73 +15,124 @@ dynamic = ["version"]
description = "Command line tool to publish ads on kleinanzeigen.de" description = "Command line tool to publish ads on kleinanzeigen.de"
readme = "README.md" readme = "README.md"
authors = [ authors = [
{name = "sebthom", email = "sebthom@users.noreply.github.com"}, {name = "sebthom", email = "sebthom@users.noreply.github.com"},
] ]
license = {text = "AGPL-3.0-or-later"} license = {text = "AGPL-3.0-or-later"}
classifiers = [ # https://pypi.org/classifiers/ classifiers = [ # https://pypi.org/classifiers/
"Development Status :: 4 - Beta", "Private :: Do Not Upload",
"Environment :: Console",
"Operating System :: OS Independent",
"Intended Audience :: End Users/Desktop", "Development Status :: 5 - Production/Stable",
"Topic :: Office/Business", "Environment :: Console",
"Operating System :: OS Independent",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)", "Intended Audience :: End Users/Desktop",
"Programming Language :: Python :: 3.10" "Topic :: Office/Business",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10"
] ]
requires-python = ">=3.10,<3.13" # <3.12 required for pyinstaller requires-python = ">=3.10,<3.15"
dependencies = [ dependencies = [
"colorama~=0.4", "certifi",
"coloredlogs~=15.0", "colorama",
"overrides~=7.4", "jaraco.text", # required by pkg_resources during runtime
"ruamel.yaml~=0.18", "nodriver==0.47.*", # Pin to 0.47 until upstream fixes UTF-8 decoding issues introduced in 0.48
"pywin32==306; sys_platform == 'win32'", "platformdirs>=2.1.0",
"selenium~=4.18", "pydantic>=2.11.0",
"selenium_stealth~=1.0", "ruamel.yaml",
"wcmatch~=8.5", "psutil",
"wcmatch",
"sanitize-filename>=1.2.0",
]
[dependency-groups] # https://peps.python.org/pep-0735/
dev = [
"pip-audit",
"pytest>=8.3.4",
"pytest-asyncio>=0.25.3",
"pytest-xdist>=3.6.1",
"pytest-rerunfailures",
"pytest-cov>=6.0.0",
"ruff",
"mypy",
"basedpyright",
"autopep8",
"yamlfix",
"pyinstaller",
"types-requests>=2.32.0.20250515",
"pytest-mock>=3.14.0",
"jsonschema>=4.26.0",
] ]
[project.urls] [project.urls]
Homepage = "https://github.com/Second-Hand-Friends/kleinanzeigen-bot" Homepage = "https://github.com/Second-Hand-Friends/kleinanzeigen-bot"
Repository = "https://github.com/Second-Hand-Friends/kleinanzeigen-bot.git" Repository = "https://github.com/Second-Hand-Friends/kleinanzeigen-bot.git"
Documentation = "https://github.com/Second-Hand-Friends/kleinanzeigen-bot/README.md" Documentation = "https://github.com/Second-Hand-Friends/kleinanzeigen-bot/README.md"
Issues = "https://github.com/Second-Hand-Friends/kleinanzeigen-bot/issues" Issues = "https://github.com/Second-Hand-Friends/kleinanzeigen-bot/issues"
CI = "https://github.com/Second-Hand-Friends/kleinanzeigen-bot/actions"
##################### #####################
# pdm https://github.com/pdm-project/pdm/ # pdm https://github.com/pdm-project/pdm/
##################### #####################
[tool.pdm.version] # https://backend.pdm-project.org/metadata/#dynamic-project-version [tool.pdm.version] # https://backend.pdm-project.org/metadata/#dynamic-project-version
source = "call" source = "call"
getter = "version:get_version" getter = "version:get_version" # uses get_version() of <project_root>/version.py
write_to = "kleinanzeigen_bot/_version.py" write_to = "kleinanzeigen_bot/_version.py"
write_template = "__version__ = '{}'\n" write_template = "__version__ = '{}'\n"
[tool.pdm.scripts] # https://pdm-project.org/latest/usage/scripts/
app = "python -m kleinanzeigen_bot"
debug = "python -m pdb -m kleinanzeigen_bot"
# build & packaging
generate-schemas = "python scripts/generate_schemas.py"
generate-config = { shell = "python -c \"from pathlib import Path; Path('docs/config.default.yaml').unlink(missing_ok=True)\" && python -m kleinanzeigen_bot --config docs/config.default.yaml create-config" }
generate-artifacts = { composite = ["generate-schemas", "generate-config"] }
compile.cmd = "python -O -m PyInstaller pyinstaller.spec --clean --workpath .temp"
compile.env = {PYTHONHASHSEED = "1", SOURCE_DATE_EPOCH = "0"} # https://pyinstaller.org/en/stable/advanced-topics.html#creating-a-reproducible-build
deps = "pdm list --fields name,version,groups"
"deps:tree" = "pdm list --tree"
"deps:runtime" = "pdm list --fields name,version,groups --include default"
"deps:runtime:tree" = "pdm list --tree --include default"
# format & lint
format = { composite = ["format:py", "format:yaml"] }
"format:py" = { shell = "autopep8 --recursive --in-place scripts src tests --verbose && python scripts/post_autopep8.py scripts src tests" }
"format:yaml" = "yamlfix scripts/ src/ tests/"
lint = { composite = ["lint:ruff", "lint:mypy", "lint:pyright"] }
"lint:ruff" = "ruff check --preview"
"lint:mypy" = "mypy"
"lint:pyright" = "basedpyright"
"lint:fix" = {shell = "ruff check --preview --fix" }
# tests
# Public test commands
# - Coverage is enabled by default for all public profiles.
# - Quiet output is default; pass -v/-vv for more details and durations.
test = "python scripts/run_tests.py run test"
utest = "python scripts/run_tests.py run utest"
itest = "python scripts/run_tests.py run itest"
smoke = "python scripts/run_tests.py run smoke"
# CI/internal split coverage commands (for Codecov artifact uploads)
"ci:coverage:prepare" = "python scripts/run_tests.py ci-prepare"
"ci:test:unit" = "python scripts/run_tests.py ci-run --marker \"not itest and not smoke\" --coverage-file .temp/.coverage-unit.sqlite --xml-file .temp/coverage-unit.xml"
"ci:test:integration" = "python scripts/run_tests.py ci-run --marker \"itest and not smoke\" --coverage-file .temp/.coverage-itest.sqlite --xml-file .temp/coverage-integration.xml --workers 0"
"ci:test:smoke" = "python scripts/run_tests.py ci-run --marker smoke --coverage-file .temp/.coverage-smoke.sqlite --xml-file .temp/coverage-smoke.xml"
# Test script structure:
# - `scripts/run_tests.py` is the single implementation for public and CI test execution.
# - `test` is the canonical unified command.
# - Split groups (`utest`, `itest`, `smoke`) remain for targeted runs.
# - `itest` remains serial (-n 0) for browser stability.
# - CI uses `ci:*` commands for per-suite XML outputs consumed by Codecov.
#
# See docs/TESTING.md for more details.
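The `ci:*` commands above split the suite with pytest `-m` marker expressions such as `not itest and not smoke`. A rough stdlib sketch of how such an expression selects tests (pytest uses its own expression parser, not `eval`; the `matches` helper below is hypothetical and for illustration only):

```python
# Sketch of pytest "-m" marker-expression selection (illustrative only;
# pytest implements a dedicated parser rather than using eval()).

def matches(expr: str, markers: set[str]) -> bool:
    # Treat each known marker name as a boolean: True if the test carries it.
    names = {name: (name in markers) for name in ("itest", "smoke", "slow", "unit")}
    return bool(eval(expr, {"__builtins__": {}}, names))  # noqa: S307 - demo only

# A plain unit test (no markers) is picked up by the "ci:test:unit" expression:
assert matches("not itest and not smoke", set())
# An integration test is excluded there but selected by "ci:test:integration":
assert not matches("not itest and not smoke", {"itest"})
assert matches("itest and not smoke", {"itest"})
```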
#####################
[tool.autopep8]
max_line_length = 160
ignore = [ # https://github.com/hhatto/autopep8#features
"E124", # Don't change indentation of multi-line statements
"E128", # Don't change indentation of multi-line statements
"E231", # Don't add whitespace after colon (:) on type declaration
"E251", # Don't remove whitespace around parameter '=' sign.
"E401" # Don't put imports on separate lines
]
aggressive = 3
#####################
# ruff
# https://pypi.org/project/ruff/
# https://docs.astral.sh/ruff/configuration/
#####################
[tool.ruff]
cache-dir = ".temp/cache_ruff"
include = ["pyproject.toml", "scripts/**/*.py", "src/**/*.py", "tests/**/*.py"]
line-length = 160
indent-width = 4
target-version = "py310"
[tool.ruff.lint]
# https://docs.astral.sh/ruff/rules/
select = [
"A", # flake8-builtins
"ARG", # flake8-unused-arguments
"ANN", # flake8-annotations
"ASYNC", # flake8-async
#"BLE", # flake8-blind-except
"B", # flake8-bugbear
"C4", # flake8-comprehensions
"COM", # flake8-commas
"CPY", # flake8-copyright
"DTZ", # flake8-datetimez
#"EM", # flake8-errmsg
#"ERA", # eradicate commented-out code
"EXE", # flake8-executable
"FA", # flake8-future-annotations
"FBT", # flake8-boolean-trap
"FIX", # flake8-fixme
"G", # flake8-logging-format
"ICN", # flake8-import-conventions
"ISC", # flake8-implicit-str-concat
"INP", # flake8-no-pep420
"INT", # flake8-gettext
"LOG", # flake8-logging
"PIE", # flake8-pie
"PT", # flake8-pytest-style
#"PTH", # flake8-use-pathlib
"PYI", # flake8-pyi
"Q", # flake8-quotes
"RET", # flake8-return
"RSE", # flake8-raise
"S", # flake8-bandit
"SIM", # flake8-simplify
"SLF", # flake8-self
"SLOT", # flake8-slots
"T10", # flake8-debugger
#"T20", # flake8-print
"TC", # flake8-type-checking
"TD", # flake8-todo
"TID", # flake8-flake8-tidy-import
"YTT", # flake8-2020
"E", # pycodestyle-errors
"W", # pycodestyle-warnings
#"C90", # mccabe
"D", # pydocstyle
"F", # pyflakes
"FLY", # flynt
"I", # isort
"PERF", # perflint
"PGH", # pygrep-hooks
"PL", # pylint
]
ignore = [
"ANN401", # Dynamically typed expressions (typing.Any) are disallowed
"COM812", # Trailing comma missing
"D1", # Missing docstring in ...
"D200", # One-line docstring should fit on one line
"D202", # No blank lines allowed after function docstring (found 1)
"D203", # 1 blank line required before class docstring
"D204", # 1 blank line required after class docstring
"D205", # 1 blank line required between summary line and description
"D209", # Multi-line docstring closing quotes should be on a separate line"
"D212", # Multi-line docstring summary should start at the first line
"D213", # Multi-line docstring summary should start at the second line
"D400", # First line should end with a period
"D401", # First line of docstring should be in imperative mood
"D402", # First line should not be the function's signature
"D404", # First word of the docstring should not be "This"
"D413", # Missing blank line after last section ("Returns")"
"D415", # First line should end with a period, question mark, or exclamation point
"D417", # Missing argument description in the docstring for
#"E124", # Don't change indention of multi-line statements
#"E128", # Don't change indention of multi-line statements
"E231", # Don't add whitespace after colon (:) on type declaration
"E251", # Don't remove whitespace around parameter '=' sign.
"E401", # Don't put imports on separate lines
"FIX002", # Line contains TODO, consider resolving the issue
"PERF203", # `try`-`except` within a loop incurs performance overhead
"RET504", # Unnecessary assignment to `...` before `return` statement
"PLR6301", # Method `...` could be a function, class method, or static method
"PLR0913", # Too many arguments in function definition (needed to match parent signature)
"PYI041", # Use `float` instead of `int | float`
"SIM102", # Use a single `if` statement instead of nested `if` statements
"SIM105", # Use `contextlib.suppress(TimeoutError)` instead of `try`-`except`-`pass`
"SIM114", # Combine `if` branches using logical `or` operator
"TC006", # Add quotes to type expression in `typing.cast()`
"TD002", # Missing author in TODO
"TD003", # Missing issue link for this TODO
]
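As context for the `SIM105` entry above: the rule suggests `contextlib.suppress`, which is behaviorally equivalent to the `try`-`except`-`pass` form it would replace:

```python
# SIM105 (ignored above) rewrites try/except/pass into contextlib.suppress.
# Both forms below swallow the TimeoutError and continue.
import contextlib

results: list[str] = []

try:
    raise TimeoutError("slow")
except TimeoutError:
    pass
results.append("try/except survived")

with contextlib.suppress(TimeoutError):
    raise TimeoutError("slow")
results.append("suppress survived")

assert results == ["try/except survived", "suppress survived"]
```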
[tool.ruff.lint.per-file-ignores]
"scripts/**/*.py" = [
"INP001", # File `...` is part of an implicit namespace package. Add an `__init__.py`.
]
"tests/**/*.py" = [
"ARG",
"B",
"FBT",
"INP",
"SLF",
"S101", # Use of `assert` detected
"PLR0904", # Too many public methods (12 > 10)
"PLR2004", # Magic value used in comparison
]
[tool.ruff.lint.flake8-copyright]
notice-rgx = "SPDX-FileCopyrightText: .*"
min-file-size = 256
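The copyright check scans every file of at least `min-file-size` bytes for `notice-rgx`. A quick check of that regex against a sample header (the header text itself is illustrative, not taken from the repository):

```python
# Verifying the notice-rgx pattern from the flake8-copyright config above.
import re

NOTICE_RGX = r"SPDX-FileCopyrightText: .*"

sample_header = "# SPDX-FileCopyrightText: 2024 Jane Doe\n# SPDX-License-Identifier: MIT\n"
assert re.search(NOTICE_RGX, sample_header) is not None
assert re.search(NOTICE_RGX, "# no copyright notice here") is None
```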
[tool.ruff.lint.isort]
# combine-straight-imports = true # not (yet) supported by ruff
[tool.ruff.lint.pylint]
# https://pylint.pycqa.org/en/latest/user_guide/configuration/all-options.html#design-checker
# https://pylint.pycqa.org/en/latest/user_guide/checkers/features.html#design-checker-messages
max-args = 6 # max. number of args for function / method (R0913)
# max-attributes = 15 # TODO max. number of instance attrs for a class (R0902)
max-branches = 45 # max. number of branch for function / method body (R0912)
max-locals = 30 # max. number of local vars for function / method body (R0914)
max-returns = 15 # max. number of return / yield for function / method body (R0911)
max-statements = 150 # max. number of statements in function / method body (R0915)
max-public-methods = 25 # max. number of public methods for a class (R0904)
# max-positional-arguments = 5 # max. number of positional args for function / method (R0917)
#####################

[tool.mypy]
# https://mypy.readthedocs.io/en/stable/config_file.html
#mypy_path = "$MYPY_CONFIG_FILE_DIR/tests/stubs"
cache_dir = ".temp/cache_mypy"
python_version = "3.10"
files = "scripts,src,tests"
strict = true
disallow_untyped_calls = false
disallow_untyped_defs = true
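With `strict = true` but `disallow_untyped_calls = false`, annotated definitions are mandatory while calls into unannotated code remain allowed. A small sketch of that distinction (the code runs fine at runtime; the comments describe what mypy would report):

```python
# Under the mypy config above: defs must be annotated, but calling
# unannotated functions is still permitted.

def untyped(x):  # disallow_untyped_defs=true: mypy reports a missing annotation here
    return x * 2

def typed(x: int) -> int:       # accepted: fully annotated
    return untyped(x)           # accepted: disallow_untyped_calls=false

assert typed(21) == 42
```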
#####################
# basedpyright
# https://github.com/detachhead/basedpyright
#####################
[tool.basedpyright]
# https://docs.basedpyright.com/latest/configuration/config-files/
include = ["scripts", "src", "tests"]
defineConstant = { DEBUG = false }
pythonVersion = "3.10"
typeCheckingMode = "standard"
#####################
# pytest
# https://pypi.org/project/pytest/
#####################
[tool.pytest.ini_options]
cache_dir = ".temp/cache_pytest"
testpaths = [
"src",
"tests"
]
# https://docs.pytest.org/en/stable/reference.html#confval-addopts
addopts = """
--strict-markers
--doctest-modules
--cov=src/kleinanzeigen_bot
--cov-report=term-missing
"""
markers = [
"slow: marks a test as long running",
"smoke: marks a test as a high-level smoke test (critical path, no mocks)",
"itest: marks a test as an integration test (i.e. a test with external dependencies)",
"asyncio: mark test as async",
"unit: marks a test as a unit test"
]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
filterwarnings = [
"ignore:Exception ignored in:pytest.PytestUnraisableExceptionWarning",
"ignore::DeprecationWarning"
]
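The `filterwarnings` entries map directly onto Python's `warnings` filter syntax; `ignore::DeprecationWarning` behaves like this programmatic equivalent:

```python
# The pytest "ignore::DeprecationWarning" entry corresponds to a
# warnings.filterwarnings("ignore", category=DeprecationWarning) filter.
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    warnings.warn("old API", DeprecationWarning, stacklevel=2)   # suppressed
    warnings.warn("real problem", UserWarning, stacklevel=2)     # recorded

# Only the UserWarning gets through; the DeprecationWarning was filtered out.
assert [w.category for w in caught] == [UserWarning]
```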
[tool.coverage.run]
# https://coverage.readthedocs.io/en/latest/config.html#run
data_file = ".temp/coverage.sqlite"
branch = true # track branch coverage
relative_files = true
disable_warnings = ["no-data-collected"]
[tool.coverage.report]
precision = 2
show_missing = true
skip_covered = false
include = ["src/kleinanzeigen_bot/*"]
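`branch = true` records which branch directions were taken, not merely which lines ran. A tiny stdlib line tracer makes the difference visible (a much-simplified stand-in for what coverage.py does internally):

```python
# Rough illustration of line/branch measurement using sys.settrace (stdlib only).
import sys

def check(n: int) -> str:
    if n > 0:                    # offset 1
        return "positive"        # offset 2
    return "non-positive"        # offset 3 -- the untaken branch below

executed: set[int] = set()

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "check":
        executed.add(frame.f_lineno - frame.f_code.co_firstlineno)
    return tracer

sys.settrace(tracer)
check(5)             # exercises only the "if" branch
sys.settrace(None)

# The "non-positive" return never ran: branch coverage is incomplete.
assert 2 in executed and 3 not in executed
```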
#####################
# yamlfix
# https://lyz-code.github.io/yamlfix/
#####################
[tool.yamlfix]
allow_duplicate_keys = true
comments_min_spaces_from_content = 2
comments_require_starting_space = false # FIXME should be true but rule is buggy
comments_whitelines = 1
section_whitelines = 1
explicit_start = false
indentation = 2
line_length = 1024
preserve_quotes = true
quote_basic_values = false
quote_keys_and_basic_values = false
quote_representation = '"'
whitelines = 1

schemas/ad.schema.json (new file, 450 lines)
{
"$defs": {
"AutoPriceReductionConfig": {
"properties": {
"enabled": {
"default": false,
"description": "automatically lower the price of reposted ads",
"title": "Enabled",
"type": "boolean"
},
"strategy": {
"anyOf": [
{
"enum": [
"FIXED",
"PERCENTAGE"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "reduction strategy (required when enabled: true). PERCENTAGE = % of price, FIXED = absolute amount",
"examples": [
"PERCENTAGE",
"FIXED"
],
"title": "Strategy"
},
"amount": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "reduction amount (required when enabled: true). For PERCENTAGE: use percent value (e.g., 10 = 10%%). For FIXED: use currency amount",
"examples": [
10.0,
5.0,
20.0
],
"title": "Amount"
},
"min_price": {
"anyOf": [
{
"minimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "minimum price floor (required when enabled: true). Use 0 for no minimum",
"examples": [
1.0,
5.0,
10.0
],
"title": "Min Price"
},
"delay_reposts": {
"default": 0,
"description": "number of reposts to wait before applying the first automatic price reduction",
"minimum": 0,
"title": "Delay Reposts",
"type": "integer"
},
"delay_days": {
"default": 0,
"description": "number of days to wait after publication before applying automatic price reductions",
"minimum": 0,
"title": "Delay Days",
"type": "integer"
}
},
"title": "AutoPriceReductionConfig",
"type": "object"
},
"ContactPartial": {
"properties": {
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Name"
},
"street": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Street"
},
"zipcode": {
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Zipcode"
},
"location": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Location"
},
"phone": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Phone"
}
},
"title": "ContactPartial",
"type": "object"
}
},
"properties": {
"active": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "null"
}
],
"default": null,
"title": "Active"
},
"type": {
"anyOf": [
{
"enum": [
"OFFER",
"WANTED"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Type"
},
"title": {
"minLength": 10,
"title": "Title",
"type": "string"
},
"description": {
"title": "Description",
"type": "string"
},
"description_prefix": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Description Prefix"
},
"description_suffix": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Description Suffix"
},
"category": {
"title": "Category",
"type": "string"
},
"special_attributes": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"title": "Special Attributes"
},
"price": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Price"
},
"price_type": {
"anyOf": [
{
"enum": [
"FIXED",
"NEGOTIABLE",
"GIVE_AWAY",
"NOT_APPLICABLE"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Price Type"
},
"auto_price_reduction": {
"anyOf": [
{
"$ref": "#/$defs/AutoPriceReductionConfig"
},
{
"type": "null"
}
],
"default": null,
"description": "automatic price reduction configuration"
},
"repost_count": {
"default": 0,
"description": "number of successful publications for this ad (persisted between runs)",
"minimum": 0,
"title": "Repost Count",
"type": "integer"
},
"price_reduction_count": {
"default": 0,
"description": "internal counter: number of automatic price reductions already applied",
"minimum": 0,
"title": "Price Reduction Count",
"type": "integer"
},
"shipping_type": {
"anyOf": [
{
"enum": [
"PICKUP",
"SHIPPING",
"NOT_APPLICABLE"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Shipping Type"
},
"shipping_costs": {
"anyOf": [
{
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"title": "Shipping Costs"
},
"shipping_options": {
"anyOf": [
{
"items": {
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"title": "Shipping Options"
},
"sell_directly": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "null"
}
],
"default": null,
"title": "Sell Directly"
},
"images": {
"anyOf": [
{
"items": {
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"title": "Images"
},
"contact": {
"anyOf": [
{
"$ref": "#/$defs/ContactPartial"
},
{
"type": "null"
}
],
"default": null
},
"republication_interval": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Republication Interval"
},
"id": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"title": "Id"
},
"created_on": {
"anyOf": [
{
"type": "null"
},
{
"pattern": "^\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}(?:\\.\\d{1,6})?(?:Z|[+-]\\d{2}:\\d{2})?$",
"type": "string"
}
],
"default": null,
"description": "ISO-8601 timestamp with optional timezone (e.g. 2024-12-25T00:00:00 or 2024-12-25T00:00:00Z)",
"title": "Created On"
},
"updated_on": {
"anyOf": [
{
"type": "null"
},
{
"pattern": "^\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}(?:\\.\\d{1,6})?(?:Z|[+-]\\d{2}:\\d{2})?$",
"type": "string"
}
],
"default": null,
"description": "ISO-8601 timestamp with optional timezone (e.g. 2024-12-25T00:00:00 or 2024-12-25T00:00:00Z)",
"title": "Updated On"
},
"content_hash": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Content Hash"
}
},
"required": [
"title",
"description",
"category"
],
"title": "AdPartial",
"type": "object",
"description": "Auto-generated JSON Schema for Ad"
}
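The `AutoPriceReductionConfig` definition above implies a small piece of arithmetic: a percentage or fixed reduction, clamped to the `min_price` floor. A hedged sketch of that logic (the function name and exact rounding are assumptions; the bot's actual implementation may differ):

```python
# Illustrative sketch of the price-reduction arithmetic described by
# AutoPriceReductionConfig; the real bot code may differ in details.

def reduce_price(price: float, *, strategy: str, amount: float, min_price: float) -> float:
    if strategy == "PERCENTAGE":       # amount is a percentage of the current price
        new_price = price * (1 - amount / 100)
    elif strategy == "FIXED":          # amount is an absolute currency value
        new_price = price - amount
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return max(new_price, min_price)   # never drop below the configured floor

assert reduce_price(100, strategy="PERCENTAGE", amount=10, min_price=0) == 90
assert reduce_price(100, strategy="FIXED", amount=30, min_price=80) == 80
```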

schemas/config.schema.json (new file, 817 lines)
{
"$defs": {
"AdDefaults": {
"properties": {
"active": {
"default": true,
"description": "whether the ad should be published (false = skip this ad)",
"title": "Active",
"type": "boolean"
},
"type": {
"default": "OFFER",
"description": "type of the ad listing",
"enum": [
"OFFER",
"WANTED"
],
"examples": [
"OFFER",
"WANTED"
],
"title": "Type",
"type": "string"
},
"description": {
"anyOf": [
{
"$ref": "#/$defs/DescriptionAffixes"
},
{
"type": "null"
}
],
"default": null,
"description": "DEPRECATED: Use description_prefix/description_suffix instead"
},
"description_prefix": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": "",
"description": "text to prepend to each ad (optional)",
"title": "Description Prefix"
},
"description_suffix": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": "",
"description": "text to append to each ad (optional)",
"title": "Description Suffix"
},
"price_type": {
"default": "NEGOTIABLE",
"description": "pricing strategy for the listing",
"enum": [
"FIXED",
"NEGOTIABLE",
"GIVE_AWAY",
"NOT_APPLICABLE"
],
"examples": [
"FIXED",
"NEGOTIABLE",
"GIVE_AWAY",
"NOT_APPLICABLE"
],
"title": "Price Type",
"type": "string"
},
"auto_price_reduction": {
"$ref": "#/$defs/AutoPriceReductionConfig",
"description": "automatic price reduction configuration for reposted ads"
},
"shipping_type": {
"default": "SHIPPING",
"description": "shipping method for the item",
"enum": [
"PICKUP",
"SHIPPING",
"NOT_APPLICABLE"
],
"examples": [
"PICKUP",
"SHIPPING",
"NOT_APPLICABLE"
],
"title": "Shipping Type",
"type": "string"
},
"sell_directly": {
"default": false,
"description": "enable direct purchase option (only works when shipping_type is SHIPPING)",
"title": "Sell Directly",
"type": "boolean"
},
"images": {
"anyOf": [
{
"items": {
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"description": "default image glob patterns (optional). Leave empty for no default images",
"examples": [
"\"images/*.jpg\"",
"\"photos/*.{png,jpg}\""
],
"title": "Images"
},
"contact": {
"$ref": "#/$defs/ContactDefaults",
"description": "default contact information for ads"
},
"republication_interval": {
"default": 7,
"description": "number of days between automatic republication of ads",
"title": "Republication Interval",
"type": "integer"
}
},
"title": "AdDefaults",
"type": "object"
},
"AutoPriceReductionConfig": {
"properties": {
"enabled": {
"default": false,
"description": "automatically lower the price of reposted ads",
"title": "Enabled",
"type": "boolean"
},
"strategy": {
"anyOf": [
{
"enum": [
"FIXED",
"PERCENTAGE"
],
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "reduction strategy (required when enabled: true). PERCENTAGE = % of price, FIXED = absolute amount",
"examples": [
"PERCENTAGE",
"FIXED"
],
"title": "Strategy"
},
"amount": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "reduction amount (required when enabled: true). For PERCENTAGE: use percent value (e.g., 10 = 10%%). For FIXED: use currency amount",
"examples": [
10.0,
5.0,
20.0
],
"title": "Amount"
},
"min_price": {
"anyOf": [
{
"minimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "minimum price floor (required when enabled: true). Use 0 for no minimum",
"examples": [
1.0,
5.0,
10.0
],
"title": "Min Price"
},
"delay_reposts": {
"default": 0,
"description": "number of reposts to wait before applying the first automatic price reduction",
"minimum": 0,
"title": "Delay Reposts",
"type": "integer"
},
"delay_days": {
"default": 0,
"description": "number of days to wait after publication before applying automatic price reductions",
"minimum": 0,
"title": "Delay Days",
"type": "integer"
}
},
"title": "AutoPriceReductionConfig",
"type": "object"
},
"BrowserConfig": {
"properties": {
"arguments": {
"description": "additional Chromium command line switches (optional). Leave as [] for default behavior. See https://peter.sh/experiments/chromium-command-line-switches/ Common: --headless (no GUI), --disable-dev-shm-usage (Docker fix), --user-data-dir=/path",
"examples": [
"\"--headless\"",
"\"--disable-dev-shm-usage\"",
"\"--user-data-dir=/path/to/profile\""
],
"items": {
"type": "string"
},
"title": "Arguments",
"type": "array"
},
"binary_location": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": "",
"description": "path to custom browser executable (optional). Leave empty to use system default",
"title": "Binary Location"
},
"extensions": {
"description": "Chrome extensions to load (optional). Leave as [] for no extensions. Add .crx file paths relative to config file",
"examples": [
"\"extensions/adblock.crx\"",
"\"/absolute/path/to/extension.crx\""
],
"items": {
"type": "string"
},
"title": "Extensions",
"type": "array"
},
"use_private_window": {
"default": true,
"description": "open browser in private/incognito mode (recommended to avoid cookie conflicts)",
"title": "Use Private Window",
"type": "boolean"
},
"user_data_dir": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": "",
"description": "custom browser profile directory (optional). Leave empty for auto-configured default",
"title": "User Data Dir"
},
"profile_name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": "",
"description": "browser profile name (optional). Leave empty for default profile",
"examples": [
"\"Profile 1\""
],
"title": "Profile Name"
}
},
"title": "BrowserConfig",
"type": "object"
},
"CaptchaConfig": {
"properties": {
"auto_restart": {
"default": false,
"description": "if true, abort when captcha is detected and auto-retry after restart_delay (if false, wait for manual solving)",
"title": "Auto Restart",
"type": "boolean"
},
"restart_delay": {
"default": "6h",
"description": "duration to wait before retrying after captcha detection (e.g., 1h30m, 6h, 30m)",
"examples": [
"6h",
"1h30m",
"30m"
],
"title": "Restart Delay",
"type": "string"
}
},
"title": "CaptchaConfig",
"type": "object"
},
"CaptureOnConfig": {
"description": "Configuration for which operations should trigger diagnostics capture.",
"properties": {
"login_detection": {
"default": false,
"description": "Capture screenshot and HTML when login state detection fails",
"title": "Login Detection",
"type": "boolean"
},
"publish": {
"default": false,
"description": "Capture screenshot, HTML, and JSON on publish failures",
"title": "Publish",
"type": "boolean"
}
},
"title": "CaptureOnConfig",
"type": "object"
},
"ContactDefaults": {
"properties": {
"name": {
"default": "",
"description": "contact name displayed on the ad",
"title": "Name",
"type": "string"
},
"street": {
"default": "",
"description": "street address for the listing",
"title": "Street",
"type": "string"
},
"zipcode": {
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"default": "",
"description": "postal/ZIP code for the listing location",
"title": "Zipcode"
},
"location": {
"default": "",
"description": "city or locality of the listing (can include multiple districts)",
"examples": [
"Sample Town - District One"
],
"title": "Location",
"type": "string"
},
"phone": {
"default": "",
"description": "phone number for contact - only available for commercial accounts, personal accounts no longer support this",
"examples": [
"\"01234 567890\""
],
"title": "Phone",
"type": "string"
}
},
"title": "ContactDefaults",
"type": "object"
},
"DescriptionAffixes": {
"deprecated": true,
"properties": {
"prefix": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "text to prepend to the ad description (deprecated, use description_prefix)",
"title": "Prefix"
},
"suffix": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "text to append to the ad description (deprecated, use description_suffix)",
"title": "Suffix"
}
},
"title": "DescriptionAffixes",
"type": "object"
},
"DiagnosticsConfig": {
"properties": {
"capture_on": {
"$ref": "#/$defs/CaptureOnConfig",
"description": "Enable diagnostics capture for specific operations."
},
"capture_log_copy": {
"default": false,
"description": "If true, copy the entire bot log file when diagnostics are captured (may duplicate log content).",
"title": "Capture Log Copy",
"type": "boolean"
},
"pause_on_login_detection_failure": {
"default": false,
"description": "If true, pause (interactive runs only) after capturing login detection diagnostics so that user can inspect the browser. Requires capture_on.login_detection to be enabled.",
"title": "Pause On Login Detection Failure",
"type": "boolean"
},
"output_dir": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Optional output directory for diagnostics artifacts. If omitted, a safe default is used based on installation mode.",
"title": "Output Dir"
},
"timing_collection": {
"default": true,
"description": "If true, collect local timeout timing data and write it to diagnostics JSON for troubleshooting and tuning.",
"title": "Timing Collection",
"type": "boolean"
}
},
"title": "DiagnosticsConfig",
"type": "object"
},
"DownloadConfig": {
"properties": {
"include_all_matching_shipping_options": {
"default": false,
"description": "if true, all shipping options matching the package size will be included",
"title": "Include All Matching Shipping Options",
"type": "boolean"
},
"excluded_shipping_options": {
"description": "shipping options to exclude (optional). Leave as [] to include all. Add items like 'DHL_2' to exclude specific carriers",
"examples": [
"\"DHL_2\"",
"\"DHL_5\"",
"\"Hermes\""
],
"items": {
"type": "string"
},
"title": "Excluded Shipping Options",
"type": "array"
},
"folder_name_max_length": {
"default": 100,
"description": "maximum length for folder names when downloading ads (default: 100)",
"maximum": 255,
"minimum": 10,
"title": "Folder Name Max Length",
"type": "integer"
},
"rename_existing_folders": {
"default": false,
"description": "if true, rename existing folders without titles to include titles (default: false)",
"title": "Rename Existing Folders",
"type": "boolean"
}
},
"title": "DownloadConfig",
"type": "object"
},
"LoginConfig": {
"properties": {
"username": {
"description": "kleinanzeigen.de login email or username",
"minLength": 1,
"title": "Username",
"type": "string"
},
"password": {
"description": "kleinanzeigen.de login password",
"minLength": 1,
"title": "Password",
"type": "string"
}
},
"required": [
"username",
"password"
],
"title": "LoginConfig",
"type": "object"
},
"PublishingConfig": {
"properties": {
"delete_old_ads": {
"anyOf": [
{
"enum": [
"BEFORE_PUBLISH",
"AFTER_PUBLISH",
"NEVER"
],
"type": "string"
},
{
"type": "null"
}
],
"default": "AFTER_PUBLISH",
"description": "when to delete old versions of republished ads",
"examples": [
"BEFORE_PUBLISH",
"AFTER_PUBLISH",
"NEVER"
],
"title": "Delete Old Ads"
},
"delete_old_ads_by_title": {
"default": true,
"description": "match old ads by title when deleting (only works with BEFORE_PUBLISH)",
"title": "Delete Old Ads By Title",
"type": "boolean"
}
},
"title": "PublishingConfig",
"type": "object"
},
"TimeoutConfig": {
"properties": {
"multiplier": {
"default": 1.0,
"description": "Global multiplier applied to all timeout values.",
"minimum": 0.1,
"title": "Multiplier",
"type": "number"
},
"default": {
"type": "number",
"minimum": 0.0,
"default": 5.0,
"description": "Baseline timeout for DOM interactions.",
"title": "Default"
},
"page_load": {
"default": 15.0,
"description": "Page load timeout for web_open.",
"minimum": 1.0,
"title": "Page Load",
"type": "number"
},
"captcha_detection": {
"default": 2.0,
"description": "Timeout for captcha iframe detection.",
"minimum": 0.1,
"title": "Captcha Detection",
"type": "number"
},
"sms_verification": {
"default": 4.0,
"description": "Timeout for SMS verification prompts.",
"minimum": 0.1,
"title": "Sms Verification",
"type": "number"
},
"email_verification": {
"default": 4.0,
"description": "Timeout for email verification prompts.",
"minimum": 0.1,
"title": "Email Verification",
"type": "number"
},
"gdpr_prompt": {
"default": 10.0,
"description": "Timeout for GDPR/consent dialogs.",
"minimum": 1.0,
"title": "Gdpr Prompt",
"type": "number"
},
"login_detection": {
"default": 10.0,
"description": "Timeout for detecting existing login session via DOM elements.",
"minimum": 1.0,
"title": "Login Detection",
"type": "number"
},
"publishing_result": {
"default": 300.0,
"description": "Timeout for publishing result checks.",
"minimum": 10.0,
"title": "Publishing Result",
"type": "number"
},
"publishing_confirmation": {
"default": 20.0,
"description": "Timeout for publish confirmation redirect.",
"minimum": 1.0,
"title": "Publishing Confirmation",
"type": "number"
},
"image_upload": {
"default": 30.0,
"description": "Timeout for image upload and server-side processing.",
"minimum": 5.0,
"title": "Image Upload",
"type": "number"
},
"pagination_initial": {
"default": 10.0,
"description": "Timeout for initial pagination lookup.",
"minimum": 1.0,
"title": "Pagination Initial",
"type": "number"
},
"pagination_follow_up": {
"default": 5.0,
"description": "Timeout for subsequent pagination navigation.",
"minimum": 1.0,
"title": "Pagination Follow Up",
"type": "number"
},
"quick_dom": {
"default": 2.0,
"description": "Generic short timeout for transient UI.",
"minimum": 0.1,
"title": "Quick Dom",
"type": "number"
},
"update_check": {
"default": 10.0,
"description": "Timeout for GitHub update checks.",
"minimum": 1.0,
"title": "Update Check",
"type": "number"
},
"chrome_remote_probe": {
"default": 2.0,
"description": "Timeout for local remote-debugging probes.",
"minimum": 0.1,
"title": "Chrome Remote Probe",
"type": "number"
},
"chrome_remote_debugging": {
"default": 5.0,
"description": "Timeout for remote debugging API calls.",
"minimum": 1.0,
"title": "Chrome Remote Debugging",
"type": "number"
},
"chrome_binary_detection": {
"default": 10.0,
"description": "Timeout for chrome --version subprocesses.",
"minimum": 1.0,
"title": "Chrome Binary Detection",
"type": "number"
},
"retry_enabled": {
"default": true,
"description": "Enable built-in retry/backoff for DOM operations.",
"title": "Retry Enabled",
"type": "boolean"
},
"retry_max_attempts": {
"default": 2,
"description": "Max retry attempts when retry is enabled.",
"minimum": 1,
"title": "Retry Max Attempts",
"type": "integer"
},
"retry_backoff_factor": {
"default": 1.5,
"description": "Exponential factor applied per retry attempt.",
"minimum": 1.0,
"title": "Retry Backoff Factor",
"type": "number"
}
},
"title": "TimeoutConfig",
"type": "object"
},
"UpdateCheckConfig": {
"properties": {
"enabled": {
"default": true,
"description": "whether to check for updates on startup",
"title": "Enabled",
"type": "boolean"
},
"channel": {
"default": "latest",
"description": "which release channel to check (latest = stable, preview = prereleases)",
"enum": [
"latest",
"preview"
],
"examples": [
"latest",
"preview"
],
"title": "Channel",
"type": "string"
},
"interval": {
"default": "7d",
"description": "how often to check for updates (e.g., 7d, 1d). If invalid, too short (<1d), or too long (>30d), uses defaults: 1d for 'preview' channel, 7d for 'latest' channel",
"examples": [
"7d",
"1d",
"14d"
],
"title": "Interval",
"type": "string"
}
},
"title": "UpdateCheckConfig",
"type": "object"
}
},
"properties": {
"ad_files": {
"default": [
"./**/ad_*.{json,yml,yaml}"
],
"description": "\nglob (wildcard) patterns to select ad configuration files\nif relative paths are specified, then they are relative to this configuration file\n",
"items": {
"type": "string"
},
"minItems": 1,
"title": "Ad Files",
"type": "array"
},
"ad_defaults": {
"$ref": "#/$defs/AdDefaults",
"description": "Default values for ads, can be overwritten in each ad configuration file"
},
"categories": {
"additionalProperties": {
"type": "string"
},
"description": "additional name to category ID mappings (optional). Leave as {} if not needed. See full list at: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/blob/main/src/kleinanzeigen_bot/resources/categories.yaml To add: use format 'Category > Subcategory': 'ID'",
"examples": [
"\"Elektronik > Notebooks\": \"161/278\"",
"\"Jobs > Praktika\": \"102/125\""
],
"title": "Categories",
"type": "object"
},
"download": {
"$ref": "#/$defs/DownloadConfig"
},
"publishing": {
"$ref": "#/$defs/PublishingConfig"
},
"browser": {
"$ref": "#/$defs/BrowserConfig",
"description": "Browser configuration"
},
"login": {
"$ref": "#/$defs/LoginConfig",
"description": "Login credentials"
},
"captcha": {
"$ref": "#/$defs/CaptchaConfig"
},
"update_check": {
"$ref": "#/$defs/UpdateCheckConfig",
"description": "Update check configuration"
},
"timeouts": {
"$ref": "#/$defs/TimeoutConfig",
"description": "Centralized timeout configuration."
},
"diagnostics": {
"$ref": "#/$defs/DiagnosticsConfig",
"description": "diagnostics capture configuration for troubleshooting"
}
},
"title": "Config",
"type": "object",
"description": "Auto-generated JSON Schema for Config"
}
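The `TimeoutConfig` definition above centralizes every wait behind one `multiplier` field ("Global multiplier applied to all timeout values"). A minimal sketch of that scaling behavior; the `Timeouts` class and `effective` helper here are illustrative only, not the bot's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Timeouts:
    # Defaults mirror the schema above.
    multiplier: float = 1.0   # global multiplier applied to all timeout values
    default: float = 5.0      # baseline timeout for DOM interactions
    page_load: float = 15.0   # page load timeout for web_open

    def effective(self, name: str) -> float:
        # Each configured baseline is scaled by the global multiplier.
        return getattr(self, name) * self.multiplier

timeouts = Timeouts(multiplier = 2.0)
print(timeouts.effective("page_load"))  # 30.0
```

Raising `multiplier` on a slow machine stretches all waits uniformly without retuning each individual value.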


@@ -0,0 +1,143 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""CI guard: verifies generated schema and default-config artifacts are up-to-date."""
from __future__ import annotations
import difflib
import subprocess # noqa: S404
import sys
import tempfile
from pathlib import Path
from typing import TYPE_CHECKING, Final
from schema_utils import generate_schema_content
from kleinanzeigen_bot.model.ad_model import AdPartial
from kleinanzeigen_bot.model.config_model import Config
if TYPE_CHECKING:
from pydantic import BaseModel
SCHEMA_DEFINITIONS:Final[tuple[tuple[str, type[BaseModel], str], ...]] = (
("schemas/config.schema.json", Config, "Config"),
("schemas/ad.schema.json", AdPartial, "Ad"),
)
DEFAULT_CONFIG_PATH:Final[Path] = Path("docs/config.default.yaml")
def generate_default_config_via_cli(path:Path, repo_root:Path) -> None:
"""
Run `python -m kleinanzeigen_bot --config <path> create-config` to generate a default config snapshot.
"""
try:
subprocess.run( # noqa: S603 trusted, static command arguments
[
sys.executable,
"-m",
"kleinanzeigen_bot",
"--config",
str(path),
"create-config",
],
cwd = repo_root,
check = True,
timeout = 60,
capture_output = True,
text = True,
)
except subprocess.CalledProcessError as error:
stderr = error.stderr.strip() if error.stderr else "<empty>"
stdout = error.stdout.strip() if error.stdout else "<empty>"
raise RuntimeError(
"Failed to generate default config via CLI.\n"
f"Return code: {error.returncode}\n"
f"stderr:\n{stderr}\n"
f"stdout:\n{stdout}"
) from error
def get_schema_diffs(repo_root:Path) -> dict[str, str]:
"""
Compare committed schema files with freshly generated schema content and return unified diffs per path.
"""
diffs:dict[str, str] = {}
for schema_path, model, schema_name in SCHEMA_DEFINITIONS:
expected_schema_path = repo_root / schema_path
expected = expected_schema_path.read_text(encoding = "utf-8") if expected_schema_path.is_file() else ""
generated = generate_schema_content(model, schema_name)
if expected == generated:
continue
diffs[schema_path] = "".join(
difflib.unified_diff(
expected.splitlines(keepends = True),
generated.splitlines(keepends = True),
fromfile = schema_path,
tofile = f"<generated via: {model.__name__}.model_json_schema>",
)
)
return diffs
def get_default_config_diff(repo_root:Path) -> str:
"""
Compare docs/config.default.yaml with a freshly generated config artifact and return a unified diff string.
"""
expected_config_path = repo_root / DEFAULT_CONFIG_PATH
if not expected_config_path.is_file():
raise FileNotFoundError(f"Missing required default config file: {DEFAULT_CONFIG_PATH}")
with tempfile.TemporaryDirectory() as tmpdir:
generated_config_path = Path(tmpdir) / "config.default.yaml"
generate_default_config_via_cli(generated_config_path, repo_root)
expected = expected_config_path.read_text(encoding = "utf-8")
generated = generated_config_path.read_text(encoding = "utf-8")
if expected == generated:
return ""
return "".join(
difflib.unified_diff(
expected.splitlines(keepends = True),
generated.splitlines(keepends = True),
fromfile = str(DEFAULT_CONFIG_PATH),
tofile = "<generated via: python -m kleinanzeigen_bot --config /path/to/config.default.yaml create-config>",
)
)
def main() -> None:
repo_root = Path(__file__).resolve().parent.parent
schema_diffs = get_schema_diffs(repo_root)
default_config_diff = get_default_config_diff(repo_root)
if schema_diffs or default_config_diff:
messages:list[str] = ["Generated artifacts are not up-to-date."]
if schema_diffs:
messages.append("Outdated schema files detected:")
for path, schema_diff in schema_diffs.items():
messages.append(f"- {path}")
messages.append(schema_diff)
if default_config_diff:
messages.append("Outdated docs/config.default.yaml detected.")
messages.append(default_config_diff)
messages.append("Regenerate with one of the following:")
messages.append("- Schema files: pdm run generate-schemas")
messages.append("- Default config snapshot: pdm run generate-config")
messages.append("- Both: pdm run generate-artifacts")
raise SystemExit("\n".join(messages))
print("Generated schemas and docs/config.default.yaml are up-to-date.")
if __name__ == "__main__":
main()
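The guard above relies on `difflib.unified_diff` to report drift between committed and freshly generated artifacts. A standalone sketch of the diff output it assembles (the file names are illustrative):

```python
import difflib

expected = '{\n  "a": 1\n}\n'
generated = '{\n  "a": 2\n}\n'

# Same call shape as get_schema_diffs: keepends=True preserves the
# original line terminators inside the diff body.
diff = "".join(difflib.unified_diff(
    expected.splitlines(keepends = True),
    generated.splitlines(keepends = True),
    fromfile = "schemas/config.schema.json",
    tofile = "<generated via: Config.model_json_schema>",
))
print(diff)
```

For equal inputs `unified_diff` yields nothing, so joining it produces an empty string, which is why the guard can simply `continue` past up-to-date files.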


@@ -0,0 +1,35 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from pathlib import Path
from pydantic import BaseModel
from schema_utils import generate_schema_content
from kleinanzeigen_bot.model.ad_model import AdPartial
from kleinanzeigen_bot.model.config_model import Config
def generate_schema(model:type[BaseModel], name:str, out_dir:Path) -> None:
"""
Generate and write JSON schema for the given model.
"""
print(f"[+] Generating schema for model [{name}]...")
schema_content = generate_schema_content(model, name)
# Write JSON
json_path = out_dir / f"{name.lower()}.schema.json"
with json_path.open("w", encoding = "utf-8") as json_file:
json_file.write(schema_content)
print(f"[OK] {json_path}")
project_root = Path(__file__).parent.parent
out_dir = project_root / "schemas"
out_dir.mkdir(parents = True, exist_ok = True)
print(f"Generating schemas in: {out_dir.resolve()}")
generate_schema(Config, "Config", out_dir)
generate_schema(AdPartial, "Ad", out_dir)
print("All schemas generated successfully.")

scripts/post_autopep8.py Normal file

@@ -0,0 +1,317 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import ast, logging, re, sys # isort: skip
from pathlib import Path
from typing import Final, List, Protocol, Tuple
from typing_extensions import override
# Configure basic logging
logging.basicConfig(level = logging.INFO, format = "%(levelname)s: %(message)s")
LOG:Final[logging.Logger] = logging.getLogger(__name__)
class FormatterRule(Protocol):
"""
A code processor that can modify source lines based on the AST.
"""
def apply(self, tree:ast.AST, lines:List[str], path:Path) -> List[str]:
raise NotImplementedError
class NoSpaceAfterColonInTypeAnnotationRule(FormatterRule):
"""
Removes whitespace between the colon (:) and the type annotation in variable and function parameter declarations.
This rule enforces `a:int` instead of `a: int`.
It is the opposite behavior of autopep8 rule E231.
Example:
# Before
def foo(a: int, b : str) -> None:
pass
# After
def foo(a:int, b:str) -> None:
pass
"""
@override
def apply(self, tree:ast.AST, lines:List[str], path:Path) -> List[str]:
ann_positions:List[Tuple[int, int]] = []
for node in ast.walk(tree):
if isinstance(node, ast.arg) and node.annotation is not None:
ann_positions.append((node.annotation.lineno - 1, node.annotation.col_offset))
elif isinstance(node, ast.AnnAssign) and node.annotation is not None:
ann = node.annotation
ann_positions.append((ann.lineno - 1, ann.col_offset))
if not ann_positions:
return lines
new_lines:List[str] = []
for idx, line in enumerate(lines):
if line.lstrip().startswith("#"):
new_lines.append(line)
continue
chars = list(line)
offsets = [col for (lin, col) in ann_positions if lin == idx]
for col in sorted(offsets, reverse = True):
prefix = "".join(chars[:col])
colon_idx = prefix.rfind(":")
if colon_idx == -1:
continue
j = colon_idx + 1
while j < len(chars) and chars[j].isspace():
del chars[j]
new_lines.append("".join(chars))
return new_lines
class EqualSignSpacingInDefaultsAndNamedArgsRule(FormatterRule):
"""
Ensures that the '=' sign in default values for function parameters and keyword arguments in function calls
is surrounded by exactly one space on each side.
This rule enforces `a:int = 3` instead of `a:int=3`, and `x = 42` instead of `x=42` or `x =42`.
It is the opposite behavior of autopep8 rule E251.
Example:
# Before
def foo(a:int=3, b :str= "bar"):
pass
foo(x=42,y = "hello")
# After
def foo(a:int = 3, b:str = "bar"):
pass
foo(x = 42, y = "hello")
"""
@override
def apply(self, tree:ast.AST, lines:List[str], path:Path) -> List[str]:
equals_positions:List[Tuple[int, int]] = []
for node in ast.walk(tree):
# --- Defaults in function definitions, async defs & lambdas ---
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.Lambda)):
# positional defaults
equals_positions.extend(
(d.lineno - 1, d.col_offset)
for d in node.args.defaults
if d is not None
)
# keyword-only defaults (only on defs, not lambdas)
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
equals_positions.extend(
(d.lineno - 1, d.col_offset)
for d in node.args.kw_defaults
if d is not None
)
# --- Keyword arguments in calls ---
if isinstance(node, ast.Call):
equals_positions.extend(
(kw.value.lineno - 1, kw.value.col_offset)
for kw in node.keywords
if kw.arg is not None
)
if not equals_positions:
return lines
new_lines:List[str] = []
for line_idx, line in enumerate(lines):
if line.lstrip().startswith("#"):
new_lines.append(line)
continue
chars = list(line)
equals_offsets = [col for (lineno, col) in equals_positions if lineno == line_idx]
for col in sorted(equals_offsets, reverse = True):
prefix = "".join(chars[:col])
equal_sign_idx = prefix.rfind("=")
if equal_sign_idx == -1:
continue
# remove spaces before '='
left_index = equal_sign_idx - 1
while left_index >= 0 and chars[left_index].isspace():
del chars[left_index]
equal_sign_idx -= 1
left_index -= 1
# remove spaces after '='
right_index = equal_sign_idx + 1
while right_index < len(chars) and chars[right_index].isspace():
del chars[right_index]
# insert single spaces
chars.insert(equal_sign_idx, " ")
chars.insert(equal_sign_idx + 2, " ")
new_lines.append("".join(chars))
return new_lines
class PreferDoubleQuotesRule(FormatterRule):
"""
Ensures string literals use double quotes unless the content contains a double quote.
Example:
# Before
foo = 'hello'
bar = 'a "quote" inside'
# After
foo = "hello"
bar = 'a "quote" inside' # kept as-is, because it contains a double quote
"""
@override
def apply(self, tree:ast.AST, lines:List[str], path:Path) -> List[str]:
new_lines = lines.copy()
# Track how much each line has shifted so far
line_shifts:dict[int, int] = dict.fromkeys(range(len(lines)), 0)
# Build a parent map for f-string detection
parent_map:dict[ast.AST, ast.AST] = {}
for parent in ast.walk(tree):
for child in ast.iter_child_nodes(parent):
parent_map[child] = parent
def is_in_fstring(node:ast.AST) -> bool:
p = parent_map.get(node)
while p:
if isinstance(p, ast.JoinedStr):
return True
p = parent_map.get(p)
return False
# Regex to locate a single- or triple-quoted literal:
# (?P<prefix>[rRbuUfF]*) optional string flags (r, b, u, f, etc.), case-insensitive
# (?P<quote>'{3}|') the opening delimiter: either three single-quotes (''') or one ('),
# but never two in a row (so we won't mis-interpret adjacent quotes)
# (?P<content>.*?) the literal's content, non-greedy up to the next same delimiter
# (?P=quote) the matching closing delimiter (same length as the opener)
literal_re = re.compile(
r"(?P<prefix>[rRbuUfF]*)(?P<quote>'{3}|')(?P<content>.*?)(?P=quote)",
re.DOTALL,
)
for node in ast.walk(tree):
# only handle simple string constants
if not (isinstance(node, ast.Constant) and isinstance(node.value, str)):
continue
# skip anything inside an f-string, at any depth
if is_in_fstring(node):
continue
starting_line_number = getattr(node, "lineno", None)
starting_col_offset = getattr(node, "col_offset", None)
if starting_line_number is None or starting_col_offset is None:
continue
start_line = starting_line_number - 1
shift = line_shifts[start_line]
raw = new_lines[start_line]
# apply shift so we match against current edited line
idx = starting_col_offset + shift
if idx >= len(raw) or raw[idx] not in {"'", "r", "u", "b", "f", "R", "U", "B", "F"}:
continue
# match literal at that column
m = literal_re.match(raw[idx:])
if not m:
continue
prefix = m.group("prefix")
quote = m.group("quote") # either "'" or "'''"
content = m.group("content") # what's inside
# skip if content has a double-quote already
if '"' in content:
continue
# build new literal with the same prefix, but doublequote delimiter
delim = '"' * len(quote)
escaped = content.replace(delim, "\\" + delim)
new_literal = f"{prefix}{delim}{escaped}{delim}"
literal_len = m.end() # how many chars we're replacing
before = raw[:idx]
after = raw[idx + literal_len:]
new_lines[start_line] = before + new_literal + after
# record shift delta for any further edits on this line
line_shifts[start_line] += len(new_literal) - literal_len
return new_lines
FORMATTER_RULES:List[FormatterRule] = [
NoSpaceAfterColonInTypeAnnotationRule(),
EqualSignSpacingInDefaultsAndNamedArgsRule(),
PreferDoubleQuotesRule(),
]
def format_file(path:Path) -> None:
# Read without newline conversion
with path.open("r", encoding = "utf-8", newline = "") as rf:
original_text = rf.read()
# Initial parse
try:
tree = ast.parse(original_text)
except SyntaxError as e:
LOG.error(
"Syntax error parsing %s[%d:%d]: %r -> %s",
path, e.lineno, e.offset, (e.text or "").rstrip(), e.msg
)
return
lines = original_text.splitlines(keepends = True)
formatted_text = original_text
success = True
for rule in FORMATTER_RULES:
lines = rule.apply(tree, lines, path)
formatted_text = "".join(lines)
# Re-parse the updated text
try:
tree = ast.parse(formatted_text)
except SyntaxError as e:
LOG.error(
"Syntax error after %s at %s[%d:%d]: %r -> %s",
rule.__class__.__name__, path, e.lineno, e.offset, (e.text or "").rstrip(), e.msg
)
success = False
break
if success and formatted_text != original_text:
with path.open("w", encoding = "utf-8", newline = "") as wf:
wf.write(formatted_text)
LOG.info("Formatted [%s].", path)
if __name__ == "__main__":
if len(sys.argv) < 2: # noqa: PLR2004 Magic value used in comparison
script_path = Path(sys.argv[0])
print(f"Usage: python {script_path} <directory1> [<directory2> ...]")
sys.exit(1)
for dir_arg in sys.argv[1:]:
root = Path(dir_arg)
if not root.exists():
LOG.warning("Directory [%s] does not exist, skipping...", root)
continue
for py_file in root.rglob("*.py"):
format_file(py_file)
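The whitespace rules above locate their targets via AST positions rather than regexes over raw text, which keeps them from touching comments and string contents. A small standalone demonstration of the position harvesting that `NoSpaceAfterColonInTypeAnnotationRule` performs:

```python
import ast

source = "def foo(a: int, b : str) -> None:\n    pass\n"
tree = ast.parse(source)

# Collect (line, column) of each parameter annotation, as the rule does;
# col_offset points at the start of the annotation, after any spaces.
positions = sorted(
    (node.annotation.lineno, node.annotation.col_offset)
    for node in ast.walk(tree)
    if isinstance(node, ast.arg) and node.annotation is not None
)
print(positions)  # [(1, 11), (1, 20)]
```

From each position the rule scans backwards to the nearest `:` and deletes the whitespace in between, producing `a:int` and `b:str`.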

scripts/run_tests.py Normal file

@@ -0,0 +1,165 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""Unified pytest runner for public and CI test execution.
This module invokes pytest via ``pytest.main()``. Programmatic callers should
avoid repeated in-process invocations because Python's import cache can retain
test module state between runs. CLI usage via ``pdm run`` is unaffected because
each invocation runs in a fresh process.
"""
from __future__ import annotations
import argparse
import os
import sys
from pathlib import Path
from typing import Final
import pytest
ROOT:Final = Path(__file__).resolve().parent.parent
TEMP:Final = ROOT / ".temp"
# Most tests are currently unmarked, so utest intentionally uses negative markers
# to select the default "unit-like" population while excluding integration/smoke.
PROFILE_CONFIGS:Final[dict[str, tuple[str | None, str]]] = {
"test": (None, "auto"),
"utest": ("not itest and not smoke", "auto"),
"itest": ("itest and not smoke", "0"),
"smoke": ("smoke", "auto"),
}
def _append_verbosity(pytest_args:list[str], verbosity:int) -> None:
if verbosity == 0:
pytest_args.append("-q")
else:
pytest_args.append("-" + ("v" * verbosity))
pytest_args.extend([
"--durations=25",
"--durations-min=0.5",
])
def _pytest_base_args(*, workers:str, verbosity:int) -> list[str]:
# Stable pytest defaults (strict markers, doctest, coverage) live in pyproject addopts.
# This runner only adds dynamic execution policy (workers and verbosity).
pytest_args = [
"-n",
workers,
]
_append_verbosity(pytest_args, verbosity)
return pytest_args
def _resolve_path(path:Path) -> Path:
if path.is_absolute():
return path
return ROOT / path
def _display_path(path:Path) -> str:
try:
return str(path.relative_to(ROOT))
except ValueError:
return str(path)
def _cleanup_coverage_artifacts() -> None:
TEMP.mkdir(parents = True, exist_ok = True)
for pattern in ("coverage-*.xml", ".coverage-*.sqlite"):
for stale_file in TEMP.glob(pattern):
stale_file.unlink(missing_ok = True)
for stale_path in (TEMP / "coverage.sqlite", ROOT / ".coverage"):
stale_path.unlink(missing_ok = True)
def _run_profile(*, profile:str, verbosity:int, passthrough:list[str]) -> int:
marker, workers = PROFILE_CONFIGS[profile]
pytest_args = _pytest_base_args(workers = workers, verbosity = verbosity)
if marker is not None:
pytest_args.extend(["-m", marker])
pytest_args.extend(passthrough)
return pytest.main(pytest_args)
def _run_ci(*, marker:str, coverage_file:Path, xml_file:Path, workers:str, verbosity:int, passthrough:list[str]) -> int:
resolved_coverage_file = _resolve_path(coverage_file)
resolved_xml_file = _resolve_path(xml_file)
resolved_coverage_file.parent.mkdir(parents = True, exist_ok = True)
resolved_xml_file.parent.mkdir(parents = True, exist_ok = True)
previous_coverage_file = os.environ.get("COVERAGE_FILE")
os.environ["COVERAGE_FILE"] = str(resolved_coverage_file)
pytest_args = _pytest_base_args(workers = workers, verbosity = verbosity)
pytest_args.extend([
"-m",
marker,
f"--cov-report=xml:{_display_path(resolved_xml_file)}",
])
pytest_args.extend(passthrough)
try:
return pytest.main(pytest_args)
finally:
if previous_coverage_file is None:
os.environ.pop("COVERAGE_FILE", None)
else:
os.environ["COVERAGE_FILE"] = previous_coverage_file
def _build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(description = "Run project tests")
subparsers = parser.add_subparsers(dest = "command", required = True)
run_parser = subparsers.add_parser("run", help = "Run tests for a predefined profile")
run_parser.add_argument("profile", choices = sorted(PROFILE_CONFIGS))
run_parser.add_argument("-v", "--verbose", action = "count", default = 0)
subparsers.add_parser("ci-prepare", help = "Clean stale coverage artifacts")
ci_run_parser = subparsers.add_parser("ci-run", help = "Run tests with explicit coverage outputs")
ci_run_parser.add_argument("--marker", required = True)
ci_run_parser.add_argument("--coverage-file", type = Path, required = True)
ci_run_parser.add_argument("--xml-file", type = Path, required = True)
ci_run_parser.add_argument("-n", "--workers", default = "auto")
ci_run_parser.add_argument("-v", "--verbose", action = "count", default = 0)
return parser
def main(argv:list[str] | None = None) -> int:
os.chdir(ROOT)
effective_argv = sys.argv[1:] if argv is None else argv
parser = _build_parser()
args, passthrough = parser.parse_known_args(effective_argv)
# This entrypoint is intended for one-shot CLI usage, not same-process
# repeated invocations that can reuse imports loaded by pytest.main().
if args.command == "run":
return _run_profile(profile = args.profile, verbosity = args.verbose, passthrough = passthrough)
if args.command == "ci-prepare":
_cleanup_coverage_artifacts()
return 0
if args.command == "ci-run":
return _run_ci(
marker = args.marker,
coverage_file = args.coverage_file,
xml_file = args.xml_file,
workers = args.workers,
verbosity = args.verbose,
passthrough = passthrough,
)
return 0
if __name__ == "__main__":
raise SystemExit(main())
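`main()` deliberately calls `parse_known_args` so that any flags the runner itself does not define fall through to pytest untouched. A minimal reproduction of that passthrough behavior (the subcommand and profile names match the runner; the extra flags are arbitrary examples):

```python
import argparse

parser = argparse.ArgumentParser(description = "Run project tests")
subparsers = parser.add_subparsers(dest = "command", required = True)
run_parser = subparsers.add_parser("run")
run_parser.add_argument("profile", choices = ["test", "utest", "itest", "smoke"])

# Unknown arguments are returned instead of raising an error,
# then forwarded verbatim to pytest.main().
args, passthrough = parser.parse_known_args(["run", "utest", "-k", "login", "--maxfail=1"])
print(args.command, args.profile, passthrough)
```

This is why `pdm run utest -k login` works without the runner having to mirror every pytest option.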

scripts/schema_utils.py Normal file

@@ -0,0 +1,21 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
import json
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from pydantic import BaseModel
def generate_schema_content(model:type[BaseModel], name:str) -> str:
"""
Build normalized JSON schema output for project models.
"""
schema = model.model_json_schema(mode = "validation")
schema.setdefault("title", f"{name} Schema")
schema.setdefault("description", f"Auto-generated JSON Schema for {name}")
return json.dumps(schema, indent = 2) + "\n"
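`generate_schema_content` uses `dict.setdefault`, so a title or description that the pydantic model already defines wins over the auto-generated fallback. The semantics in isolation:

```python
schema = {"title": "Config", "properties": {}}

# setdefault only inserts when the key is absent: the model-provided
# title survives, while the missing description is filled in.
schema.setdefault("title", "Config Schema")
schema.setdefault("description", "Auto-generated JSON Schema for Config")
print(schema["title"])        # Config
print(schema["description"])  # Auto-generated JSON Schema for Config
```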

File diff suppressed because it is too large


@@ -1,9 +1,28 @@
""" # SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
SPDX-FileCopyrightText: © Sebastian Thomschke and contributors # SPDX-License-Identifier: AGPL-3.0-or-later
SPDX-License-Identifier: AGPL-3.0-or-later # SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/ import sys, time # isort: skip
""" from gettext import gettext as _
import sys
import kleinanzeigen_bot
kleinanzeigen_bot.main(sys.argv) import kleinanzeigen_bot
from kleinanzeigen_bot.utils.exceptions import CaptchaEncountered
from kleinanzeigen_bot.utils.launch_mode_guard import ensure_not_launched_from_windows_explorer
from kleinanzeigen_bot.utils.misc import format_timedelta
# --------------------------------------------------------------------------- #
# Refuse GUI/double-click launch on Windows
# --------------------------------------------------------------------------- #
ensure_not_launched_from_windows_explorer()
# --------------------------------------------------------------------------- #
# Main loop: run bot → if captcha → sleep → restart
# --------------------------------------------------------------------------- #
while True:
try:
kleinanzeigen_bot.main(sys.argv) # runs & returns when finished
sys.exit(0) # not using `break` to prevent process closing issues
except CaptchaEncountered as ex:
delay = ex.restart_delay
print(_("[INFO] Captcha detected. Sleeping %s before restart...") % format_timedelta(delay))
time.sleep(delay.total_seconds())
# loop continues and starts a fresh run


@@ -1,238 +1,614 @@
""" # SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
SPDX-FileCopyrightText: © Sebastian Thomschke and contributors # SPDX-License-Identifier: AGPL-3.0-or-later
SPDX-License-Identifier: AGPL-3.0-or-later # SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/ import asyncio
""" from gettext import gettext as _
import json
from decimal import DecimalException
from typing import Any
from selenium.common.exceptions import NoSuchElementException import json, mimetypes, re, shutil # isort: skip
from selenium.webdriver.common.by import By import urllib.error as urllib_error
from selenium.webdriver.remote.webdriver import WebDriver import urllib.request as urllib_request
import selenium.webdriver.support.expected_conditions as EC from datetime import datetime
from pathlib import Path
from typing import Any, Final
from .selenium_mixin import SeleniumMixin from kleinanzeigen_bot.model.ad_model import ContactPartial
from .utils import parse_decimal, pause
from .model.ad_model import AdPartial
from .model.config_model import Config
from .utils import dicts, files, i18n, loggers, misc, reflect
from .utils.web_scraping_mixin import Browser, By, Element, WebScrapingMixin
__all__ = [
"AdExtractor",
]
LOG:Final[loggers.Logger] = loggers.get_logger(__name__)
_BREADCRUMB_MIN_DEPTH:Final[int] = 2
BREADCRUMB_RE = re.compile(r"/c(\d+)")
class AdExtractor(SeleniumMixin): class AdExtractor(WebScrapingMixin):
""" """
Wrapper class for ad extraction that uses an active bot´s web driver to extract specific elements from an ad page. Wrapper class for ad extraction that uses an active bot´s browser session to extract specific elements from an ad page.
""" """
def __init__(self, driver:WebDriver): def __init__(
self,
browser:Browser,
config:Config,
download_dir:Path,
published_ads_by_id:dict[int, dict[str, Any]] | None = None,
) -> None:
super().__init__() super().__init__()
self.webdriver = driver self.browser = browser
self.config:Config = config
self.download_dir:Path = download_dir
self.published_ads_by_id:dict[int, dict[str, Any]] = published_ads_by_id or {}
def extract_category_from_ad_page(self) -> str: async def download_ad(self, ad_id:int) -> None:
"""
Downloads an ad to a specific location, specified by config and ad ID.
NOTE: Requires that the driver session currently is on the ad page.
:param ad_id: the ad ID
"""
download_dir = self.download_dir
LOG.info("Using download directory: %s", download_dir)
# Extract ad info and determine final directory path
ad_cfg, final_dir = await self._extract_ad_page_info_with_directory_handling(download_dir, ad_id)
# Save the ad configuration file (offload to executor to avoid blocking the event loop)
ad_file_path = str(Path(final_dir) / f"ad_{ad_id}.yaml")
header_string = (
"# yaml-language-server: $schema=https://raw.githubusercontent.com/Second-Hand-Friends/kleinanzeigen-bot/refs/heads/main/schemas/ad.schema.json"
)
await asyncio.get_running_loop().run_in_executor(None, lambda: dicts.save_dict(ad_file_path, ad_cfg.model_dump(mode = "json"), header = header_string))
@staticmethod
def _download_and_save_image_sync(url:str, directory:str, filename_prefix:str, img_nr:int) -> str | None:
try:
with urllib_request.urlopen(url) as response: # noqa: S310 Audit URL open for permitted schemes.
content_type = response.info().get_content_type()
file_ending = mimetypes.guess_extension(content_type) or ""
# Use pathlib.Path for OS-agnostic path handling
img_path = Path(directory) / f"{filename_prefix}{img_nr}{file_ending}"
with open(img_path, "wb") as f:
shutil.copyfileobj(response, f)
return str(img_path)
except (urllib_error.URLError, urllib_error.HTTPError, OSError, shutil.Error) as e:
# Narrow exception handling to expected network/filesystem errors
LOG.warning("Failed to download image %s: %s", url, e)
return None
async def _download_images_from_ad_page(self, directory:str, ad_id:int) -> list[str]:
"""
Downloads all images of an ad.
:param directory: the path of the directory created for this ad
:param ad_id: the ID of the ad to download the images from
:return: the relative paths for all downloaded images
"""
n_images:int
img_paths = []
try:
# download all images from box
image_box = await self.web_find(By.CLASS_NAME, "galleryimage-large")
images = await self.web_find_all(By.CSS_SELECTOR, ".galleryimage-element[data-ix] > img", parent = image_box)
n_images = len(images)
LOG.info("Found %s.", i18n.pluralize("image", n_images))
img_fn_prefix = "ad_" + str(ad_id) + "__img"
img_nr = 1
dl_counter = 0
loop = asyncio.get_running_loop()
for img_element in images:
current_img_url = img_element.attrs["src"] # URL of the image
if current_img_url is None:
continue
img_path = await loop.run_in_executor(None, self._download_and_save_image_sync, str(current_img_url), directory, img_fn_prefix, img_nr)
if img_path:
dl_counter += 1
# Use pathlib.Path for OS-agnostic path handling
img_paths.append(Path(img_path).name)
img_nr += 1
LOG.info("Downloaded %s.", i18n.pluralize("image", dl_counter))
except TimeoutError: # some ads do not require images
LOG.warning("No image area found. Continuing without downloading images.")
return img_paths
def extract_ad_id_from_ad_url(self, url:str) -> int:
"""
Extracts the ID of an ad, given by its reference link.
:param url: the URL to the ad page
:return: the ad ID, a (ten-digit) integer number
"""
try:
path = url.split("?", maxsplit = 1)[0] # Remove query string if present
last_segment = path.rstrip("/").rsplit("/", maxsplit = 1)[-1] # Get last path component
id_part = last_segment.split("-", maxsplit = 1)[0] # Extract part before first hyphen
return int(id_part)
except (IndexError, ValueError) as ex:
LOG.warning("Failed to extract ad ID from URL '%s': %s", url, ex)
return -1
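The three-step split above can be exercised in isolation; this sketch mirrors it as a standalone function (the example URL below is hypothetical):

```python
def extract_ad_id(url: str) -> int:
    """Standalone mirror of the ID extraction above; returns -1 on failure."""
    try:
        path = url.split("?", maxsplit = 1)[0]  # drop the query string
        last_segment = path.rstrip("/").rsplit("/", maxsplit = 1)[-1]  # last path component
        return int(last_segment.split("-", maxsplit = 1)[0])  # digits before the first hyphen
    except (IndexError, ValueError):
        return -1
```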
async def extract_own_ads_urls(self) -> list[str]:
"""
Extracts the references to all own ads.
:return: the links to your ad pages
"""
refs:list[str] = []
async def extract_page_refs(page_num:int) -> bool:
"""Extract ad reference URLs from the current page.
:param page_num: The current page number being processed
:return: True to stop pagination (e.g. ads container disappeared), False to continue to next page
"""
try:
ad_list_container = await self.web_find(By.ID, "my-manageitems-adlist")
list_items = await self.web_find_all(By.CLASS_NAME, "cardbox", parent = ad_list_container)
LOG.info("Found %s ad items on page %s.", len(list_items), page_num)
page_refs:list[str] = []
for index, li in enumerate(list_items, start = 1):
try:
link_elem = await self.web_find(By.CSS_SELECTOR, "div h3 a.text-onSurface", parent = li)
href = link_elem.attrs.get("href")
if href:
page_refs.append(str(href))
else:
LOG.warning(
"Skipping ad item %s/%s on page %s: ad reference link has no href attribute.",
index,
len(list_items),
page_num,
)
except TimeoutError:
LOG.warning(
"Skipping ad item %s/%s on page %s: no ad reference link found (likely unpublished or draft item).",
index,
len(list_items),
page_num,
)
refs.extend(page_refs)
LOG.info("Successfully extracted %s refs from page %s.", len(page_refs), page_num)
return False # Continue to next page
except TimeoutError:
LOG.warning("Could not find ad list container or ad items on page %s.", page_num)
return True # Stop pagination (ads disappeared)
except Exception as e:
# Continue despite error for resilience against transient web scraping issues
# (e.g., DOM structure changes, network glitches). LOG.exception ensures visibility.
LOG.exception("Error extracting refs on page %s: %s", page_num, e)
return False # Continue to next page
await self._navigate_paginated_ad_overview(extract_page_refs)
if not refs:
LOG.warning("No ad URLs were extracted.")
return refs
async def navigate_to_ad_page(self, id_or_url:int | str) -> bool:
"""
Navigates to an ad page, specified either by an ad ID or by a full URL.
:param id_or_url: the ad ID or the URL of the ad page
:return: whether the navigation to the ad page was successful
"""
if reflect.is_integer(id_or_url):
# navigate to search page
await self.web_open("https://www.kleinanzeigen.de/s-suchanfrage.html?keywords={0}".format(id_or_url))
else:
await self.web_open(str(id_or_url)) # navigate to URL directly given
await self.web_sleep()
# handle the case that an invalid ad ID was given
if self.page.url.endswith("k0"):
LOG.error("There is no ad under the given ID.")
return False
# close (warning) popup, if given
try:
await self.web_find(By.ID, "vap-ovrly-secure")
LOG.warning("A popup appeared!")
await self.web_click(By.CLASS_NAME, "mfp-close")
await self.web_sleep()
except TimeoutError:
# Popup did not appear within timeout.
pass
return True
async def _extract_title_from_ad_page(self) -> str:
"""
Extracts the title from an ad page.
Assumes that the web driver currently shows an ad page.
:return: the ad title
"""
return await self.web_text(By.ID, "viewad-title")
async def _extract_ad_page_info(self, directory:str, ad_id:int) -> AdPartial:
"""
Extracts ad information and downloads images to the specified directory.
NOTE: Requires that the driver session currently is on the ad page.
:param directory: the directory to download images to
:param ad_id: the ad ID
:return: an AdPartial object containing the ad information
"""
info:dict[str, Any] = {"active": True}
# Extract title first (needed for directory creation)
title = await self._extract_title_from_ad_page()
# Get BelenConf data which contains accurate ad_type information
belen_conf = await self.web_execute("window.BelenConf")
# Extract ad type from BelenConf - more reliable than URL pattern matching
# BelenConf contains "ad_type":"WANTED" or "ad_type":"OFFER" in dimensions
ad_type_from_conf = None
if isinstance(belen_conf, dict):
ad_type_from_conf = belen_conf.get("universalAnalyticsOpts", {}).get("dimensions", {}).get("ad_type")
info["type"] = ad_type_from_conf if ad_type_from_conf in {"OFFER", "WANTED"} else ("OFFER" if "s-anzeige" in self.page.url else "WANTED")
info["category"] = await self._extract_category_from_ad_page()
# append subcategory and change e.g. category "161/172" to "161/172/lautsprecher_kopfhoerer"
# take subcategory from third_category_name as key 'art_s' sometimes is a special attribute (e.g. gender for clothes)
# the subcategory isn't really necessary, but when set, the appropriate special attribute gets preselected
if third_category_id := belen_conf["universalAnalyticsOpts"]["dimensions"].get("l3_category_id"):
info["category"] += f"/{third_category_id}"
info["title"] = title
# Get raw description text
raw_description = (await self.web_text(By.ID, "viewad-description-text")).strip()
# Get prefix and suffix from config
prefix = self.config.ad_defaults.description_prefix
suffix = self.config.ad_defaults.description_suffix
# Remove prefix and suffix if present
description_text = raw_description
if prefix and description_text.startswith(prefix.strip()):
description_text = description_text[len(prefix.strip()):]
if suffix and description_text.endswith(suffix.strip()):
description_text = description_text[: -len(suffix.strip())]
info["description"] = description_text.strip()
info["special_attributes"] = await self._extract_special_attributes_from_ad_page(belen_conf)
if "schaden_s" in info["special_attributes"]:
# change f to 'nein' and 't' to 'ja'
info["special_attributes"]["schaden_s"] = info["special_attributes"]["schaden_s"].translate(str.maketrans({"t": "ja", "f": "nein"}))
info["price"], info["price_type"] = await self._extract_pricing_info_from_ad_page()
info["shipping_type"], info["shipping_costs"], info["shipping_options"] = await self._extract_shipping_info_from_ad_page()
info["sell_directly"] = await self._extract_sell_directly_from_ad_page()
info["images"] = await self._download_images_from_ad_page(directory, ad_id)
info["contact"] = await self._extract_contact_from_ad_page()
info["id"] = ad_id
try: # try different locations known for creation date element
creation_date = await self.web_text(By.XPATH, "/html/body/div[1]/div[2]/div/section[2]/section/section/article/div[3]/div[2]/div[2]/div[1]/span")
except TimeoutError:
creation_date = await self.web_text(By.CSS_SELECTOR, "#viewad-extra-info > div:nth-child(1) > span:nth-child(2)")
# convert creation date to ISO format
created_parts = creation_date.split(".")
creation_date_str = created_parts[2] + "-" + created_parts[1] + "-" + created_parts[0] + " 00:00:00"
creation_date_dt = datetime.fromisoformat(creation_date_str)
info["created_on"] = creation_date_dt
info["updated_on"] = None # will be set later on
ad_cfg = AdPartial.model_validate(info)
# calculate the initial hash for the downloaded ad
ad_cfg.content_hash = ad_cfg.to_ad(self.config.ad_defaults).update_content_hash().content_hash
return ad_cfg
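The date handling at the end of this method converts the German `DD.MM.YYYY` display format into a `datetime`; a minimal standalone sketch of that conversion:

```python
from datetime import datetime

def german_date_to_datetime(creation_date: str) -> datetime:
    # "25.12.2024" -> "2024-12-25 00:00:00" (ISO with space separator), then parse
    created_parts = creation_date.split(".")
    return datetime.fromisoformat(created_parts[2] + "-" + created_parts[1] + "-" + created_parts[0] + " 00:00:00")
```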
async def _extract_ad_page_info_with_directory_handling(self, relative_directory:Path, ad_id:int) -> tuple[AdPartial, Path]:
"""
Extracts ad information and handles directory creation/renaming.
:param relative_directory: Base directory for downloads
:param ad_id: The ad ID
:return: AdPartial with directory information
"""
# First, extract basic info to get the title
info:dict[str, Any] = {"active": True}
# extract basic info
info["type"] = "OFFER" if "s-anzeige" in self.page.url else "WANTED"
title = await self._extract_title_from_ad_page()
LOG.info('Extracting title from ad %s: "%s"', ad_id, title)
# Determine the final directory path
sanitized_title = misc.sanitize_folder_name(title, self.config.download.folder_name_max_length)
final_dir = relative_directory / f"ad_{ad_id}_{sanitized_title}"
temp_dir = relative_directory / f"ad_{ad_id}"
loop = asyncio.get_running_loop()
# Handle existing directories
if await files.exists(final_dir):
# If the folder with title already exists, delete it
LOG.info("Deleting current folder of ad %s...", ad_id)
LOG.debug("Removing directory tree: %s", final_dir)
await loop.run_in_executor(None, shutil.rmtree, str(final_dir))
if await files.exists(temp_dir):
if self.config.download.rename_existing_folders:
# Rename the old folder to the new name with title
LOG.info("Renaming folder from %s to %s for ad %s...", temp_dir.name, final_dir.name, ad_id)
LOG.debug("Renaming: %s -> %s", temp_dir, final_dir)
await loop.run_in_executor(None, temp_dir.rename, final_dir)
else:
# Use the existing folder without renaming
final_dir = temp_dir
LOG.info("Using existing folder for ad %s at %s.", ad_id, final_dir)
else:
# Create new directory with title
LOG.debug("Creating new directory: %s", final_dir)
await loop.run_in_executor(None, final_dir.mkdir)
LOG.info("New directory for ad created at %s.", final_dir)
# Now extract complete ad info (including images) to the final directory
ad_cfg = await self._extract_ad_page_info(str(final_dir), ad_id)
return ad_cfg, final_dir
async def _extract_category_from_ad_page(self) -> str:
""" """
Extracts a category of an ad in numerical form. Extracts a category of an ad in numerical form.
Assumes that the web driver currently shows an ad page. Assumes that the web driver currently shows an ad page.
:return: a category string of form abc/def, where a-f are digits :return: a category string of form abc/def, where a-f are digits
""" """
category_line = self.webdriver.find_element(By.XPATH, '//*[@id="vap-brdcrmb"]') try:
category_first_part = category_line.find_element(By.XPATH, './/a[2]') category_line = await self.web_find(By.ID, "vap-brdcrmb")
category_second_part = category_line.find_element(By.XPATH, './/a[3]') except TimeoutError as exc:
cat_num_first = category_first_part.get_attribute('href').split('/')[-1][1:] LOG.warning("Breadcrumb container 'vap-brdcrmb' not found; cannot extract ad category: %s", exc)
cat_num_second = category_second_part.get_attribute('href').split('/')[-1][1:] raise
category:str = cat_num_first + '/' + cat_num_second try:
breadcrumb_links = await self.web_find_all(By.CSS_SELECTOR, "a", parent = category_line)
except TimeoutError:
breadcrumb_links = []
category_ids:list[str] = []
for link in breadcrumb_links:
href = str(link.attrs.get("href", "") or "")
matches = BREADCRUMB_RE.findall(href)
if matches:
category_ids.extend(matches)
# Use the deepest two breadcrumb category codes when available.
if len(category_ids) >= _BREADCRUMB_MIN_DEPTH:
return f"{category_ids[-2]}/{category_ids[-1]}"
if len(category_ids) == 1:
return f"{category_ids[0]}/{category_ids[0]}"
# Fallback to legacy selectors in case the breadcrumb structure is unexpected.
LOG.debug("Falling back to legacy breadcrumb selectors; collected ids: %s", category_ids)
fallback_timeout = self._effective_timeout()
try:
category_first_part = await self.web_find(By.CSS_SELECTOR, "a:nth-of-type(2)", parent = category_line)
category_second_part = await self.web_find(By.CSS_SELECTOR, "a:nth-of-type(3)", parent = category_line)
except TimeoutError as exc:
LOG.error("Legacy breadcrumb selectors not found within %.1f seconds (collected ids: %s)", fallback_timeout, category_ids)
raise TimeoutError(_("Unable to locate breadcrumb fallback selectors within %(seconds).1f seconds.") % {"seconds": fallback_timeout}) from exc
href_first:str = str(category_first_part.attrs["href"])
href_second:str = str(category_second_part.attrs["href"])
cat_num_first_raw = href_first.rsplit("/", maxsplit = 1)[-1]
cat_num_second_raw = href_second.rsplit("/", maxsplit = 1)[-1]
cat_num_first = cat_num_first_raw[1:] if cat_num_first_raw.startswith("c") else cat_num_first_raw
cat_num_second = cat_num_second_raw[1:] if cat_num_second_raw.startswith("c") else cat_num_second_raw
category:str = cat_num_first + "/" + cat_num_second
return category
async def _extract_special_attributes_from_ad_page(self, belen_conf:dict[str, Any]) -> dict[str, str]:
"""
Extracts the special attributes from an ad page.
If no items are available then special_attributes is empty.
:return: a dictionary (possibly empty) where the keys are the attribute names, mapped to their values
"""
special_attributes_str = belen_conf["universalAnalyticsOpts"]["dimensions"].get("ad_attributes") # e.g. "art_s:lautsprecher_kopfhoerer|condition_s:like_new|versand_s:t"
if not special_attributes_str:
return {}
special_attributes = dict(item.split(":") for item in special_attributes_str.split("|") if ":" in item)
special_attributes = {k: v for k, v in special_attributes.items() if not k.endswith(".versand_s") and k != "versand_s"}
return special_attributes
async def _extract_pricing_info_from_ad_page(self) -> tuple[float | None, str]:
"""
Extracts the pricing information (price and pricing type) from an ad page.
:return: the price of the offer (optional); and the pricing type
"""
try:
price_str:str = await self.web_text(By.ID, "viewad-price")
price:int | None = None
match price_str.rsplit(maxsplit = 1)[-1]:
case "€":
price_type = "FIXED"
# replace('.', '') is to remove the thousands separator before parsing as int
price = int(price_str.replace(".", "").split(maxsplit = 1)[0])
case "VB": # can be either 'X € VB', or just 'VB'
price_type = "NEGOTIABLE"
if price_str != "VB":
price = int(price_str.replace(".", "").split(maxsplit = 1)[0])
case "verschenken":
price_type = "GIVE_AWAY"
case _:
price_type = "NOT_APPLICABLE"
return price, price_type
except TimeoutError: # no 'commercial' ad, has no pricing box etc.
return None, "NOT_APPLICABLE"
async def _extract_shipping_info_from_ad_page(self) -> tuple[str, float | None, list[str] | None]:
"""
Extracts shipping information from an ad page.
:return: the shipping type, the shipping price (optional), and the shipping options (optional)
"""
ship_type, ship_costs, shipping_options = "NOT_APPLICABLE", None, None
try:
shipping_text = await self.web_text(By.CLASS_NAME, "boxedarticle--details--shipping")
# e.g. '+ Versand ab 5,49 €' OR 'Nur Abholung'
if shipping_text == "Nur Abholung":
ship_type = "PICKUP"
elif shipping_text == "Versand möglich":
ship_type = "SHIPPING"
elif "€" in shipping_text:
shipping_price_parts = shipping_text.split(" ")
ship_type = "SHIPPING"
ship_costs = float(misc.parse_decimal(shipping_price_parts[-2]))
# read the shipping options from the kleinanzeigen gateway and find the right one by price
shipping_costs = json.loads(
(await self.web_request("https://gateway.kleinanzeigen.de/postad/api/v1/shipping-options?posterType=PRIVATE"))["content"]
)["data"]["shippingOptionsResponse"]["options"]
# map to internal shipping identifiers used by kleinanzeigen-bot
shipping_option_mapping = {
"DHL_001": "DHL_2",
"DHL_002": "DHL_5",
"DHL_003": "DHL_10",
"DHL_004": "DHL_31,5",
"DHL_005": "DHL_20",
"HERMES_001": "Hermes_Päckchen",
"HERMES_002": "Hermes_S",
"HERMES_003": "Hermes_M",
"HERMES_004": "Hermes_L",
}
# Convert Euro to cents and round to nearest integer
price_in_cent = round(ship_costs * 100)
# If include_all_matching_shipping_options is enabled, get all options for the same package size
if self.config.download.include_all_matching_shipping_options:
# Find all options with the same price to determine the package size
matching_options = [opt for opt in shipping_costs if opt["priceInEuroCent"] == price_in_cent]
if not matching_options:
return "SHIPPING", ship_costs, None
# Use the package size of the first matching option
matching_size = matching_options[0]["packageSize"]
# Get all options of the same size
shipping_options = [
shipping_option_mapping[opt["id"]]
for opt in shipping_costs
if opt["packageSize"] == matching_size
and opt["id"] in shipping_option_mapping
and shipping_option_mapping[opt["id"]] not in self.config.download.excluded_shipping_options
]
else:
# Only use the matching option if it's not excluded
matching_option = next((x for x in shipping_costs if x["priceInEuroCent"] == price_in_cent), None)
if not matching_option:
return "SHIPPING", ship_costs, None
shipping_option = shipping_option_mapping.get(matching_option["id"])
if not shipping_option or shipping_option in self.config.download.excluded_shipping_options:
return "SHIPPING", ship_costs, None
shipping_options = [shipping_option]
except TimeoutError: # no pricing box -> no shipping given
ship_type = "NOT_APPLICABLE"
return ship_type, ship_costs, shipping_options
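Matching the scraped euro price against the gateway's option list happens in integer cents; a sketch with made-up option data (the `options` entries below are illustrative, not real API output):

```python
# hypothetical excerpt of a shipping-options response
options = [
    {"id": "DHL_001", "priceInEuroCent": 549, "packageSize": "SMALL"},
    {"id": "HERMES_002", "priceInEuroCent": 495, "packageSize": "SMALL"},
]
ship_costs = 5.49
# round() guards against float drift: 5.49 * 100 is not exactly 549.0
price_in_cent = round(ship_costs * 100)
matching_option = next((o for o in options if o["priceInEuroCent"] == price_in_cent), None)
```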
async def _extract_sell_directly_from_ad_page(self) -> bool | None:
"""
Extracts the sell directly option from an ad page using the published ads data.
Uses data passed at construction time (from the manage-ads JSON) to avoid
repetitive API calls that create a bot detection signature.
:return: bool | None - True if buyNowEligible, False if not eligible, None if unknown
"""
try:
# Extract current ad ID from the page URL
current_ad_id = self.extract_ad_id_from_ad_url(self.page.url)
if current_ad_id == -1:
LOG.warning("Could not extract ad ID from URL: %s", self.page.url)
return None
# Direct dict lookup (O(1) instead of O(pages) API calls)
cached_ad = self.published_ads_by_id.get(current_ad_id)
if cached_ad is not None:
buy_now_eligible = cached_ad.get("buyNowEligible")
if isinstance(buy_now_eligible, bool):
LOG.debug("sell_directly from data for ad %s: %s", current_ad_id, buy_now_eligible)
return buy_now_eligible
LOG.debug("buyNowEligible not a bool for ad %s: %s", current_ad_id, buy_now_eligible)
return None
# Ad not in user's published ads (may be someone else's ad)
LOG.debug("No data for ad %s, returning None for sell_directly", current_ad_id)
return None
except (KeyError, TypeError) as e:
LOG.debug("Could not determine sell_directly status: %s", e)
return None
async def _extract_contact_from_ad_page(self) -> ContactPartial:
""" """
Processes the address part involving street (optional), zip code + city, and phone number (optional). Processes the address part involving street (optional), zip code + city, and phone number (optional).
:return: a dictionary containing the address parts with their corresponding values :return: a dictionary containing the address parts with their corresponding values
""" """
contact:dict[str, (str | None)] = {} contact:dict[str, (str | None)] = {}
address_element = self.webdriver.find_element(By.CSS_SELECTOR, '#viewad-locality') address_text = await self.web_text(By.ID, "viewad-locality")
address_text = address_element.text.strip()
# format: e.g. (Beispiel Allee 42,) 12345 Bundesland - Stadt # format: e.g. (Beispiel Allee 42,) 12345 Bundesland - Stadt
try: try:
street_element = self.webdriver.find_element(By.XPATH, '//*[@id="street-address"]') street = (await self.web_text(By.ID, "street-address"))[:-1] # trailing comma
street = street_element.text[:-2] # trailing comma and whitespace contact["street"] = street
contact['street'] = street except TimeoutError:
except NoSuchElementException: LOG.info("No street given in the contact.")
print('No street given in the contact.')
# construct remaining address
address_halves = address_text.split(' - ')
address_left_parts = address_halves[0].split(' ') # zip code and region/city
contact['zipcode'] = address_left_parts[0]
contact_person_element = self.webdriver.find_element(By.CSS_SELECTOR, '#viewad-contact') (zipcode, location) = address_text.split(" ", maxsplit = 1)
name_element = contact_person_element.find_element(By.CLASS_NAME, 'iconlist-text') contact["zipcode"] = zipcode # e.g. 19372
contact["location"] = location # e.g. Mecklenburg-Vorpommern - Steinbeck
contact_person_element:Element = await self.web_find(By.ID, "viewad-contact")
name_element = await self.web_find(By.CLASS_NAME, "iconlist-text", parent = contact_person_element)
try: try:
name = name_element.find_element(By.TAG_NAME, 'a').text name = await self.web_text(By.TAG_NAME, "a", parent = name_element)
except NoSuchElementException: # edge case: name without link except TimeoutError: # edge case: name without link
name = name_element.find_element(By.TAG_NAME, 'span').text name = await self.web_text(By.TAG_NAME, "span", parent = name_element)
contact['name'] = name contact["name"] = name
if 'street' not in contact: if "street" not in contact:
contact['street'] = None contact["street"] = None
try: # phone number is unusual for non-professional sellers today try: # phone number is unusual for non-professional sellers today
phone_element = self.webdriver.find_element(By.CSS_SELECTOR, '#viewad-contact-phone') phone_element = await self.web_find(By.ID, "viewad-contact-phone")
phone_number = phone_element.find_element(By.TAG_NAME, 'a').text phone_number = await self.web_text(By.TAG_NAME, "a", parent = phone_element)
contact['phone'] = ''.join(phone_number.replace('-', ' ').split(' ')).replace('+49(0)', '0') contact["phone"] = "".join(phone_number.replace("-", " ").split(" ")).replace("+49(0)", "0")
except NoSuchElementException: except TimeoutError:
contact['phone'] = None # phone seems to be a deprecated feature (for non-professional users) contact["phone"] = None # phone seems to be a deprecated feature (for non-professional users)
# also see 'https://themen.kleinanzeigen.de/hilfe/deine-anzeigen/Telefon/ # also see 'https://themen.kleinanzeigen.de/hilfe/deine-anzeigen/Telefon/
return contact return ContactPartial.model_validate(contact)
@@ -0,0 +1,364 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
import hashlib, json # isort: skip
from collections.abc import Mapping, Sequence
from dataclasses import dataclass
from datetime import datetime # noqa: TC003 Move import into a type-checking block
from decimal import ROUND_CEILING, ROUND_HALF_UP, Decimal
from gettext import gettext as _
from typing import Annotated, Any, Final, Literal
from pydantic import AfterValidator, Field, field_validator, model_validator
from typing_extensions import Self
from kleinanzeigen_bot.model.config_model import AdDefaults, AutoPriceReductionConfig # noqa: TC001 Move application import into a type-checking block
from kleinanzeigen_bot.utils import dicts
from kleinanzeigen_bot.utils.misc import parse_datetime, parse_decimal
from kleinanzeigen_bot.utils.pydantics import ContextualModel
MAX_DESCRIPTION_LENGTH:Final[int] = 4000
EURO_PRECISION:Final[Decimal] = Decimal("1")
@dataclass(frozen = True)
class PriceReductionStep:
"""Single reduction step with before/after values and floor clamp state."""
cycle:int
price_before:Decimal
reduction_value:Decimal
price_after_rounding:Decimal
floor_applied:bool
def _OPTIONAL() -> Any:
return Field(default = None)
def _ISO_DATETIME(default:datetime | None = None) -> Any:
return Field(
default = default,
description = "ISO-8601 timestamp with optional timezone (e.g. 2024-12-25T00:00:00 or 2024-12-25T00:00:00Z)",
json_schema_extra = {
"anyOf": [
{"type": "null"},
{
"type": "string",
"pattern": (
r"^\d{4}-\d{2}-\d{2}T" # date + 'T'
r"\d{2}:\d{2}:\d{2}" # hh:mm:ss
r"(?:\.\d{1,6})?" # optional .micro
r"(?:Z|[+-]\d{2}:\d{2})?$" # optional Z or ±HH:MM
),
},
],
},
)
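The schema pattern above accepts only 'T'-separated timestamps with an optional fractional part and offset; a quick check of that behaviour, using the same regex:

```python
import re

# the same pattern as in the json_schema_extra above
ISO_PATTERN = re.compile(
    r"^\d{4}-\d{2}-\d{2}T"  # date + 'T'
    r"\d{2}:\d{2}:\d{2}"  # hh:mm:ss
    r"(?:\.\d{1,6})?"  # optional .micro
    r"(?:Z|[+-]\d{2}:\d{2})?$"  # optional Z or ±HH:MM
)
```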
class ContactPartial(ContextualModel):
name:str | None = _OPTIONAL()
street:str | None = _OPTIONAL()
zipcode:int | str | None = _OPTIONAL()
location:str | None = _OPTIONAL()
phone:str | None = _OPTIONAL()
def _validate_shipping_option_item(v:str) -> str:
if not v.strip():
raise ValueError("must be non-empty and non-blank")
return v
ShippingOption = Annotated[str, AfterValidator(_validate_shipping_option_item)]
def _validate_auto_price_reduction_constraints(price:int | None, auto_price_reduction:AutoPriceReductionConfig | dict[str, Any] | None) -> None:
"""
Validate auto_price_reduction configuration constraints.
Raises ValueError if:
- auto_price_reduction is enabled but price is None
- min_price exceeds price
"""
if not auto_price_reduction:
return
# Handle both dict (from before validation) and AutoPriceReductionConfig (after validation)
if isinstance(auto_price_reduction, dict):
enabled = auto_price_reduction.get("enabled", False)
min_price = auto_price_reduction.get("min_price")
else:
enabled = auto_price_reduction.enabled
min_price = auto_price_reduction.min_price
if not enabled:
return
if price is None:
raise ValueError(_("price must be specified when auto_price_reduction is enabled"))
if min_price is not None:
try:
min_price_dec = Decimal(str(min_price))
price_dec = Decimal(str(price))
except Exception:
# Let Pydantic's type validation surface the underlying issue
return
if min_price_dec > price_dec:
raise ValueError(_("min_price must not exceed price"))
class AdPartial(ContextualModel):
active:bool | None = _OPTIONAL()
type:Literal["OFFER", "WANTED"] | None = _OPTIONAL()
title:str = Field(..., min_length = 10)
description:str
description_prefix:str | None = _OPTIONAL()
description_suffix:str | None = _OPTIONAL()
category:str
special_attributes:dict[str, str] | None = _OPTIONAL()
price:int | None = _OPTIONAL()
price_type:Literal["FIXED", "NEGOTIABLE", "GIVE_AWAY", "NOT_APPLICABLE"] | None = _OPTIONAL()
auto_price_reduction:AutoPriceReductionConfig | None = Field(default = None, description = "automatic price reduction configuration")
repost_count:int = Field(default = 0, ge = 0, description = "number of successful publications for this ad (persisted between runs)")
price_reduction_count:int = Field(default = 0, ge = 0, description = "internal counter: number of automatic price reductions already applied")
shipping_type:Literal["PICKUP", "SHIPPING", "NOT_APPLICABLE"] | None = _OPTIONAL()
shipping_costs:float | None = _OPTIONAL()
shipping_options:list[ShippingOption] | None = _OPTIONAL()
sell_directly:bool | None = _OPTIONAL()
images:list[str] | None = _OPTIONAL()
contact:ContactPartial | None = _OPTIONAL()
republication_interval:int | None = _OPTIONAL()
id:int | None = _OPTIONAL()
created_on:datetime | None = _ISO_DATETIME()
updated_on:datetime | None = _ISO_DATETIME()
content_hash:str | None = _OPTIONAL()
@field_validator("created_on", "updated_on", mode = "before")
@classmethod
def _parse_dates(cls, v:Any) -> Any:
return parse_datetime(v)
@field_validator("shipping_costs", mode = "before")
@classmethod
def _parse_shipping_costs(cls, v:float | int | str) -> Any:
if v is None or (isinstance(v, str) and not v.strip()):
return None
return round(parse_decimal(v), 2)
@field_validator("description")
@classmethod
def _validate_description_length(cls, v:str) -> str:
if len(v) > MAX_DESCRIPTION_LENGTH:
raise ValueError(f"description length exceeds {MAX_DESCRIPTION_LENGTH} characters")
return v
@model_validator(mode = "before")
@classmethod
def _validate_price_and_price_type(cls, values:dict[str, Any]) -> dict[str, Any]:
price_type = values.get("price_type")
price = values.get("price")
auto_price_reduction = values.get("auto_price_reduction")
if price_type == "GIVE_AWAY" and price is not None:
raise ValueError("price must not be specified when price_type is GIVE_AWAY")
if price_type == "FIXED" and price is None:
raise ValueError("price is required when price_type is FIXED")
# Validate auto_price_reduction configuration
_validate_auto_price_reduction_constraints(price, auto_price_reduction)
return values
def update_content_hash(self) -> Self:
"""Calculate and updates the content_hash value for user-modifiable fields of the ad."""
# 1) Dump to a plain dict, excluding the metadata fields:
raw = self.model_dump(
exclude = {
"id",
"created_on",
"updated_on",
"content_hash",
"repost_count",
"price_reduction_count",
},
exclude_none = True,
exclude_unset = True,
)
# 2) Recursively prune any empty containers:
def prune(obj:Any) -> Any:
if isinstance(obj, Mapping):
return {
k: prune(v)
for k, v in obj.items()
# drop keys whose values are empty list/dict/set
if not (isinstance(v, (Mapping, Sequence, set)) and not isinstance(v, (str, bytes)) and len(v) == 0)
}
if isinstance(obj, Sequence) and not isinstance(obj, (str, bytes)):
return [prune(v) for v in obj if not (isinstance(v, (Mapping, Sequence, set)) and not isinstance(v, (str, bytes)) and len(v) == 0)]
return obj
pruned = prune(raw)
# 3) Produce a canonical JSON string and hash it:
json_string = json.dumps(pruned, sort_keys = True)
self.content_hash = hashlib.sha256(json_string.encode()).hexdigest()
return self
def to_ad(self, ad_defaults:AdDefaults) -> Ad:
"""
Returns a complete, validated Ad by merging this partial with values from ad_defaults.
Any non-list field that is `None` or `""` is filled from `ad_defaults`.
Raises `ValidationError` when, after merging with `ad_defaults`, not all fields required by `Ad` are populated.
"""
ad_cfg = self.model_dump()
dicts.apply_defaults(
target = ad_cfg,
defaults = ad_defaults.model_dump(),
ignore = lambda k, _: k == "description", # ignore legacy global description config
override = lambda _, v: (
not isinstance(v, list) and (v is None or (isinstance(v, str) and v == "")) # noqa: PLC1901
),
)
# Ensure internal counters are integers (not user-configurable)
if not isinstance(ad_cfg.get("price_reduction_count"), int):
ad_cfg["price_reduction_count"] = 0
if not isinstance(ad_cfg.get("repost_count"), int):
ad_cfg["repost_count"] = 0
return Ad.model_validate(ad_cfg)
def _calculate_auto_price_internal(
*, base_price:int | float | None, auto_price_reduction:AutoPriceReductionConfig | None, target_reduction_cycle:int, with_trace:bool
) -> tuple[int | None, list[PriceReductionStep], Decimal | None]:
"""
Calculate the effective price for the current run using commercial rounding.
Args:
base_price: original configured price used as the starting point.
auto_price_reduction: reduction configuration (enabled, strategy, amount, min_price, delays).
target_reduction_cycle: which reduction cycle to calculate the price for (0 = no reduction, 1 = first reduction, etc.).
with_trace: when True, record a PriceReductionStep entry for each applied reduction cycle.
Percentage reductions apply to the current price each cycle (compounded). Each reduction step is rounded
to full euros (commercial rounding with ROUND_HALF_UP) before the next reduction is applied.
Returns a tuple of (price, steps, price_floor); price is an int of whole euros, or None when base_price is None.
"""
if base_price is None:
return None, [], None
price = Decimal(str(base_price))
if not auto_price_reduction or not auto_price_reduction.enabled or target_reduction_cycle <= 0:
return int(price.quantize(EURO_PRECISION, rounding = ROUND_HALF_UP)), [], None
if auto_price_reduction.strategy is None or auto_price_reduction.amount is None:
return int(price.quantize(EURO_PRECISION, rounding = ROUND_HALF_UP)), [], None
if auto_price_reduction.min_price is None:
raise ValueError(_("min_price must be specified when auto_price_reduction is enabled"))
# Prices are published as whole euros; ensure the configured floor cannot be undercut by int() conversion.
price_floor = Decimal(str(auto_price_reduction.min_price)).quantize(EURO_PRECISION, rounding = ROUND_CEILING)
repost_cycles = target_reduction_cycle
steps:list[PriceReductionStep] = []
for cycle_idx in range(repost_cycles):
price_before = price
reduction_value = (
price * Decimal(str(auto_price_reduction.amount)) / Decimal("100")
if auto_price_reduction.strategy == "PERCENTAGE"
else Decimal(str(auto_price_reduction.amount))
)
price -= reduction_value
# Commercial rounding: round to full euros after each reduction step
price = price.quantize(EURO_PRECISION, rounding = ROUND_HALF_UP)
floor_applied = False
if price <= price_floor:
price = price_floor
floor_applied = True
if with_trace:
steps.append(
PriceReductionStep(
cycle = cycle_idx + 1,
price_before = price_before,
reduction_value = reduction_value,
price_after_rounding = price,
floor_applied = floor_applied,
)
)
if floor_applied:
break
return int(price), steps, price_floor
def calculate_auto_price(*, base_price:int | float | None, auto_price_reduction:AutoPriceReductionConfig | None, target_reduction_cycle:int) -> int | None:
return _calculate_auto_price_internal(
base_price = base_price,
auto_price_reduction = auto_price_reduction,
target_reduction_cycle = target_reduction_cycle,
with_trace = False,
)[0]
def calculate_auto_price_with_trace(
*, base_price:int | float | None, auto_price_reduction:AutoPriceReductionConfig | None, target_reduction_cycle:int
) -> tuple[int | None, list[PriceReductionStep], Decimal | None]:
"""Calculate auto price and return a step-by-step reduction trace.
Args:
base_price: starting price before reductions.
auto_price_reduction: reduction configuration (strategy, amount, floor, enabled).
target_reduction_cycle: reduction cycle to compute (0 = no reduction, 1 = first reduction).
Returns:
A tuple of ``(price, steps, price_floor)`` where:
- ``price`` is the computed effective price (``int``) or ``None`` when ``base_price`` is ``None``.
- ``steps`` is a list of ``PriceReductionStep`` entries containing the cycle trace.
- ``price_floor`` is the rounded ``Decimal`` floor used for clamping, or ``None`` when not applicable.
"""
return _calculate_auto_price_internal(
base_price = base_price,
auto_price_reduction = auto_price_reduction,
target_reduction_cycle = target_reduction_cycle,
with_trace = True,
)
# pyright: reportGeneralTypeIssues=false, reportIncompatibleVariableOverride=false
class Contact(ContactPartial):
name:str
zipcode:int | str
# pyright: reportGeneralTypeIssues=false, reportIncompatibleVariableOverride=false
class Ad(AdPartial):
active:bool
type:Literal["OFFER", "WANTED"]
description:str
price_type:Literal["FIXED", "NEGOTIABLE", "GIVE_AWAY", "NOT_APPLICABLE"]
shipping_type:Literal["PICKUP", "SHIPPING", "NOT_APPLICABLE"]
sell_directly:bool
contact:Contact
republication_interval:int
auto_price_reduction:AutoPriceReductionConfig = Field(default_factory = AutoPriceReductionConfig)
price_reduction_count:int = 0
@model_validator(mode = "after")
def _validate_auto_price_config(self) -> "Ad":
# Validate the final Ad object after merging with defaults
# This ensures the merged configuration is valid even if raw YAML had None values
_validate_auto_price_reduction_constraints(self.price, self.auto_price_reduction)
return self
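For reference, the compounded reduction with commercial rounding implemented above can be exercised as a standalone sketch. This assumes `EURO_PRECISION` quantizes to whole euros, i.e. `Decimal("1")`; the function below is illustrative only, not the project's `_calculate_auto_price_internal`:

```python
# Standalone sketch of the reduction loop (assumes EURO_PRECISION == Decimal("1")):
# each cycle subtracts a percentage of the *current* price, rounds to whole euros
# (ROUND_HALF_UP, i.e. commercial rounding), and clamps at the rounded-up floor.
from decimal import ROUND_CEILING, ROUND_HALF_UP, Decimal

def reduced_price(base:float, percent:float, min_price:float, cycles:int) -> int:
    price = Decimal(str(base))
    floor = Decimal(str(min_price)).quantize(Decimal("1"), rounding = ROUND_CEILING)
    for _ in range(cycles):
        price -= price * Decimal(str(percent)) / Decimal("100")
        price = price.quantize(Decimal("1"), rounding = ROUND_HALF_UP)
        if price <= floor:
            return int(floor)
    return int(price)

print(reduced_price(100, 10, 50, 3))  # 100 -> 90 -> 81 -> 73
```

Note how the third cycle reduces 81 by 8.10 to 72.90, which commercial rounding brings back up to 73 before the next cycle would run.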


@@ -0,0 +1,353 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
import copy
from gettext import gettext as _
from typing import Annotated, Any, Final, Literal
from pydantic import AfterValidator, Field, model_validator
from typing_extensions import deprecated
from kleinanzeigen_bot.model.update_check_model import UpdateCheckConfig
from kleinanzeigen_bot.utils import dicts, loggers
from kleinanzeigen_bot.utils.misc import get_attr
from kleinanzeigen_bot.utils.pydantics import ContextualModel
LOG:Final[loggers.Logger] = loggers.get_logger(__name__)
_MAX_PERCENTAGE:Final[int] = 100
class AutoPriceReductionConfig(ContextualModel):
enabled:bool = Field(default = False, description = "automatically lower the price of reposted ads")
strategy:Literal["FIXED", "PERCENTAGE"] | None = Field(
default = None,
description = "reduction strategy (required when enabled: true). PERCENTAGE = % of price, FIXED = absolute amount",
examples = ["PERCENTAGE", "FIXED"],
)
amount:float | None = Field(
default = None,
gt = 0,
description = "reduction amount (required when enabled: true). For PERCENTAGE: use percent value (e.g., 10 = 10%%). For FIXED: use currency amount",
examples = [10.0, 5.0, 20.0],
)
min_price:float | None = Field(
default = None, ge = 0, description = "minimum price floor (required when enabled: true). Use 0 for no minimum", examples = [1.0, 5.0, 10.0]
)
delay_reposts:int = Field(default = 0, ge = 0, description = "number of reposts to wait before applying the first automatic price reduction")
delay_days:int = Field(default = 0, ge = 0, description = "number of days to wait after publication before applying automatic price reductions")
@model_validator(mode = "after")
def _validate_config(self) -> "AutoPriceReductionConfig":
if self.enabled:
if self.strategy is None:
raise ValueError(_("strategy must be specified when auto_price_reduction is enabled"))
if self.amount is None:
raise ValueError(_("amount must be specified when auto_price_reduction is enabled"))
if self.min_price is None:
raise ValueError(_("min_price must be specified when auto_price_reduction is enabled"))
if self.strategy == "PERCENTAGE" and self.amount > _MAX_PERCENTAGE:
raise ValueError(_("Percentage reduction amount must not exceed %s") % _MAX_PERCENTAGE)
return self
class ContactDefaults(ContextualModel):
name:str = Field(default = "", description = "contact name displayed on the ad")
street:str = Field(default = "", description = "street address for the listing")
zipcode:int | str = Field(default = "", description = "postal/ZIP code for the listing location")
location:str = Field(
default = "",
description = "city or locality of the listing (can include multiple districts)",
examples = ["Sample Town - District One"],
)
phone:str = Field(
default = "",
description = "phone number for contact - only available for commercial accounts, personal accounts no longer support this",
examples = ['"01234 567890"'],
)
@deprecated("Use description_prefix/description_suffix instead")
class DescriptionAffixes(ContextualModel):
prefix:str | None = Field(default = None, description = "text to prepend to the ad description (deprecated, use description_prefix)")
suffix:str | None = Field(default = None, description = "text to append to the ad description (deprecated, use description_suffix)")
class AdDefaults(ContextualModel):
active:bool = Field(default = True, description = "whether the ad should be published (false = skip this ad)")
type:Literal["OFFER", "WANTED"] = Field(default = "OFFER", description = "type of the ad listing", examples = ["OFFER", "WANTED"])
description:DescriptionAffixes | None = Field(default = None, description = "DEPRECATED: Use description_prefix/description_suffix instead")
description_prefix:str | None = Field(default = "", description = "text to prepend to each ad (optional)")
description_suffix:str | None = Field(default = "", description = "text to append to each ad (optional)")
price_type:Literal["FIXED", "NEGOTIABLE", "GIVE_AWAY", "NOT_APPLICABLE"] = Field(
default = "NEGOTIABLE", description = "pricing strategy for the listing", examples = ["FIXED", "NEGOTIABLE", "GIVE_AWAY", "NOT_APPLICABLE"]
)
auto_price_reduction:AutoPriceReductionConfig = Field(
default_factory = AutoPriceReductionConfig, description = "automatic price reduction configuration for reposted ads"
)
shipping_type:Literal["PICKUP", "SHIPPING", "NOT_APPLICABLE"] = Field(
default = "SHIPPING", description = "shipping method for the item", examples = ["PICKUP", "SHIPPING", "NOT_APPLICABLE"]
)
sell_directly:bool = Field(default = False, description = "enable direct purchase option (only works when shipping_type is SHIPPING)")
images:list[str] | None = Field(
default_factory = list,
description = "default image glob patterns (optional). Leave empty for no default images",
examples = ['"images/*.jpg"', '"photos/*.{png,jpg}"'],
)
contact:ContactDefaults = Field(default_factory = ContactDefaults, description = "default contact information for ads")
republication_interval:int = Field(default = 7, description = "number of days between automatic republication of ads")
@model_validator(mode = "before")
@classmethod
def migrate_legacy_description(cls, values:dict[str, Any]) -> dict[str, Any]:
# Ensure flat prefix/suffix take precedence over deprecated nested "description"
description_prefix = values.get("description_prefix")
description_suffix = values.get("description_suffix")
legacy_prefix = get_attr(values, "description.prefix")
legacy_suffix = get_attr(values, "description.suffix")
if not description_prefix and legacy_prefix is not None:
values["description_prefix"] = legacy_prefix
if not description_suffix and legacy_suffix is not None:
values["description_suffix"] = legacy_suffix
return values
class DownloadConfig(ContextualModel):
include_all_matching_shipping_options:bool = Field(
default = False,
description = "if true, all shipping options matching the package size will be included",
)
excluded_shipping_options:list[str] = Field(
default_factory = list,
description = ("shipping options to exclude (optional). Leave as [] to include all. Add items like 'DHL_2' to exclude specific carriers"),
examples = ['"DHL_2"', '"DHL_5"', '"Hermes"'],
)
folder_name_max_length:int = Field(
default = 100,
ge = 10,
le = 255,
description = "maximum length for folder names when downloading ads (default: 100)",
)
rename_existing_folders:bool = Field(
default = False,
description = "if true, rename existing folders without titles to include titles (default: false)",
)
class BrowserConfig(ContextualModel):
arguments:list[str] = Field(
default_factory = list,
description=(
"additional Chromium command line switches (optional). Leave as [] for default behavior. "
"See https://peter.sh/experiments/chromium-command-line-switches/ "
"Common: --headless (no GUI), --disable-dev-shm-usage (Docker fix), --user-data-dir=/path"
),
examples = ['"--headless"', '"--disable-dev-shm-usage"', '"--user-data-dir=/path/to/profile"'],
)
binary_location:str | None = Field(default = "", description = "path to custom browser executable (optional). Leave empty to use system default")
extensions:list[str] = Field(
default_factory = list,
description = "Chrome extensions to load (optional). Leave as [] for no extensions. Add .crx file paths relative to config file",
examples = ['"extensions/adblock.crx"', '"/absolute/path/to/extension.crx"'],
)
use_private_window:bool = Field(default = True, description = "open browser in private/incognito mode (recommended to avoid cookie conflicts)")
user_data_dir:str | None = Field(
default = "",
description = "custom browser profile directory (optional). Leave empty for auto-configured default",
)
profile_name:str | None = Field(
default = "",
description = "browser profile name (optional). Leave empty for default profile",
examples = ['"Profile 1"'],
)
class LoginConfig(ContextualModel):
username:str = Field(..., min_length = 1, description = "kleinanzeigen.de login email or username")
password:str = Field(..., min_length = 1, description = "kleinanzeigen.de login password")
class PublishingConfig(ContextualModel):
delete_old_ads:Literal["BEFORE_PUBLISH", "AFTER_PUBLISH", "NEVER"] | None = Field(
default = "AFTER_PUBLISH", description = "when to delete old versions of republished ads", examples = ["BEFORE_PUBLISH", "AFTER_PUBLISH", "NEVER"]
)
delete_old_ads_by_title:bool = Field(default = True, description = "match old ads by title when deleting (only works with BEFORE_PUBLISH)")
class CaptchaConfig(ContextualModel):
auto_restart:bool = Field(
default = False, description = "if true, abort when captcha is detected and auto-retry after restart_delay (if false, wait for manual solving)"
)
restart_delay:str = Field(
default = "6h", description = "duration to wait before retrying after captcha detection (e.g., 1h30m, 6h, 30m)", examples = ["6h", "1h30m", "30m"]
)
class TimeoutConfig(ContextualModel):
multiplier:float = Field(default = 1.0, ge = 0.1, description = "Global multiplier applied to all timeout values.")
default:float = Field(default = 5.0, ge = 0.0, description = "Baseline timeout for DOM interactions.")
page_load:float = Field(default = 15.0, ge = 1.0, description = "Page load timeout for web_open.")
captcha_detection:float = Field(default = 2.0, ge = 0.1, description = "Timeout for captcha iframe detection.")
sms_verification:float = Field(default = 4.0, ge = 0.1, description = "Timeout for SMS verification prompts.")
email_verification:float = Field(default = 4.0, ge = 0.1, description = "Timeout for email verification prompts.")
gdpr_prompt:float = Field(default = 10.0, ge = 1.0, description = "Timeout for GDPR/consent dialogs.")
login_detection:float = Field(default = 10.0, ge = 1.0, description = "Timeout for detecting existing login session via DOM elements.")
publishing_result:float = Field(default = 300.0, ge = 10.0, description = "Timeout for publishing result checks.")
publishing_confirmation:float = Field(default = 20.0, ge = 1.0, description = "Timeout for publish confirmation redirect.")
image_upload:float = Field(default = 30.0, ge = 5.0, description = "Timeout for image upload and server-side processing.")
pagination_initial:float = Field(default = 10.0, ge = 1.0, description = "Timeout for initial pagination lookup.")
pagination_follow_up:float = Field(default = 5.0, ge = 1.0, description = "Timeout for subsequent pagination navigation.")
quick_dom:float = Field(default = 2.0, ge = 0.1, description = "Generic short timeout for transient UI.")
update_check:float = Field(default = 10.0, ge = 1.0, description = "Timeout for GitHub update checks.")
chrome_remote_probe:float = Field(default = 2.0, ge = 0.1, description = "Timeout for local remote-debugging probes.")
chrome_remote_debugging:float = Field(default = 5.0, ge = 1.0, description = "Timeout for remote debugging API calls.")
chrome_binary_detection:float = Field(default = 10.0, ge = 1.0, description = "Timeout for chrome --version subprocesses.")
retry_enabled:bool = Field(default = True, description = "Enable built-in retry/backoff for DOM operations.")
retry_max_attempts:int = Field(default = 2, ge = 1, description = "Max retry attempts when retry is enabled.")
retry_backoff_factor:float = Field(default = 1.5, ge = 1.0, description = "Exponential factor applied per retry attempt.")
def resolve(self, key:str = "default", override:float | None = None) -> float:
"""
Return the base timeout (seconds) for the given key without applying modifiers.
"""
if override is not None:
return float(override)
if key == "default":
return float(self.default)
attr = getattr(self, key, None)
if isinstance(attr, (int, float)):
return float(attr)
return float(self.default)
def effective(self, key:str = "default", override:float | None = None, *, attempt:int = 0) -> float:
"""
Return the effective timeout (seconds) with multiplier/backoff applied.
"""
base = self.resolve(key, override)
backoff = self.retry_backoff_factor**attempt if attempt > 0 else 1.0
return base * self.multiplier * backoff
class CaptureOnConfig(ContextualModel):
"""Configuration for which operations should trigger diagnostics capture."""
login_detection:bool = Field(
default = False,
description = "Capture screenshot and HTML when login state detection fails",
)
publish:bool = Field(
default = False,
description = "Capture screenshot, HTML, and JSON on publish failures",
)
class DiagnosticsConfig(ContextualModel):
capture_on:CaptureOnConfig = Field(
default_factory = CaptureOnConfig,
description = "Enable diagnostics capture for specific operations.",
)
capture_log_copy:bool = Field(
default = False,
description = "If true, copy the entire bot log file when diagnostics are captured (may duplicate log content).",
)
pause_on_login_detection_failure:bool = Field(
default = False,
description = "If true, pause (interactive runs only) after capturing login detection diagnostics "
"so that user can inspect the browser. Requires capture_on.login_detection to be enabled.",
)
output_dir:str | None = Field(
default = None,
description = "Optional output directory for diagnostics artifacts. If omitted, a safe default is used based on installation mode.",
)
timing_collection:bool = Field(
default = True,
description = "If true, collect local timeout timing data and write it to diagnostics JSON for troubleshooting and tuning.",
)
@model_validator(mode = "before")
@classmethod
def migrate_legacy_diagnostics_keys(cls, data:dict[str, Any]) -> dict[str, Any]:
"""Migrate legacy login_detection_capture and publish_error_capture keys."""
# Migrate legacy login_detection_capture -> capture_on.login_detection
# Only migrate if the new key is not already explicitly set
if "login_detection_capture" in data:
LOG.warning("Deprecated: 'login_detection_capture' is replaced by 'capture_on.login_detection'. Please update your config.")
if "capture_on" not in data or data["capture_on"] is None:
data["capture_on"] = {}
if isinstance(data["capture_on"], dict) and "login_detection" not in data["capture_on"]:
data["capture_on"]["login_detection"] = data.pop("login_detection_capture")
else:
# Remove legacy key but don't overwrite explicit new value
data.pop("login_detection_capture")
# Migrate legacy publish_error_capture -> capture_on.publish
# Only migrate if the new key is not already explicitly set
if "publish_error_capture" in data:
LOG.warning("Deprecated: 'publish_error_capture' is replaced by 'capture_on.publish'. Please update your config.")
if "capture_on" not in data or data["capture_on"] is None:
data["capture_on"] = {}
if isinstance(data["capture_on"], dict) and "publish" not in data["capture_on"]:
data["capture_on"]["publish"] = data.pop("publish_error_capture")
else:
# Remove legacy key but don't overwrite explicit new value
data.pop("publish_error_capture")
return data
@model_validator(mode = "after")
def _validate_pause_requires_capture(self) -> "DiagnosticsConfig":
if self.pause_on_login_detection_failure and not self.capture_on.login_detection:
raise ValueError(_("pause_on_login_detection_failure requires capture_on.login_detection to be enabled"))
return self
def _validate_glob_pattern(v:str) -> str:
if not v.strip():
raise ValueError(_("must be a non-empty, non-blank glob pattern"))
return v
GlobPattern = Annotated[str, AfterValidator(_validate_glob_pattern)]
class Config(ContextualModel):
ad_files:list[GlobPattern] = Field(
default_factory = lambda: ["./**/ad_*.{json,yml,yaml}"],
json_schema_extra = {"default": ["./**/ad_*.{json,yml,yaml}"]},
min_length = 1,
description = """
glob (wildcard) patterns to select ad configuration files
if relative paths are specified, then they are relative to this configuration file
""",
)
ad_defaults:AdDefaults = Field(default_factory = AdDefaults, description = "Default values for ads, can be overwritten in each ad configuration file")
categories:dict[str, str] = Field(
default_factory = dict,
description=(
"additional name to category ID mappings (optional). Leave as {} if not needed. "
"See full list at: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/blob/main/src/kleinanzeigen_bot/resources/categories.yaml "
"To add: use format 'Category > Subcategory': 'ID'"
),
examples = ['"Elektronik > Notebooks": "161/278"', '"Jobs > Praktika": "102/125"'],
)
download:DownloadConfig = Field(default_factory = DownloadConfig)
publishing:PublishingConfig = Field(default_factory = PublishingConfig)
browser:BrowserConfig = Field(default_factory = BrowserConfig, description = "Browser configuration")
login:LoginConfig = Field(default_factory = LoginConfig.model_construct, description = "Login credentials")
captcha:CaptchaConfig = Field(default_factory = CaptchaConfig)
update_check:UpdateCheckConfig = Field(default_factory = UpdateCheckConfig, description = "Update check configuration")
timeouts:TimeoutConfig = Field(default_factory = TimeoutConfig, description = "Centralized timeout configuration.")
diagnostics:DiagnosticsConfig = Field(default_factory = DiagnosticsConfig, description = "diagnostics capture configuration for troubleshooting")
def with_values(self, values:dict[str, Any]) -> Config:
return Config.model_validate(dicts.apply_defaults(copy.deepcopy(values), defaults = self.model_dump()))
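`dicts.apply_defaults` is a project helper; purely to illustrate the defaults-merge pattern that `with_values` relies on, a hypothetical recursive variant could look like this (`merge_defaults` is an invented name for this sketch, not the project API):

```python
# Hypothetical sketch of a recursive defaults merge: keys missing from target
# are filled from defaults; nested dicts are merged the same way.
from copy import deepcopy
from typing import Any

def merge_defaults(target:dict[str, Any], defaults:dict[str, Any]) -> dict[str, Any]:
    for key, default in defaults.items():
        if key not in target:
            target[key] = deepcopy(default)  # copy so defaults stay unmodified
        elif isinstance(target[key], dict) and isinstance(default, dict):
            merge_defaults(target[key], default)
    return target

cfg = merge_defaults(
    {"login": {"username": "me"}},
    {"login": {"username": "", "password": ""}, "captcha": {"auto_restart": False}},
)
print(cfg["login"])  # {'username': 'me', 'password': ''}
```

Explicitly set values (even empty ones) win over defaults here; the project's real helper additionally supports `ignore`/`override` callbacks, as seen in `AdPartial.to_ad` above.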


@@ -0,0 +1,27 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
from typing import Literal
from pydantic import Field
from kleinanzeigen_bot.utils.pydantics import ContextualModel
class UpdateCheckConfig(ContextualModel):
enabled:bool = Field(default = True, description = "whether to check for updates on startup")
channel:Literal["latest", "preview"] = Field(
default = "latest", description = "which release channel to check (latest = stable, preview = prereleases)", examples = ["latest", "preview"]
)
interval:str = Field(
default = "7d",
description=(
"how often to check for updates (e.g., 7d, 1d). "
"If invalid, too short (<1d), or too long (>30d), "
"uses defaults: 1d for 'preview' channel, 7d for 'latest' channel"
),
examples = ["7d", "1d", "14d"],
)
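`misc.parse_duration`, which the update checker uses to interpret the `interval` strings above, is project-specific; a hypothetical parser for the `7d`/`1d 12h` format might look like this sketch (`parse_duration` here is an assumption about the format, not the project implementation):

```python
# Hypothetical sketch: parse "7d", "1d 12h", "30m" style strings into a timedelta.
import datetime
import re

def parse_duration(s:str) -> datetime.timedelta:
    units = {"d": 86400, "h": 3600, "m": 60, "s": 1}
    # sum up every "<number><unit>" token; unknown tokens contribute nothing
    seconds = sum(int(n) * units[u] for n, u in re.findall(r"(\d+)\s*([dhms])", s))
    return datetime.timedelta(seconds = seconds)

print(parse_duration("1d 12h"))  # 1 day, 12:00:00
```

A parser like this silently yields a zero timedelta for unparsable input, which is why the state model below separately distinguishes "explicit zero" from "invalid format".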


@@ -0,0 +1,195 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
import datetime
import json
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from pathlib import Path
from kleinanzeigen_bot.utils import dicts, loggers, misc, xdg_paths
from kleinanzeigen_bot.utils.pydantics import ContextualModel
LOG = loggers.get_logger(__name__)
# Current version of the state file format
CURRENT_STATE_VERSION = 1
# Maximum allowed interval in days
MAX_INTERVAL_DAYS = 30
class UpdateCheckState(ContextualModel):
"""State for update checking functionality."""
version:int = CURRENT_STATE_VERSION
last_check:datetime.datetime | None = None
@classmethod
def _parse_timestamp(cls, timestamp_str:str) -> datetime.datetime | None:
"""Parse a timestamp string and ensure it's in UTC.
Args:
timestamp_str: The timestamp string to parse.
Returns:
The parsed timestamp in UTC, or None if parsing fails.
"""
try:
timestamp = datetime.datetime.fromisoformat(timestamp_str)
if timestamp.tzinfo is None:
# If no timezone info, assume UTC
timestamp = timestamp.replace(tzinfo = datetime.timezone.utc)
elif timestamp.tzinfo != datetime.timezone.utc:
# Convert to UTC if in a different timezone
timestamp = timestamp.astimezone(datetime.timezone.utc)
return timestamp
except ValueError as e:
LOG.warning("Invalid timestamp format in state file: %s", e)
return None
@classmethod
def load(cls, state_file:Path) -> UpdateCheckState:
"""Load the update check state from a file.
Args:
state_file: The path to the state file.
Returns:
The loaded state.
"""
if not state_file.exists():
return cls()
if state_file.stat().st_size == 0:
return cls()
try:
data = dicts.load_dict(str(state_file))
if not data:
return cls()
# Handle version migration
version = data.get("version", 0)
if version < CURRENT_STATE_VERSION:
LOG.info("Migrating update check state from version %d to %d", version, CURRENT_STATE_VERSION)
data = cls._migrate_state(data, version)
# Parse last_check timestamp
if "last_check" in data:
data["last_check"] = cls._parse_timestamp(data["last_check"])
return cls.model_validate(data)
except (json.JSONDecodeError, ValueError) as e:
LOG.warning("Failed to load update check state: %s", e)
return cls()
@classmethod
def _migrate_state(cls, data:dict[str, Any], from_version:int) -> dict[str, Any]:
"""Migrate state data from an older version to the current version.
Args:
data: The state data to migrate.
from_version: The version of the state data.
Returns:
The migrated state data.
"""
# Version 0 to 1: Add version field
if from_version == 0:
data["version"] = CURRENT_STATE_VERSION
LOG.debug("Migrated state from version 0 to 1: Added version field")
return data
def save(self, state_file:Path) -> None:
"""Save the update check state to a file.
Args:
state_file: The path to the state file.
"""
try:
data = self.model_dump()
if data["last_check"]:
# Ensure timestamp is in UTC before saving
if data["last_check"].tzinfo != datetime.timezone.utc:
data["last_check"] = data["last_check"].astimezone(datetime.timezone.utc)
data["last_check"] = data["last_check"].isoformat()
xdg_paths.ensure_directory(state_file.parent, "update check state directory")
dicts.save_dict(str(state_file), data)
except PermissionError:
LOG.warning("Permission denied when saving update check state to %s", state_file)
except Exception as e:
LOG.warning("Failed to save update check state: %s", e)
def update_last_check(self) -> None:
"""Update the last check time to now in UTC."""
self.last_check = datetime.datetime.now(datetime.timezone.utc)
def _validate_update_interval(self, interval:str) -> tuple[datetime.timedelta, bool, str]:
"""
Validate the update check interval string.
Returns (timedelta, is_valid, reason).
"""
td = misc.parse_duration(interval)
# Treat explicit zero (e.g. "0d", "0h", "0m", "0s", "0") as invalid, but distinguish it from unparsable input (typos)
if td.total_seconds() == 0:
if interval.strip() in {"0d", "0h", "0m", "0s", "0"}:
return td, False, "Interval is zero, which is not allowed."
return td, False, "Invalid interval format or unsupported unit."
if td.total_seconds() < 0:
return td, False, "Negative interval is not allowed."
return td, True, ""
def should_check(self, interval:str, channel:str = "latest") -> bool:
"""
Determine if an update check should be performed based on the provided interval.
Args:
interval: The interval string (e.g. '7d', '1d 12h', etc.)
channel: The update channel ('latest' or 'preview') for fallback default interval.
Returns:
bool: True if an update check should be performed, False otherwise.
Notes:
- If interval is invalid, negative, zero, or above max, falls back to default interval for the channel.
- Only returns True if more than the interval has passed since last_check.
- Always compares in UTC.
"""
fallback = False
td, is_valid, reason = self._validate_update_interval(interval)
total_days = td.total_seconds() / 86400 if td else 0
epsilon = 1e-6
if not is_valid:
if reason == "Interval is zero, which is not allowed.":
LOG.warning("Interval is zero: %s. Minimum interval is 1d. Using default interval for this run.", interval)
elif reason == "Invalid interval format or unsupported unit.":
LOG.warning("Invalid interval format or unsupported unit: %s. Using default interval for this run.", interval)
elif reason == "Negative interval is not allowed.":
LOG.warning("Negative interval: %s. Minimum interval is 1d. Using default interval for this run.", interval)
fallback = True
elif total_days > MAX_INTERVAL_DAYS + epsilon:
LOG.warning("Interval too long: %s. Maximum interval is 30d. Using default interval for this run.", interval)
fallback = True
elif total_days < 1 - epsilon:
LOG.warning("Interval too short: %s. Minimum interval is 1d. Using default interval for this run.", interval)
fallback = True
if fallback:
# Fallback to default interval based on channel
if channel == "preview":
td = misc.parse_duration("1d")
LOG.warning("Falling back to default interval: 1d (preview channel). Please fix your config to avoid this warning.")
else:
td = misc.parse_duration("7d")
LOG.warning("Falling back to default interval: 7d (latest channel). Please fix your config to avoid this warning.")
if not self.last_check:
return True
now = datetime.datetime.now(datetime.timezone.utc)
elapsed = now - self.last_check
# Compare using integer seconds to avoid microsecond-level flakiness
return int(elapsed.total_seconds()) > int(td.total_seconds())
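The interval-gating logic above can be sketched as a self-contained function. This is a minimal standalone version, assuming a toy `parse_duration` in place of the bot's `misc.parse_duration` and the 1d minimum / 30d maximum / 7d-latest / 1d-preview defaults stated in the warnings:

```python
import datetime

# Channel fallback defaults and bounds, mirroring the warnings in the code above
DEFAULT_INTERVALS = {"latest": datetime.timedelta(days=7), "preview": datetime.timedelta(days=1)}
MAX_INTERVAL = datetime.timedelta(days=30)

def parse_duration(text: str) -> datetime.timedelta:
    """Tiny 'Nd'/'Nh'/'Nm'/'Ns' parser for this sketch (not the bot's misc.parse_duration)."""
    total = datetime.timedelta()
    for token in text.split():
        unit = token[-1]
        value = float(token[:-1]) if unit in "dhms" else float(token)
        if unit == "h":
            total += datetime.timedelta(hours=value)
        elif unit == "m":
            total += datetime.timedelta(minutes=value)
        elif unit == "s":
            total += datetime.timedelta(seconds=value)
        else:  # 'd' suffix or a bare number, both treated as days
            total += datetime.timedelta(days=value)
    return total

def should_check(last_check, interval: str, channel: str = "latest") -> bool:
    """Return True when more than the (clamped) interval has passed since last_check."""
    td = parse_duration(interval)
    # Out-of-range or zero intervals fall back to the channel default
    if not datetime.timedelta(days=1) <= td <= MAX_INTERVAL:
        td = DEFAULT_INTERVALS.get(channel, DEFAULT_INTERVALS["latest"])
    if last_check is None:
        return True  # never checked before
    now = datetime.datetime.now(datetime.timezone.utc)
    # Integer seconds, as above, to avoid microsecond-level flakiness
    return int((now - last_check).total_seconds()) > int(td.total_seconds())
```

Note the design choice carried over from the original: an invalid interval never aborts the run; it only downgrades to the channel default for this invocation.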


@@ -1,22 +0,0 @@
active: # one of: true, false
type: # one of: OFFER, WANTED
title:
description:
category:
special_attributes: {}
price:
price_type: # one of: FIXED, NEGOTIABLE, GIVE_AWAY, NOT_APPLICABLE
shipping_type: # one of: PICKUP, SHIPPING, NOT_APPLICABLE
shipping_costs:
shipping_options: [] # see README.md for more information
sell_directly: # requires shipping_options to take effect
images: []
contact:
name:
street:
zipcode:
phone:
republication_interval:
id:
created_on:
updated_on:
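The deleted template above enumerates the ad fields, with `# one of:` comments listing the allowed enum values. A minimal sketch of checking those enum-valued fields on an already-parsed ad dict (field names and allowed values taken from the template; the `validate_ad` helper itself is hypothetical, not part of the bot):

```python
# Allowed values for the enum-valued fields, copied from the template comments
AD_ENUMS = {
    "active": {True, False},
    "type": {"OFFER", "WANTED"},
    "price_type": {"FIXED", "NEGOTIABLE", "GIVE_AWAY", "NOT_APPLICABLE"},
    "shipping_type": {"PICKUP", "SHIPPING", "NOT_APPLICABLE"},
}

def validate_ad(ad: dict) -> list[str]:
    """Return one error message per enum field that is set to an unsupported value."""
    errors = []
    for field, allowed in AD_ENUMS.items():
        value = ad.get(field)
        if value is not None and value not in allowed:
            errors.append(f"{field}: {value!r} not in {sorted(map(str, allowed))}")
    return errors
```

Unset fields pass through unvalidated here, matching the template where most values are optional.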


@@ -1,198 +1,582 @@
# Elektronik Auto, Rad & Boot: 210/241
Auto, Rad & Boot > Autos: 210/216/sonstige_autos
Auto, Rad & Boot > Autos > Alfa Romeo: 210/216/alfa_romeo
Auto, Rad & Boot > Autos > Audi: 210/216/audi
Auto, Rad & Boot > Autos > BMW: 210/216/bmw
Auto, Rad & Boot > Autos > Chevrolet: 210/216/chevrolet
Auto, Rad & Boot > Autos > Chrysler: 210/216/chrysler
Auto, Rad & Boot > Autos > Citroen: 210/216/citroen
Auto, Rad & Boot > Autos > Dacia: 210/216/dacia
Auto, Rad & Boot > Autos > Daewoo: 210/216/daewoo
Auto, Rad & Boot > Autos > Daihatsu: 210/216/daihatsu
Auto, Rad & Boot > Autos > Fiat: 210/216/fiat
Auto, Rad & Boot > Autos > Ford: 210/216/ford
Auto, Rad & Boot > Autos > Honda: 210/216/honda
Auto, Rad & Boot > Autos > Hyundai: 210/216/hyundai
Auto, Rad & Boot > Autos > Jaguar: 210/216/jaguar
Auto, Rad & Boot > Autos > Jeep: 210/216/jeep
Auto, Rad & Boot > Autos > Kia: 210/216/kia
Auto, Rad & Boot > Autos > Lada: 210/216/lada
Auto, Rad & Boot > Autos > Lancia: 210/216/lancia
Auto, Rad & Boot > Autos > Land Rover: 210/216/land_rover
Auto, Rad & Boot > Autos > Lexus: 210/216/lexus
Auto, Rad & Boot > Autos > Mazda: 210/216/mazda
Auto, Rad & Boot > Autos > Mercedes Benz: 210/216/mercedes_benz
Auto, Rad & Boot > Autos > Mini: 210/216/mini
Auto, Rad & Boot > Autos > Mitsubishi: 210/216/mitsubishi
Auto, Rad & Boot > Autos > Nissan: 210/216/nissan
Auto, Rad & Boot > Autos > Opel: 210/216/opel
Auto, Rad & Boot > Autos > Peugeot: 210/216/peugeot
Auto, Rad & Boot > Autos > Porsche: 210/216/porsche
Auto, Rad & Boot > Autos > Renault: 210/216/renault
Auto, Rad & Boot > Autos > Rover: 210/216/rover
Auto, Rad & Boot > Autos > Saab: 210/216/saab
Auto, Rad & Boot > Autos > Seat: 210/216/seat
Auto, Rad & Boot > Autos > Skoda: 210/216/skoda
Auto, Rad & Boot > Autos > Smart: 210/216/smart
Auto, Rad & Boot > Autos > Subaru: 210/216/subaru
Auto, Rad & Boot > Autos > Suzuki: 210/216/suzuki
Auto, Rad & Boot > Autos > Tesla: 210/216/tesla
Auto, Rad & Boot > Autos > Toyota: 210/216/toyota
Auto, Rad & Boot > Autos > Trabant: 210/216/trabant
Auto, Rad & Boot > Autos > Volkswagen: 210/216/volkswagen
Auto, Rad & Boot > Autos > Volvo: 210/216/volvo
Auto, Rad & Boot > Autoteile & Reifen: 210/223/sonstige_autoteile
Auto, Rad & Boot > Autoteile & Reifen > Auto Hifi & Navigation: 210/223/auto_hifi_navigation
Auto, Rad & Boot > Autoteile & Reifen > Ersatz- & Reparaturteile: 210/223/ersatz_reparaturteile
Auto, Rad & Boot > Autoteile & Reifen > Reifen & Felgen: 210/223/reifen_felgen
Auto, Rad & Boot > Autoteile & Reifen > Tuning & Styling: 210/223/tuning_styling
Auto, Rad & Boot > Autoteile & Reifen > Werkzeug: 210/223/werkzeug
Auto, Rad & Boot > Boote & Bootszubehör: 210/211/andere
Auto, Rad & Boot > Boote & Bootszubehör > Motorboote: 210/211/motorboote
Auto, Rad & Boot > Boote & Bootszubehör > Segelboote: 210/211/segelboote
Auto, Rad & Boot > Boote & Bootszubehör > Kleinboote: 210/211/kleinboote
Auto, Rad & Boot > Boote & Bootszubehör > Schlauchboote: 210/211/schlauchboote
Auto, Rad & Boot > Boote & Bootszubehör > Jetski: 210/211/jetski
Auto, Rad & Boot > Boote & Bootszubehör > Bootstrailer: 210/211/bootstrailer
Auto, Rad & Boot > Boote & Bootszubehör > Bootsliegeplätze: 210/211/bootsliegeplaetze
Auto, Rad & Boot > Boote & Bootszubehör > Bootszubehör: 210/211/bootszubehoer
Auto, Rad & Boot > Fahrräder & Zubehör: 210/217/weiteres
Auto, Rad & Boot > Fahrräder & Zubehör > Damen: 210/217/damen
Auto, Rad & Boot > Fahrräder & Zubehör > Herren: 210/217/herren
Auto, Rad & Boot > Fahrräder & Zubehör > Kinder: 210/217/kinder
Auto, Rad & Boot > Fahrräder & Zubehör > Zubehör: 210/217/zubehoer
Auto, Rad & Boot > Motorräder & Motorroller > Mofas & Mopeds: 210/305/mofa
Auto, Rad & Boot > Motorräder & Motorroller > Motorräder: 210/305/motorrad
Auto, Rad & Boot > Motorräder & Motorroller > Motorroller & Scooter: 210/305/roller
Auto, Rad & Boot > Motorräder & Motorroller > Quads: 210/305/quad
Auto, Rad & Boot > Motorradteile & Zubehör > Ersatz- & Reparaturteile: 210/306/teile
Auto, Rad & Boot > Motorradteile & Zubehör > Reifen & Felgen: 210/306/reifen_felgen
Auto, Rad & Boot > Motorradteile & Zubehör > Motorradbekleidung: 210/306/kleidung
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger: 210/276/andere
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger > Agrarfahrzeuge: 210/276/agrarfahrzeuge
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger > Anhänger: 210/276/anhaenger
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger > Baumaschinen: 210/276/baumaschinen
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger > Busse: 210/276/busse
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger > LKW: 210/276/lkw
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger > Sattelzugmaschinen & Auflieger: 210/276/sattelzugmaschinen_auflieger
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger > Stapler: 210/276/stapler
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger > Traktoren: 210/276/traktoren
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger > Transporter: 210/276/transporter
Auto, Rad & Boot > Nutzfahrzeuge & Anhänger > Nutzfahrzeugteile & Zubehör: 210/276/zubehoer
Auto, Rad & Boot > Reparaturen & Dienstleistungen: 210/280
Auto, Rad & Boot > Wohnwagen & -mobile: 210/220/andere
Auto, Rad & Boot > Wohnwagen & -mobile > Alkoven: 210/220/alkoven
Auto, Rad & Boot > Wohnwagen & -mobile > Integrierter: 210/220/integrierter
Auto, Rad & Boot > Wohnwagen & -mobile > Kastenwagen: 210/220/kastenwagen
Auto, Rad & Boot > Wohnwagen & -mobile > Teilintegrierter: 210/220/teilintegrierter
Auto, Rad & Boot > Wohnwagen & -mobile > Wohnwagen: 210/220/wohnwagen
Dienstleistungen: 297/298
Dienstleistungen > Altenpflege: 297/288
Dienstleistungen > Auto, Rad & Boot: 297/289
Dienstleistungen > Babysitter/-in & Kinderbetreuung: 297/290
Dienstleistungen > Elektronik: 297/293
Dienstleistungen > Haus & Garten: 297/291/sonstige
Dienstleistungen > Haus & Garten > Bau & Handwerk: 297/291/bau_handwerk
Dienstleistungen > Haus & Garten > Garten- & Landschaftsbau: 297/291/garten_landschaftsbau
Dienstleistungen > Haus & Garten > Haushaltshilfe: 297/291/haushaltshilfe
Dienstleistungen > Haus & Garten > Reinigungsservice: 297/291/reingungsservice
Dienstleistungen > Haus & Garten > Reparaturen: 297/291/reparaturen
Dienstleistungen > Haus & Garten > Wohnungsauflösungen: 297/291/wohnungsaufloesungen
Dienstleistungen > Künstler/-in & Musiker/-in: 297/292
Dienstleistungen > Reise & Event: 297/294
Dienstleistungen > Tierbetreuung & Training: 297/295
Dienstleistungen > Umzug & Transport: 297/296
Eintrittskarten & Tickets: 231/256
Eintrittskarten & Tickets > Bahn & ÖPNV: 231/286
Eintrittskarten & Tickets > Comedy & Kabarett: 231/254
Eintrittskarten & Tickets > Gutscheine: 231/287
Eintrittskarten & Tickets > Kinder: 231/252
Eintrittskarten & Tickets > Konzerte: 231/255
Eintrittskarten & Tickets > Sport: 231/257
Eintrittskarten & Tickets > Theater & Musical: 231/251
Elektronik: 161/168
Elektronik > Audio & Hifi: 161/172/sonstiges
Elektronik > Audio & Hifi > CD Player: 161/172/cd_player
Elektronik > Audio & Hifi > Lautsprecher & Kopfhörer: 161/172/lautsprecher_kopfhoerer
Elektronik > Audio & Hifi > MP3 Player: 161/172/mp3_player
Elektronik > Audio & Hifi > Radio & Receiver: 161/172/radio_receiver
Elektronik > Audio & Hifi > Stereoanlagen: 161/172/stereoanlagen
## Audio & Hifi
Audio_und_Hifi: 161/172/sonstiges
CD_Player: 161/172/cd_player
Kopfhörer: 161/172/lautsprecher_kopfhoerer
Lautsprecher: 161/172/lautsprecher_kopfhoerer
MP3_Player: 161/172/mp3_player
Radio: 161/172/radio_receiver
Reciver: 161/172/radio_receiver
Stereoanlagen: 161/172/stereoanlagen
## Dienstleistungen Elektronik
Dienstleistungen_Elektronik: 161/226
Elektronik > Dienstleistungen Elektronik: 161/226
Elektronik > Foto: 161/245/other
Elektronik > Foto > Kamera: 161/245/camera
Elektronik > Foto > Objektiv: 161/245/lens
Elektronik > Foto > Zubehör: 161/245/equipment
Elektronik > Foto > Kamera & Zubehör: 161/245/camera_and_equipment
Elektronik > Handy & Telefon: 161/173/sonstige
Elektronik > Handy & Telefon > Apple: 161/173/apple
Elektronik > Handy & Telefon > Google: 161/173/google_handy
Elektronik > Handy & Telefon > Huawei: 161/173/huawai_handy
Elektronik > Handy & Telefon > HTC: 161/173/htc_handy
Elektronik > Handy & Telefon > LG: 161/173/lg_handy
Elektronik > Handy & Telefon > Motorola: 161/173/motorola_handy
Elektronik > Handy & Telefon > Nokia: 161/173/nokia_handy
Elektronik > Handy & Telefon > Samsung: 161/173/samsung_handy
Elektronik > Handy & Telefon > Siemens: 161/173/siemens_handy
Elektronik > Handy & Telefon > Sony: 161/173/sony_handy
Elektronik > Handy & Telefon > Xiaomi: 161/173/xiaomi_handy
Elektronik > Handy & Telefon > Faxgeräte: 161/173/faxgeraete
Elektronik > Handy & Telefon > Telefone: 161/173/telefone
## Foto
Foto: 161/245/other
Elektronik > Haushaltsgeräte: 161/176/sonstige
Elektronik > Haushaltsgeräte > Haushaltskleingeräte: 161/176/haushaltskleingeraete
Elektronik > Haushaltsgeräte > Herde & Backöfen: 161/176/herde_backoefen
Elektronik > Haushaltsgeräte > Kaffee- & Espressomaschinen: 161/176/kaffee_espressomaschinen
Elektronik > Haushaltsgeräte > Kühlschränke & Gefriergeräte: 161/176/kuehlschraenke_gefriergeraete
Elektronik > Haushaltsgeräte > Spülmaschinen: 161/176/spuelmaschinen
Elektronik > Haushaltsgeräte > Staubsauger: 161/176/staubsauger
Elektronik > Haushaltsgeräte > Waschmaschinen & Trockner: 161/176/waschmaschinen_trockner
Kameras: 161/245/camera
Objektive: 161/245/lens
Foto_Zubehör: 161/245/equipment
Kamera_Equipment: 161/245/camera_and_equipment
## Handy & Telefon
Handys: 161/173/sonstige
Elektronik > Konsolen: 161/279/weitere
Elektronik > Konsolen > Pocket Konsolen: 161/279/dsi_psp
Elektronik > Konsolen > Playstation: 161/279/playstation
Elektronik > Konsolen > Xbox: 161/279/xbox
Elektronik > Konsolen > Wii: 161/279/wii
Elektronik > Notebooks: 161/278
Elektronik > PCs: 161/228
Elektronik > PC-Zubehör & Software: 161/225/sonstiges
Elektronik > PC-Zubehör & Software > Drucker & Scanner: 161/225/drucker_scanner
Elektronik > PC-Zubehör & Software > Festplatten & Laufwerke: 161/225/festplatten_laufwerke
Elektronik > PC-Zubehör & Software > Gehäuse: 161/225/gehaeuse
Elektronik > PC-Zubehör & Software > Grafikkarten: 161/225/grafikkarten
Elektronik > PC-Zubehör & Software > Kabel & Adapter: 161/225/kabel_adapter
Elektronik > PC-Zubehör & Software > Mainboards: 161/225/mainboards
Elektronik > PC-Zubehör & Software > Monitore: 161/225/monitore
Elektronik > PC-Zubehör & Software > Multimedia: 161/225/multimedia
Elektronik > PC-Zubehör & Software > Netzwerk & Modem: 161/225/netzwerk_modem
Elektronik > PC-Zubehör & Software > Prozessoren / CPUs: 161/225/prozessor_cpu
Elektronik > PC-Zubehör & Software > Speicher: 161/225/speicher
Elektronik > PC-Zubehör & Software > Software: 161/225/software
Elektronik > PC-Zubehör & Software > Tastatur & Maus: 161/225/tastatur_maus
Handy_Apple: 161/173/apple
Handy_HTC: 161/173/htc_handy
Handy_LG: 161/173/lg_handy
Handy_Motorola: 161/173/motorola_handy
Handy_Nokia: 161/173/nokia_handy
Handy_Samsung: 161/173/samsung_handy
Handy_Siemens: 161/173/siemens_handy
Handy_Sony: 161/173/sony_handy
Faxgeräte: 161/173/faxgeraete
Telefone: 161/173/telefone
Elektronik > Tablets & Reader: 161/285/weitere
Elektronik > Tablets & Reader > iPad: 161/285/ipad
Elektronik > Tablets & Reader > Kindle: 161/285/kindle
Elektronik > Tablets & Reader > Samsung Tablets: 161/285/samsung_tablets
## Haushaltsgeräte
Haushaltsgeräte: 161/176/sonstige
Haushaltkleingeräte: 161/176/haushaltskleingeraete
Herde: 161/176/herde_backoefen
Backöfen: 161/176/herde_backoefen
Kaffemaschinen: 161/176/kaffee_espressomaschinen
Espressomaschinen: 161/176/kaffee_espressomaschinen
Kühlschränke: 161/176/kuehlschraenke_gefriergeraete
Gefriergeräte: 161/176/kuehlschraenke_gefriergeraete
Spülmaschinen: 161/176/spuelmaschinen
Staubsauger: 161/176/staubsauger
Waschmaschinen: 161/176/waschmaschinen_trockner
Trockner: 161/176/waschmaschinen_trockner
Elektronik > TV & Video: 161/175/weitere
Elektronik > TV & Video > DVD-Player & Recorder: 161/175/dvdplayer_recorder
Elektronik > TV & Video > Fernseher: 161/175/fernseher
Elektronik > TV & Video > TV-Receiver: 161/175/tv_receiver
Elektronik > Videospiele: 161/227/sonstige
Elektronik > Videospiele > DS(i)- & PSP Spiele: 161/227/dsi_psp
Elektronik > Videospiele > Nintendo Spiele: 161/227/nintendo
Elektronik > Videospiele > PlayStation Spiele: 161/227/playstation
Elektronik > Videospiele > Xbox Spiele: 161/227/xbox
Elektronik > Videospiele > Wii Spiele: 161/227/wii
Elektronik > Videospiele > PC Spiele: 161/227/pc_spiele
## Konsolen
Konsolen: 161/279/weitere
Pocket_Konsolen: 161/279/dsi_psp
Playstation: 161/279/playstation
XBox: 161/279/xbox
Wii: 161/279/wii
Familie, Kind & Baby: 17/18
Familie, Kind & Baby > Altenpflege: 17/236
Familie, Kind & Baby > Baby- & Kinderkleidung: 17/22/sonstiges
Familie, Kind & Baby > Baby- & Kinderkleidung > Hosen & Jeans: 17/22/hosen_jeans
Familie, Kind & Baby > Baby- & Kinderkleidung > Kleider & Röcke: 17/22/kleider_roecke
Familie, Kind & Baby > Baby- & Kinderkleidung > Shirts & Tops: 17/22/shirts_tops
Familie, Kind & Baby > Baby- & Kinderkleidung > Hemden: 17/22/hemden
Familie, Kind & Baby > Baby- & Kinderkleidung > Jacken & Mäntel: 17/22/jacken_mantel
Familie, Kind & Baby > Baby- & Kinderkleidung > Pullover & Strickjacken: 17/22/pullover_strickjacken
Familie, Kind & Baby > Baby- & Kinderkleidung > Wäsche: 17/22/wasche
Familie, Kind & Baby > Baby- & Kinderkleidung > Sportbekleidung: 17/22/sportbekleidung
Familie, Kind & Baby > Baby- & Kinderkleidung > Bademode: 17/22/bademode
Familie, Kind & Baby > Baby- & Kinderkleidung > Accessoires: 17/22/accessoires
Familie, Kind & Baby > Baby- & Kinderkleidung > Kleidungspakete: 17/22/kleidungspakete
## Notebooks
Notebooks: 161/278
Familie, Kind & Baby > Baby- & Kinderschuhe: 17/19/sonstiges
Familie, Kind & Baby > Baby- & Kinderschuhe > Ballerinas: 17/19/ballerinas
Familie, Kind & Baby > Baby- & Kinderschuhe > Halb- & Schnürschuhe: 17/19/halb_schnuerschuhe
Familie, Kind & Baby > Baby- & Kinderschuhe > Hausschuhe: 17/19/hausschuhe
Familie, Kind & Baby > Baby- & Kinderschuhe > Sandalen: 17/19/sandalen
Familie, Kind & Baby > Baby- & Kinderschuhe > Outdoor & Wanderschuhe: 17/19/outdoor_wanderschuhe
Familie, Kind & Baby > Baby- & Kinderschuhe > Sneaker & Sportschuhe: 17/19/sneaker_sportschuhe
Familie, Kind & Baby > Baby- & Kinderschuhe > Stiefel & Stiefeletten: 17/19/stiefel_stiefeletten
Familie, Kind & Baby > Baby- & Kinderschuhe > Badeschuhe: 17/19/badeschuhe
## PCs
PCs: 161/228
Familie, Kind & Baby > Baby-Ausstattung: 17/258
Familie, Kind & Baby > Babyschalen & Kindersitze: 17/21
Familie, Kind & Baby > Babysitter/-in & Kinderbetreuung: 17/237
Familie, Kind & Baby > Kinderwagen & Buggys: 17/25
## PC-Zubehör & Software
PC-Zubehör: 161/225/sonstiges
Familie, Kind & Baby > Kinderzimmermöbel: 17/20/sonstige
Familie, Kind & Baby > Kinderzimmermöbel > Betten & Wiegen: 17/20/betten_wiegen
Familie, Kind & Baby > Kinderzimmermöbel > Hochstühle & Laufställe: 17/20/hochstuehle_laufstaelle
Familie, Kind & Baby > Kinderzimmermöbel > Schränke & Kommoden: 17/20/schraenke_kommoden
Familie, Kind & Baby > Kinderzimmermöbel > Wickeltische & Zubehör: 17/20/wickeltische_zubehoer
Familie, Kind & Baby > Kinderzimmermöbel > Wippen & Schaukeln: 17/20/wippen_schaukeln
Drucker: 161/225/drucker_scanner
Scanner: 161/225/drucker_scanner
Festplatten: 161/225/festplatten_laufwerke
Laufwerke: 161/225/festplatten_laufwerke
Gehäuse: 161/225/gehaeuse
Grafikkarten: 161/225/grafikkarten
Kabel: 161/225/kabel_adapter
Adapter: 161/225/kabel_adapter
Mainboards: 161/225/mainboards
Monitore: 161/225/monitore
Multimedia: 161/225/multimedia
Netzwerk: 161/225/netzwerk_modem
CPUs: 161/225/prozessor_cpu
Prozessoren: 161/225/prozessor_cpu
Speicher: 161/225/speicher
Software: 161/225/software
Mäuse: 161/225/tastatur_maus
Tastaturen: 161/225/tastatur_maus
Familie, Kind & Baby > Spielzeug: 17/23/sonstiges
Familie, Kind & Baby > Spielzeug > Action- & Spielfiguren: 17/23/actionfiguren
Familie, Kind & Baby > Spielzeug > Babyspielzeug: 17/23/babyspielzeug
Familie, Kind & Baby > Spielzeug > Barbie & Co: 17/23/barbie
Familie, Kind & Baby > Spielzeug > Dreirad & Co: 17/23/dreirad
Familie, Kind & Baby > Spielzeug > Gesellschaftsspiele: 17/23/gesellschaftsspiele
Familie, Kind & Baby > Spielzeug > Holzspielzeug: 17/23/holzspielzeug
Familie, Kind & Baby > Spielzeug > LEGO & Duplo: 17/23/lego_duplo
Familie, Kind & Baby > Spielzeug > Lernspielzeug: 17/23/lernspielzeug
Familie, Kind & Baby > Spielzeug > Playmobil: 17/23/playmobil
Familie, Kind & Baby > Spielzeug > Puppen: 17/23/puppen
Familie, Kind & Baby > Spielzeug > Spielzeugautos: 17/23/spielzeug_autos
Familie, Kind & Baby > Spielzeug > Spielzeug für draußen: 17/23/spielzeug_draussen
Familie, Kind & Baby > Spielzeug > Stofftiere: 17/23/stofftiere
## Tablets & Reader
Tablets_Reader: 161/285/weitere
Freizeit, Hobby & Nachbarschaft: 185/242
Freizeit, Hobby & Nachbarschaft > Esoterik & Spirituelles: 185/232
Freizeit, Hobby & Nachbarschaft > Essen & Trinken: 185/248
Freizeit, Hobby & Nachbarschaft > Freizeitaktivitäten: 185/187
Freizeit, Hobby & Nachbarschaft > Handarbeit, Basteln & Kunsthandwerk: 185/282
Freizeit, Hobby & Nachbarschaft > Kunst & Antiquitäten: 185/240
Freizeit, Hobby & Nachbarschaft > Künstler/-in & Musiker/-in: 185/191
Freizeit, Hobby & Nachbarschaft > Modellbau: 185/249
Freizeit, Hobby & Nachbarschaft > Reise & Eventservices: 185/233
iPad: 161/285/ipad
Kindle: 161/285/kindle
Tablets_Samsung: 161/285/samsung_tablets
Freizeit, Hobby & Nachbarschaft > Sammeln: 185/234/sonstige
Freizeit, Hobby & Nachbarschaft > Sammeln > Ansichts- & Postkarten: 185/234/ansichts_postkarten
Freizeit, Hobby & Nachbarschaft > Sammeln > Autogramme: 185/234/autogramme
Freizeit, Hobby & Nachbarschaft > Sammeln > Bierkrüge & -gläser: 185/234/bierkruege_glaeser
Freizeit, Hobby & Nachbarschaft > Sammeln > Briefmarken: 185/234/briefmarken
Freizeit, Hobby & Nachbarschaft > Sammeln > Comics: 185/234/comics
Freizeit, Hobby & Nachbarschaft > Sammeln > Flaggen: 185/234/flaggen
Freizeit, Hobby & Nachbarschaft > Sammeln > Münzen: 185/234/muenzen
Freizeit, Hobby & Nachbarschaft > Sammeln > Porzellan: 185/234/porzellan
Freizeit, Hobby & Nachbarschaft > Sammeln > Puppen & Puppenzubehör: 185/234/puppen_puppenzubehoer
Freizeit, Hobby & Nachbarschaft > Sammeln > Sammelbilder & Sticker: 185/234/sammelbilder_sticker
Freizeit, Hobby & Nachbarschaft > Sammeln > Sammelkartenspiele: 185/234/sammelkartenspiele
Freizeit, Hobby & Nachbarschaft > Sammeln > Überraschungseier: 185/234/ueberraschungseier
Freizeit, Hobby & Nachbarschaft > Sammeln > Werbeartikel: 185/234/werbeartikel
## TV & Video
TV_Video: 161/175/weitere
Freizeit, Hobby & Nachbarschaft > Sport & Camping: 185/230/sonstige
Freizeit, Hobby & Nachbarschaft > Sport & Camping > Ballsport: 185/230/ballsport
Freizeit, Hobby & Nachbarschaft > Sport & Camping > Camping & Outdoor: 185/230/camping
Freizeit, Hobby & Nachbarschaft > Sport & Camping > Fitness: 185/230/fitness
Freizeit, Hobby & Nachbarschaft > Sport & Camping > Radsport: 185/230/radsport
Freizeit, Hobby & Nachbarschaft > Sport & Camping > Tanzen & Laufen: 185/230/tanzen_laufen
Freizeit, Hobby & Nachbarschaft > Sport & Camping > Wassersport: 185/230/wassersport
Freizeit, Hobby & Nachbarschaft > Sport & Camping > Wintersport: 185/230/wintersport
DVD-Player: 161/175/dvdplayer_recorder
Recorder: 161/175/dvdplayer_recorder
Fernseher: 161/175/fernseher
Reciever: 161/175/tv_receiver
Freizeit, Hobby & Nachbarschaft > Trödel: 185/250
Freizeit, Hobby & Nachbarschaft > Verloren & Gefunden: 185/189
## Videospiele
Videospiele: 161/227/sonstige
Videospiele_DS: 161/227/dsi_psp
Videospiele_PSP: 161/227/dsi_psp
Videospiele_Nintendo: 161/227/nintendo
Videospiele_Playstation: 161/227/playstation
Videospiele_XBox: 161/227/xbox
Videospiele_Wii: 161/227/wii
Videospiele_PC: 161/227/pc_spiele
Haus & Garten: 80/87
Haus & Garten > Badezimmer: 80/91
Haus & Garten > Büro: 80/93
Haus & Garten > Dekoration: 80/246/weitere
Haus & Garten > Dekoration > Bilder & Poster: 80/246/bilder_poster
Haus & Garten > Dekoration > Kerzen & Kerzenhalter: 80/246/kerzen_kerzenhalter
Haus & Garten > Dekoration > Spiegel: 80/246/spiegel
Haus & Garten > Dekoration > Vasen: 80/246/vasen
Haus & Garten > Dienstleistungen Haus & Garten: 80/239/sonstige
Haus & Garten > Dienstleistungen Haus & Garten > Bau & Handwerk: 80/239/bau_handwerk
Haus & Garten > Dienstleistungen Haus & Garten > Garten- & Landschaftsbau: 80/239/garten_landschaftsbau
Haus & Garten > Dienstleistungen Haus & Garten > Haushaltshilfe: 80/239/haushaltshilfe
Haus & Garten > Dienstleistungen Haus & Garten > Reinigungsservice: 80/239/reingungsservice
Haus & Garten > Dienstleistungen Haus & Garten > Reparaturen: 80/239/reparaturen
Haus & Garten > Dienstleistungen Haus & Garten > Wohnungsauflösungen: 80/239/wohnungsaufloesungen
#Auto, Rad & Boot
Autoreifen: 210/223/reifen_felgen
Haus & Garten > Gartenzubehör & Pflanzen: 80/89/sonstige
Haus & Garten > Gartenzubehör & Pflanzen > Blumentöpfe: 80/89/blumentoepfe
Haus & Garten > Gartenzubehör & Pflanzen > Dekoration: 80/89/dekoration
Haus & Garten > Gartenzubehör & Pflanzen > Gartengeräte: 80/89/gartengeraete
Haus & Garten > Gartenzubehör & Pflanzen > Gartenmöbel: 80/89/gartenmoebel
Haus & Garten > Gartenzubehör & Pflanzen > Pflanzen: 80/89/pflanzen
# Freizeit, Hobby & Nachbarschaft
Sammeln: 185/234/sonstige
# Mode & Beauty
Beauty: 153/224/sonstiges
Gesundheit: 153/224/gesundheit
Mode: 153/155
Haus & Garten > Heimtextilien: 80/90
Haus & Garten > Heimwerken: 80/84
Haus & Garten > Küche & Esszimmer: 80/86/sonstige
Haus & Garten > Küche & Esszimmer > Besteck: 80/86/besteck
Haus & Garten > Küche & Esszimmer > Geschirr: 80/86/geschirr
Haus & Garten > Küche & Esszimmer > Gläser: 80/86/glaeser
Haus & Garten > Küche & Esszimmer > Kleingeräte: 80/86/kuechengeraete
Haus & Garten > Küche & Esszimmer > Küchenschränke: 80/86/kuechenschraenke
Haus & Garten > Küche & Esszimmer > Stühle: 80/86/stuehle
Haus & Garten > Küche & Esszimmer > Tische: 80/86/tische
# Mode & Beauty > Damenschuhe
Damenschuhe: 153/159/sonstiges
Damen_Ballerinas: 153/159/ballerinas
Damen_Halbschuhe: 153/159/halb_schnuerschuhe
Damen_Hausschuhe: 153/159/hausschuhe
Damen_High_Heels: 153/159/pumps
Damen_Pumps: 153/159/pumps
Damen_Sandalen: 153/159/sandalen
Damen_Schnürschuhe: 153/159/halb_schnuerschuhe
Damen_Sportschuche: 153/159/sneaker_sportschuhe
Damen_Sneaker: 153/159/sneaker_sportschuhe
Damen_Stiefel: 153/159/stiefel
Damen_Stiefeletten: 153/159/stiefel
Damen_Outdoorschuhe: 153/159/outdoor_wanderschuhe
Damen_Wanderschuhe: 153/159/outdoor_wanderschuhe
Haus & Garten > Lampen & Licht: 80/82
# Mode & Beauty > Herrenschuhe
Herrenschuhe: 153/158/sonstiges
Herren_Halbschuhe: 153/158/halb_schnuerschuhe
Herren_Hausschuhe: 153/158/hausschuhe
Herren_Sandalen: 153/158/sandalen
Herren_Schnürschuhe: 153/158/halb_schnuerschuhe
Herren_Sportschuche: 153/158/sneaker_sportschuhe
Herren_Sneaker: 153/158/sneaker_sportschuhe
Herren_Stiefel: 153/158/stiefel
Herren_Stiefeletten: 153/158/stiefel
Herren_Outdoorschuhe: 153/158/outdoor_wanderschuhe
Herren_Wanderschuhe: 153/158/outdoor_wanderschuhe
Haus & Garten > Schlafzimmer: 80/81/sonstiges
Haus & Garten > Schlafzimmer > Betten: 80/81/betten
Haus & Garten > Schlafzimmer > Lattenroste: 80/81/lattenroste
Haus & Garten > Schlafzimmer > Matratzen: 80/81/matratzen
Haus & Garten > Schlafzimmer > Nachttische: 80/81/nachttische
Haus & Garten > Schlafzimmer > Schränke: 80/81/schraenke
#Familie, Kind & Baby
Familie_Kind_Baby: 17/18
Altenpflege: 17/236
Babysitter: 17/237
Buggys: 17/25
Babyschalen: 17/21
Baby-Ausstattung: 17/258
Kinderbetreuung: 17/237
Kindersitze: 17/21
Kinderwagen: 17/25
Haus & Garten > Wohnzimmer: 80/88/sonstiges
Haus & Garten > Wohnzimmer > Regale: 80/88/regale
Haus & Garten > Wohnzimmer > Schränke & Schrankwände: 80/88/schraenke
Haus & Garten > Wohnzimmer > Sitzmöbel: 80/88/sitzmoebel
Haus & Garten > Wohnzimmer > Sofas & Sitzgarnituren: 80/88/sofas_sitzgarnituren
Haus & Garten > Wohnzimmer > Tische: 80/88/tische
Haus & Garten > Wohnzimmer > TV & Phonomöbel: 80/88/tv_moebel
# Familie, Kind & Baby > Spielzeug
Spielzeug: 17/23/sonstiges
Actionfiguren: 17/23/actionfiguren
Babyspielzeug: 17/23/babyspielzeug
Barbie: 17/23/barbie
Dreirad: 17/23/dreirad
Gesellschaftsspiele: 17/23/gesellschaftsspiele
Holzspielzeug: 17/23/holzspielzeug
Duplo: 17/23/lego_duplo
LEGO: 17/23/lego_duplo
Lernspielzeug: 17/23/lernspielzeug
Playmobil: 17/23/playmobil
Puppen: 17/23/puppen
Spielzeugautos: 17/23/spielzeug_autos
Spielzeug_draussen: 17/23/spielzeug_draussen
Stofftiere: 17/23/stofftiere
Haustiere > Fische: 130/138/sonstige
Haustiere > Fische > Aquariumfische: 130/138/aquarium
Haustiere > Fische > Barsche: 130/138/barsche
Haustiere > Fische > Diskusfische: 130/138/diskusfische
Haustiere > Fische > Garnelen & Krebse: 130/138/garnelen_krebse
Haustiere > Fische > Koi: 130/138/koi
Haustiere > Fische > Schnecken: 130/138/schnecken
Haustiere > Fische > Wasserpflanzen: 130/138/wasserpflanzen
Haustiere > Fische > Welse: 130/138/welse
# Haus & Garten > Wohnzimmer
Wohnzimmer_Regale: 80/88/regale
Wohnzimmer_Schraenke: 80/88/schraenke
Wohnzimmer_Sitzmoebel: 80/88/sitzmoebel
Wohnzimmer_Sofas_Sitzgarnituren: 80/88/sofas_sitzgarnituren
Wohnzimmer_Tische: 80/88/tische
Wohnzimmer_TV_Moebel: 80/88/tv_moebel
Wohnzimmer_Sonstiges: 80/88/sonstiges
Haustiere > Hunde: 130/134/sonstige
Haustiere > Hunde > Mischlinge: 130/134/mischlinge
Haustiere > Hunde > Beagle: 130/134/beagle
Haustiere > Hunde > Bernhardiner: 130/134/bernhardiner
Haustiere > Hunde > Border Collie: 130/134/border_collie
Haustiere > Hunde > Boxer: 130/134/boxer
Haustiere > Hunde > Cocker Spaniel: 130/134/cocker_spaniel
Haustiere > Hunde > Collie: 130/134/collie
Haustiere > Hunde > Dackel: 130/134/dackel
Haustiere > Hunde > Dalmatiner: 130/134/dalmatiner
Haustiere > Hunde > Dobermann: 130/134/dobermann
Haustiere > Hunde > Dogge: 130/134/dogge
Haustiere > Hunde > Golden Retriever: 130/134/goldenretriever
Haustiere > Hunde > Husky: 130/134/husky
Haustiere > Hunde > Jack Russell Terrier: 130/134/jack_russel_terrier
Haustiere > Hunde > Labrador: 130/134/labrador
Haustiere > Hunde > Malteser: 130/134/malteser
Haustiere > Hunde > Pudel: 130/134/pudel
Haustiere > Hunde > Schäferhunde: 130/134/schaeferhund
Haustiere > Hunde > Spitz: 130/134/spitz
Haustiere > Hunde > Terrier: 130/134/terrier
# Verschenken & Tauschen
Tauschen: 272/273
Verleihen: 272/274
Verschenken: 272/192
Haustiere > Katzen: 130/136/sonstige
Haustiere > Katzen > Britisch Kurzhaar: 130/136/britisch_kurzhaar
Haustiere > Katzen > Hauskatze: 130/136/hauskatze
Haustiere > Katzen > Maine Coon: 130/136/maine_coon
Haustiere > Katzen > Siam: 130/136/siam
Haustiere > Kleintiere: 130/132/sonstige
Haustiere > Kleintiere > Hamster: 130/132/hamster
Haustiere > Kleintiere > Hasen & Kaninchen: 130/132/hasen_kaninchen
Haustiere > Kleintiere > Mäuse & Ratten: 130/132/maeuse_ratten
Haustiere > Kleintiere > Meerschweinchen: 130/132/meerschweinchen
Haustiere > Nutztiere: 130/135
Haustiere > Pferde > Großpferde: 130/139/grosspferde
Haustiere > Pferde > Kleinpferde & Ponys: 130/139/kleinpferde_ponys
Haustiere > Tierbetreuung & Training: 130/133
Haustiere > Vermisste Tiere > Entlaufen: 130/283/entlaufen
Haustiere > Vermisste Tiere > Gefunden: 130/283/gefunden
Haustiere > Vögel: 130/243
Haustiere > Zubehör: 130/313/sonstiges
Haustiere > Zubehör > Fische: 130/313/fische
Haustiere > Zubehör > Hunde: 130/313/hunde
Haustiere > Zubehör > Katzen: 130/313/katzen
Haustiere > Zubehör > Kleintiere: 130/313/kleintiere
Haustiere > Zubehör > Pferde: 130/313/pferde
Haustiere > Zubehör > Reptilien: 130/313/reptilien
Haustiere > Zubehör > Vögel: 130/313/voegel
Immobilien: 195/198
Immobilien > Auf Zeit & WG > Gesamte Unterkunft: 195/199/entire_accommodation
Immobilien > Auf Zeit & WG > Privatzimmer: 195/199/private_room
Immobilien > Auf Zeit & WG > Gemeinsames Zimmer: 195/199/shared_room
Immobilien > Eigentumswohnungen: 195/196
Immobilien > Ferien- & Auslandsimmobilien > Kaufen: 195/275/kaufen
Immobilien > Ferien- & Auslandsimmobilien > Mieten: 195/275/mieten
Immobilien > Garagen & Stellplätze > Kaufen: 195/197/kaufen
Immobilien > Garagen & Stellplätze > Mieten: 195/197/mieten
Immobilien > Gewerbeimmobilien > Kaufen: 195/277/kaufen
Immobilien > Gewerbeimmobilien > Mieten: 195/277/mieten
Immobilien > Grundstücke & Gärten: 195/207/andere
Immobilien > Grundstücke & Gärten > Baugrundstück: 195/207/baugrundstueck
Immobilien > Grundstücke & Gärten > Garten: 195/207/garten
Immobilien > Grundstücke & Gärten > Land-/Forstwirtschaft: 195/207/land_forstwirtschaft
Immobilien > Häuser zum Kauf: 195/208
Immobilien > Häuser zur Miete: 195/205
Immobilien > Mietwohnungen: 195/203
Immobilien > Umzug & Transport: 195/238
Jobs > Ausbildung: 102/118
Jobs > Bau, Handwerk & Produktion: 102/111/weitere
Jobs > Bau, Handwerk & Produktion > Bauhelfer/-in: 102/111/bauhelfer
Jobs > Bau, Handwerk & Produktion > Dachdecker/-in: 102/111/dachdecker
Jobs > Bau, Handwerk & Produktion > Elektriker/-in: 102/111/elektriker
Jobs > Bau, Handwerk & Produktion > Fliesenleger/-in: 102/111/fliesenleger
Jobs > Bau, Handwerk & Produktion > Maler/-in: 102/111/maler
Jobs > Bau, Handwerk & Produktion > Maurer/-in: 102/111/maurer
Jobs > Bau, Handwerk & Produktion > Produktionshelfer/-in: 102/111/produktionshelfer
Jobs > Bau, Handwerk & Produktion > Schlosser/-in: 102/111/schlosser
Jobs > Bau, Handwerk & Produktion > Tischler/-in: 102/111/tischler
Jobs > Büroarbeit & Verwaltung: 102/114/weitere
Jobs > Büroarbeit & Verwaltung > Buchhalter/-in: 102/114/buchhalter
Jobs > Büroarbeit & Verwaltung > Bürokaufmann/-frau: 102/114/buerokauf
Jobs > Büroarbeit & Verwaltung > Sachbearbeiter/-in: 102/114/sachbearbeiter
Jobs > Büroarbeit & Verwaltung > Sekretär/-in: 102/114/sekretaer
Jobs > Gastronomie & Tourismus: 102/110/weitere
Jobs > Gastronomie & Tourismus > Barkeeper/-in: 102/110/barkeeper
Jobs > Gastronomie & Tourismus > Hotelfachmann/-frau: 102/110/hotelfach
Jobs > Gastronomie & Tourismus > Housekeeping: 102/110/zimmermaedchen
Jobs > Gastronomie & Tourismus > Kellner/-in: 102/110/kellner
Jobs > Gastronomie & Tourismus > Koch/Köchin: 102/110/koch
Jobs > Gastronomie & Tourismus > Küchenhilfe: 102/110/kuechenhilfe
Jobs > Gastronomie & Tourismus > Servicekraft: 102/110/servicekraft
Jobs > Kundenservice & Call Center: 102/105
Jobs > Mini- & Nebenjobs: 102/107
Jobs > Praktika: 102/125
Jobs > Sozialer Sektor & Pflege: 102/123/weitere
Jobs > Sozialer Sektor & Pflege > Altenpfleger/-in: 102/123/altenpfleger
Jobs > Sozialer Sektor & Pflege > Arzthelfer/-in: 102/123/artzhelfer
Jobs > Sozialer Sektor & Pflege > Erzieher/-in: 102/123/erzieher
Jobs > Sozialer Sektor & Pflege > Krankenpfleger/-in: 102/123/krankenschwester
Jobs > Sozialer Sektor & Pflege > Physiotherapeut/-in: 102/123/physiotherapeut
Jobs > Transport, Logistik & Verkehr: 102/247/weitere
Jobs > Transport, Logistik & Verkehr > Kraftfahrer/-in: 102/247/kraftfahrer
Jobs > Transport, Logistik & Verkehr > Kurierfahrer/-in: 102/247/kurierfahrer
Jobs > Transport, Logistik & Verkehr > Lagerhelfer/-in: 102/247/lagerhelfer
Jobs > Transport, Logistik & Verkehr > Staplerfahrer/-in: 102/247/staplerfahrer
Jobs > Vertrieb, Einkauf & Verkauf: 102/117/weitere
Jobs > Vertrieb, Einkauf & Verkauf > Buchhalter/-in: 102/117/buchhalter
Jobs > Vertrieb, Einkauf & Verkauf > Immobilienmakler/-in: 102/117/immobilienmakler
Jobs > Vertrieb, Einkauf & Verkauf > Kaufmann/-frau: 102/117/kauffrau
Jobs > Vertrieb, Einkauf & Verkauf > Verkäufer/-in: 102/117/verkaeufer
Jobs > Weitere Jobs: 102/109/weitere
Jobs > Weitere Jobs > Designer/-in & Grafiker/-in: 102/109/designer_grafiker
Jobs > Weitere Jobs > Friseur/-in: 102/109/friseur
Jobs > Weitere Jobs > Haushaltshilfe: 102/109/haushaltshilfe
Jobs > Weitere Jobs > Hausmeister/-in: 102/109/hausmeister
Jobs > Weitere Jobs > Reinigungskraft: 102/109/reinigungskraft
Mode & Beauty: 153/155
Mode & Beauty > Beauty & Gesundheit: 153/224/sonstiges
Mode & Beauty > Beauty & Gesundheit > Make-Up & Gesichtspflege: 153/224/make_up
Mode & Beauty > Beauty & Gesundheit > Haarpflege: 153/224/haarpflege
Mode & Beauty > Beauty & Gesundheit > Körperpflege: 153/224/koerperpflege
Mode & Beauty > Beauty & Gesundheit > Hand- & Nagelpflege: 153/224/handpflege
Mode & Beauty > Beauty & Gesundheit > Gesundheit: 153/224/gesundheit
Mode & Beauty > Damenbekleidung: 153/154/sonstige
Mode & Beauty > Damenbekleidung > Anzüge: 153/154/anzuege
Mode & Beauty > Damenbekleidung > Bademode: 153/154/bademode
Mode & Beauty > Damenbekleidung > Hemden & Blusen: 153/154/hemden_blusen
Mode & Beauty > Damenbekleidung > Hochzeitsmode: 153/154/hochzeitsmode
Mode & Beauty > Damenbekleidung > Hosen: 153/154/hosen
Mode & Beauty > Damenbekleidung > Jacken & Mäntel: 153/154/jacken_maentel
Mode & Beauty > Damenbekleidung > Jeans: 153/154/jeans
Mode & Beauty > Damenbekleidung > Kostüme & Verkleidungen: 153/154/kostueme_verkleidungen
Mode & Beauty > Damenbekleidung > Pullover: 153/154/pullover
Mode & Beauty > Damenbekleidung > Röcke & Kleider: 153/154/roecke_kleider
Mode & Beauty > Damenbekleidung > Shirts & Tops: 153/154/shirts_tops
Mode & Beauty > Damenbekleidung > Shorts: 153/154/shorts
Mode & Beauty > Damenbekleidung > Sportbekleidung: 153/154/sportbekleidung
Mode & Beauty > Damenbekleidung > Umstandsmode: 153/154/umstandsmode
Mode & Beauty > Damenschuhe: 153/159/sonstiges
Mode & Beauty > Damenschuhe > Ballerinas: 153/159/ballerinas
Mode & Beauty > Damenschuhe > Halb- & Schnürschuhe: 153/159/halb_schnuerschuhe
Mode & Beauty > Damenschuhe > Hausschuhe: 153/159/hausschuhe
Mode & Beauty > Damenschuhe > Outdoor & Wanderschuhe: 153/159/outdoor_wanderschuhe
Mode & Beauty > Damenschuhe > Pumps & High Heels: 153/159/pumps
Mode & Beauty > Damenschuhe > Sandalen: 153/159/sandalen
Mode & Beauty > Damenschuhe > Sneaker & Sportschuhe: 153/159/sneaker_sportschuhe
Mode & Beauty > Damenschuhe > Stiefel & Stiefeletten: 153/159/stiefel
Mode & Beauty > Herrenbekleidung: 153/160/sonstige
Mode & Beauty > Herrenbekleidung > Anzüge: 153/160/anzuege
Mode & Beauty > Herrenbekleidung > Bademode: 153/160/bademode
Mode & Beauty > Herrenbekleidung > Hemden: 153/160/hemden
Mode & Beauty > Herrenbekleidung > Hochzeitsmode: 153/160/hochzeitsmode
Mode & Beauty > Herrenbekleidung > Hosen: 153/160/hosen
Mode & Beauty > Herrenbekleidung > Jacken & Mäntel: 153/160/jacken_maentel
Mode & Beauty > Herrenbekleidung > Jeans: 153/160/jeans
Mode & Beauty > Herrenbekleidung > Kostüme & Verkleidungen: 153/160/kostueme_verkleidungen
Mode & Beauty > Herrenbekleidung > Pullover: 153/160/pullover
Mode & Beauty > Herrenbekleidung > Shirts: 153/160/shirts
Mode & Beauty > Herrenbekleidung > Shorts: 153/160/shorts
Mode & Beauty > Herrenbekleidung > Sportbekleidung: 153/160/sportbekleidung
Mode & Beauty > Herrenschuhe: 153/158/sonstiges
Mode & Beauty > Herrenschuhe > Halb- & Schnürschuhe: 153/158/halb_schnuerschuhe
Mode & Beauty > Herrenschuhe > Hausschuhe: 153/158/hausschuhe
Mode & Beauty > Herrenschuhe > Sandalen: 153/158/sandalen
Mode & Beauty > Herrenschuhe > Sneaker & Sportschuhe: 153/158/sneaker_sportschuhe
Mode & Beauty > Herrenschuhe > Stiefel & Stiefeletten: 153/158/stiefel
Mode & Beauty > Herrenschuhe > Outdoor & Wanderschuhe: 153/158/outdoor_wanderschuhe
Mode & Beauty > Taschen & Accessoires: 153/156/sonstiges
Mode & Beauty > Taschen & Accessoires > Mützen, Schals & Handschuhe: 153/156/muetzen_schals_handschuhe
Mode & Beauty > Taschen & Accessoires > Sonnenbrillen: 153/156/sonnenbrillen
Mode & Beauty > Taschen & Accessoires > Taschen & Rucksäcke: 153/156/taschen_rucksaecke
Mode & Beauty > Uhren & Schmuck > Schmuck: 153/157/schmuck
Mode & Beauty > Uhren & Schmuck > Uhren: 153/157/uhren
Musik, Filme & Bücher: 73/75
Musik, Filme & Bücher > Bücher & Zeitschriften: 73/76
Musik, Filme & Bücher > Bücher & Zeitschriften > Antiquarische Bücher: 73/76/antiquarische_buecher
Musik, Filme & Bücher > Bücher & Zeitschriften > Kinderbücher: 73/76/kinderbuecher
Musik, Filme & Bücher > Bücher & Zeitschriften > Krimis & Thriller: 73/76/krimis_thriller
Musik, Filme & Bücher > Bücher & Zeitschriften > Kunst & Kultur: 73/76/kunst_kultur
Musik, Filme & Bücher > Bücher & Zeitschriften > Sachbücher: 73/76/sachbuecher
Musik, Filme & Bücher > Bücher & Zeitschriften > Science Fiction: 73/76/science_fiction
Musik, Filme & Bücher > Bücher & Zeitschriften > Unterhaltungsliteratur: 73/76/unterhaltungsliteratur
Musik, Filme & Bücher > Bücher & Zeitschriften > Zeitgenössische Literatur & Klassiker: 73/76/zeitgenoessische_literatur_klassiker
Musik, Filme & Bücher > Bücher & Zeitschriften > Zeitschriften: 73/76/zeitschriften
Musik, Filme & Bücher > Büro & Schreibwaren: 73/281
Musik, Filme & Bücher > Comics: 73/284
Musik, Filme & Bücher > Fachbücher, Schule & Studium: 73/77
Musik, Filme & Bücher > Film & DVD: 73/79
Musik, Filme & Bücher > Musik & CDs: 73/78
Musik, Filme & Bücher > Musikinstrumente: 73/74
Nachbarschaftshilfe: 400/401
Unterricht & Kurse: 235/270
Unterricht & Kurse > Beauty & Gesundheit: 235/269
Unterricht & Kurse > Computerkurse: 235/260
Unterricht & Kurse > Esoterik & Spirituelles: 235/265
Unterricht & Kurse > Kochen & Backen: 235/263
Unterricht & Kurse > Kunst & Gestaltung: 235/264
Unterricht & Kurse > Musik & Gesang: 235/262
Unterricht & Kurse > Nachhilfe: 235/268
Unterricht & Kurse > Sportkurse: 235/261
Unterricht & Kurse > Sprachkurse: 235/271
Unterricht & Kurse > Tanzkurse: 235/267
Unterricht & Kurse > Weiterbildung: 235/266
Verschenken & Tauschen > Tauschen: 272/273
Verschenken & Tauschen > Verleihen: 272/274
Verschenken & Tauschen > Verschenken: 272/192
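The mappings above form a flat path-to-ID lookup table. A minimal sketch of resolving a category path (a plain Python dict with three entries copied from the list above; the real file contains the full set, and `resolve_category` is a hypothetical helper, not the bot's actual API):

```python
# Flat lookup table: "A > B > C" category path -> kleinanzeigen.de category ID.
# Entries copied from the mapping above; the full file contains many more.
CATEGORIES = {
    "Jobs > Praktika": "102/125",
    "Mode & Beauty > Damenbekleidung > Jeans": "153/154/jeans",
    "Verschenken & Tauschen > Verschenken": "272/192",
}

def resolve_category(path: str) -> str:
    """Return the category ID for a full category path; raises KeyError if unknown."""
    return CATEGORIES[path]

print(resolve_category("Jobs > Praktika"))  # prints 102/125
```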


###############################################################################
# Deprecated category names for backward compatibility, don't use them anymore!
###############################################################################
# Elektronik
Elektronik: 161/168
## Audio & Hifi
Audio_und_Hifi: 161/172/sonstiges
CD_Player: 161/172/cd_player
Kopfhörer: 161/172/lautsprecher_kopfhoerer
Lautsprecher: 161/172/lautsprecher_kopfhoerer
MP3_Player: 161/172/mp3_player
Radio: 161/172/radio_receiver
Reciver: 161/172/radio_receiver
Stereoanlagen: 161/172/stereoanlagen
## Dienstleistungen Elektronik
Dienstleistungen_Elektronik: 161/226
## Foto
Foto: 161/245/other
Kameras: 161/245/camera
Objektive: 161/245/lens
Foto_Zubehör: 161/245/equipment
Kamera_Equipment: 161/245/camera_and_equipment
## Handy & Telefon
Handys: 161/173/sonstige
Handy_Apple: 161/173/apple
Handy_HTC: 161/173/htc_handy
Handy_LG: 161/173/lg_handy
Handy_Motorola: 161/173/motorola_handy
Handy_Nokia: 161/173/nokia_handy
Handy_Samsung: 161/173/samsung_handy
Handy_Siemens: 161/173/siemens_handy
Handy_Sony: 161/173/sony_handy
Faxgeräte: 161/173/faxgeraete
Telefone: 161/173/telefone
## Haushaltsgeräte
Haushaltsgeräte: 161/176/sonstige
Haushaltkleingeräte: 161/176/haushaltskleingeraete
Herde: 161/176/herde_backoefen
Backöfen: 161/176/herde_backoefen
Kaffemaschinen: 161/176/kaffee_espressomaschinen
Espressomaschinen: 161/176/kaffee_espressomaschinen
Kühlschränke: 161/176/kuehlschraenke_gefriergeraete
Gefriergeräte: 161/176/kuehlschraenke_gefriergeraete
Spülmaschinen: 161/176/spuelmaschinen
Staubsauger: 161/176/staubsauger
Waschmaschinen: 161/176/waschmaschinen_trockner
Trockner: 161/176/waschmaschinen_trockner
## Konsolen
Konsolen: 161/279/weitere
Pocket_Konsolen: 161/279/dsi_psp
Playstation: 161/279/playstation
XBox: 161/279/xbox
Wii: 161/279/wii
## Notebooks
Notebooks: 161/278
## PCs
PCs: 161/228
## PC-Zubehör & Software
PC-Zubehör: 161/225/sonstiges
Drucker: 161/225/drucker_scanner
Scanner: 161/225/drucker_scanner
Festplatten: 161/225/festplatten_laufwerke
Laufwerke: 161/225/festplatten_laufwerke
Gehäuse: 161/225/gehaeuse
Grafikkarten: 161/225/grafikkarten
Kabel: 161/225/kabel_adapter
Adapter: 161/225/kabel_adapter
Mainboards: 161/225/mainboards
Monitore: 161/225/monitore
Multimedia: 161/225/multimedia
Netzwerk: 161/225/netzwerk_modem
CPUs: 161/225/prozessor_cpu
Prozessoren: 161/225/prozessor_cpu
Speicher: 161/225/speicher
Software: 161/225/software
Mäuse: 161/225/tastatur_maus
Tastaturen: 161/225/tastatur_maus
## Tablets & Reader
Tablets_Reader: 161/285/weitere
iPad: 161/285/ipad
Kindle: 161/285/kindle
Tablets_Samsung: 161/285/samsung_tablets
## TV & Video
TV_Video: 161/175/weitere
DVD-Player: 161/175/dvdplayer_recorder
Recorder: 161/175/dvdplayer_recorder
Fernseher: 161/175/fernseher
Reciever: 161/175/tv_receiver
## Videospiele
Videospiele: 161/227/sonstige
Videospiele_DS: 161/227/dsi_psp
Videospiele_PSP: 161/227/dsi_psp
Videospiele_Nintendo: 161/227/nintendo
Videospiele_Playstation: 161/227/playstation
Videospiele_XBox: 161/227/xbox
Videospiele_Wii: 161/227/wii
Videospiele_PC: 161/227/pc_spiele
# Auto, Rad & Boot
Autoreifen: 210/223/reifen_felgen
# Freizeit, Hobby & Nachbarschaft
Sammeln: 185/234/sonstige
# Mode & Beauty
Beauty: 153/224/sonstiges
Gesundheit: 153/224/gesundheit
Mode: 153/155
# Mode & Beauty > Damenschuhe
Damenschuhe: 153/159/sonstiges
Damen_Ballerinas: 153/159/ballerinas
Damen_Halbschuhe: 153/159/halb_schnuerschuhe
Damen_Hausschuhe: 153/159/hausschuhe
Damen_High_Heels: 153/159/pumps
Damen_Pumps: 153/159/pumps
Damen_Sandalen: 153/159/sandalen
Damen_Schnürschuhe: 153/159/halb_schnuerschuhe
Damen_Sportschuche: 153/159/sneaker_sportschuhe
Damen_Sneaker: 153/159/sneaker_sportschuhe
Damen_Stiefel: 153/159/stiefel
Damen_Stiefeletten: 153/159/stiefel
Damen_Outdoorschuhe: 153/159/outdoor_wanderschuhe
Damen_Wanderschuhe: 153/159/outdoor_wanderschuhe
# Mode & Beauty > Herrenschuhe
Herrenschuhe: 153/158/sonstiges
Herren_Halbschuhe: 153/158/halb_schnuerschuhe
Herren_Hausschuhe: 153/158/hausschuhe
Herren_Sandalen: 153/158/sandalen
Herren_Schnürschuhe: 153/158/halb_schnuerschuhe
Herren_Sportschuche: 153/158/sneaker_sportschuhe
Herren_Sneaker: 153/158/sneaker_sportschuhe
Herren_Stiefel: 153/158/stiefel
Herren_Stiefeletten: 153/158/stiefel
Herren_Outdoorschuhe: 153/158/outdoor_wanderschuhe
Herren_Wanderschuhe: 153/158/outdoor_wanderschuhe
# Familie, Kind & Baby
Familie_Kind_Baby: 17/18
Altenpflege: 17/236
Babysitter: 17/237
Buggys: 17/25
Babyschalen: 17/21
Baby-Ausstattung: 17/258
Kinderbetreuung: 17/237
Kindersitze: 17/21
Kinderwagen: 17/25
# Familie, Kind & Baby > Spielzeug
Spielzeug: 17/23/sonstiges
Actionfiguren: 17/23/actionfiguren
Babyspielzeug: 17/23/babyspielzeug
Barbie: 17/23/barbie
Dreirad: 17/23/dreirad
Gesellschaftsspiele: 17/23/gesellschaftsspiele
Holzspielzeug: 17/23/holzspielzeug
Duplo: 17/23/lego_duplo
LEGO: 17/23/lego_duplo
Lernspielzeug: 17/23/lernspielzeug
Playmobil: 17/23/playmobil
Puppen: 17/23/puppen
Spielzeugautos: 17/23/spielzeug_autos
Spielzeug_draussen: 17/23/spielzeug_draussen
Stofftiere: 17/23/stofftiere
# Haus & Garten > Wohnzimmer
Wohnzimmer_Regale: 80/88/regale
Wohnzimmer_Schraenke: 80/88/schraenke
Wohnzimmer_Sitzmoebel: 80/88/sitzmoebel
Wohnzimmer_Sofas_Sitzgarnituren: 80/88/sofas_sitzgarnituren
Wohnzimmer_Tische: 80/88/tische
Wohnzimmer_TV_Moebel: 80/88/tv_moebel
Wohnzimmer_Sonstiges: 80/88/sonstiges
# Verschenken & Tauschen
Tauschen: 272/273
Verleihen: 272/274
Verschenken: 272/192
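The deprecated names above stay resolvable alongside the current path-style mappings. A minimal sketch of such a two-step lookup (the `resolve` helper and the two dicts are illustrative assumptions, with one entry from each list; `Elektronik > Notebooks` is taken from the deprecated file's own comment):

```python
# Current mappings use full "A > B > C" paths; deprecated entries are bare names.
CURRENT = {"Elektronik > Notebooks": "161/278"}
DEPRECATED = {"Notebooks": "161/278", "Verschenken": "272/192"}

def resolve(name: str) -> str:
    """Prefer the current mapping; fall back to deprecated aliases."""
    if name in CURRENT:
        return CURRENT[name]
    return DEPRECATED[name]  # raises KeyError for unknown categories
```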

ad_files:
- "./**/ad_*.{json,yml,yaml}"
# default values for ads, can be overwritten in each ad configuration file
ad_defaults:
active: true
type: OFFER # one of: OFFER, WANTED
description:
prefix: ""
suffix: ""
price_type: NEGOTIABLE # one of: FIXED, NEGOTIABLE, GIVE_AWAY, NOT_APPLICABLE
shipping_type: SHIPPING # one of: PICKUP, SHIPPING, NOT_APPLICABLE
sell_directly: false # requires shipping_options to take effect
contact:
name: ""
street: ""
zipcode:
phone: "" # IMPORTANT: surround phone number with quotes to prevent removal of leading zeros
republication_interval: 7 # every X days ads should be re-published
# additional name to category ID mappings, see default list at
# https://github.com/Second-Hand-Friends/kleinanzeigen-bot/blob/main/kleinanzeigen_bot/resources/categories.yaml
# Notebooks: 161/278 # Elektronik > Notebooks
# Autoteile: 210/223/sonstige_autoteile # Auto, Rad & Boot > Autoteile & Reifen > Weitere Autoteile
categories: []
# browser configuration
browser:
# https://peter.sh/experiments/chromium-command-line-switches/
arguments:
# https://stackoverflow.com/a/50725918/5116073
- --disable-dev-shm-usage
- --no-sandbox
# --headless
# --start-maximized
binary_location: # path to custom browser executable; if not specified, the browser is looked up on PATH
extensions: [] # a list of .crx extension files to be loaded
use_private_window: true
user_data_dir: "" # see https://github.com/chromium/chromium/blob/main/docs/user_data_dir.md
profile_name: ""
# login credentials
login:
username: ""
password: ""
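Per the comments above, `ad_defaults` holds fallback values that each ad configuration file can override. A minimal sketch of that merge using plain dicts instead of the YAML files (`apply_defaults` is a hypothetical helper; key names are taken from the config above):

```python
# Fallback values from the ad_defaults section above (subset).
AD_DEFAULTS = {
    "active": True,
    "type": "OFFER",
    "price_type": "NEGOTIABLE",
    "shipping_type": "SHIPPING",
    "republication_interval": 7,
}

def apply_defaults(ad: dict, defaults: dict = AD_DEFAULTS) -> dict:
    # Values set in the ad file win; missing keys fall back to the defaults.
    merged = dict(defaults)
    merged.update(ad)
    return merged

ad = apply_defaults({"type": "WANTED", "title": "Old laptop"})
# ad["type"] is "WANTED" (overridden), ad["republication_interval"] is 7 (default)
```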

#################################################
getopt.py:
#################################################
do_longs:
"option --%s requires argument": "Option --%s benötigt ein Argument"
"option --%s must not have an argument": "Option --%s darf kein Argument haben"
long_has_args:
"option --%s not recognized": "Option --%s unbekannt"
"option --%s not a unique prefix": "Option --%s ist kein eindeutiges Präfix"
do_shorts:
"option -%s requires argument": "Option -%s benötigt ein Argument"
short_has_arg:
"option -%s not recognized": "Option -%s unbekannt"
#################################################
kleinanzeigen_bot/__main__.py:
#################################################
module:
"[INFO] Captcha detected. Sleeping %s before restart...": "[INFO] Captcha erkannt. Warte %s h bis zum Neustart..."
#################################################
kleinanzeigen_bot/__init__.py:
#################################################
module:
"Direct execution not supported. Use 'pdm run app'": "Direkte Ausführung nicht unterstützt. Bitte 'pdm run app' verwenden"
create_default_config:
"Config file %s already exists. Aborting creation.": "Konfigurationsdatei %s existiert bereits. Erstellung abgebrochen."
_workspace_or_raise:
"Workspace must be resolved before command execution": "Arbeitsbereich muss vor der Befehlsausführung aufgelöst werden"
configure_file_logging:
"Logging to [%s]...": "Protokollierung in [%s]..."
"App version: %s": "App Version: %s"
"Python version: %s": "Python Version: %s"
_fetch_published_ads:
"Empty JSON response content on page %s": "Leerer JSON-Antwortinhalt auf Seite %s"
"Failed to parse JSON response on page %s: %s (content: %s)": "Fehler beim Parsen der JSON-Antwort auf Seite %s: %s (Inhalt: %s)"
"Stopping pagination after %s pages to avoid infinite loop": "Stoppe die Paginierung nach %s Seiten, um eine Endlosschleife zu vermeiden"
"Pagination request timed out on page %s: %s": "Zeitüberschreitung bei der Seitenabfrage auf Seite %s: %s"
"Unexpected JSON payload on page %s (content: %s)": "Unerwartete JSON-Antwort auf Seite %s (Inhalt: %s)"
"Unexpected 'ads' type on page %s: %s value: %s": "Unerwarteter 'ads'-Typ auf Seite %s: %s Wert: %s"
"Reached last page %s of %s, stopping pagination": "Letzte Seite %s von %s erreicht, beende Paginierung"
"No ads found on page %s, stopping pagination": "Keine Anzeigen auf Seite %s gefunden, beende Paginierung"
"Invalid 'next' page value in paging info: %s, stopping pagination": "Ungültiger 'next'-Seitenwert in Paginierungsinfo: %s, beende Paginierung"
"Invalid 'pageNum' in paging info: %s, stopping pagination": "Ungültiger 'pageNum'-Wert in Paginierungsinfo: %s, beende Paginierung"
__check_ad_changed:
"Hash comparison for [%s]:": "Hash-Vergleich für [%s]:"
" Stored hash: %s": " Gespeicherter Hash: %s"
" Current hash: %s": " Aktueller Hash: %s"
"Changes detected in ad [%s], will republish": "Änderungen in Anzeige [%s] erkannt, wird erneut veröffentlicht"
load_ads:
"Searching for ad config files...": "Suche nach Anzeigendateien..."
" -> found %s": " -> %s gefunden"
"ad config file": "Anzeigendatei"
"Start fetch task for the ad(s) with id(s):": "Starte Abrufaufgabe für die Anzeige(n) mit ID(s):"
" -> SKIPPED: inactive ad [%s]": " -> ÜBERSPRUNGEN: inaktive Anzeige [%s]"
" -> SKIPPED: ad [%s] is not in list of given ids.": " -> ÜBERSPRUNGEN: Anzeige [%s] ist nicht in der Liste der angegebenen IDs."
" -> SKIPPED: ad [%s] is not new. already has an id assigned.": " -> ÜBERSPRUNGEN: Anzeige [%s] ist nicht neu. Eine ID wurde bereits zugewiesen."
"Category [%s] unknown. Using category [%s] with ID [%s] instead.": "Kategorie [%s] unbekannt. Verwende stattdessen Kategorie [%s] mit ID [%s]."
" -> LOADED: ad [%s]": " -> GELADEN: Anzeige [%s]"
"Loaded %s": "%s geladen"
"ad": "Anzeige"
load_config:
"config": "Konfiguration"
"Loaded %s categories from categories.yaml": "%s Kategorien aus categories.yaml geladen"
"Loaded %s categories from categories_old.yaml": "%s Kategorien aus categories_old.yaml geladen"
"Loaded %s categories from config.yaml (custom)": "%s Kategorien aus config.yaml geladen (benutzerdefiniert)"
"Loaded %s categories in total": "%s Kategorien insgesamt geladen"
"No categories loaded - category files may be missing or empty": "Keine Kategorien geladen - Kategorie-Dateien fehlen oder sind leer"
check_and_wait_for_captcha:
"# Captcha present! Please solve the captcha.": "# Captcha vorhanden! Bitte lösen Sie das Captcha."
"Captcha recognized - auto-restart enabled, abort run...": "Captcha erkannt - Auto-Neustart aktiviert, Durchlauf wird beendet..."
"Press a key to continue...": "Eine Taste drücken, um fortzufahren..."
_capture_login_detection_diagnostics_if_enabled:
"# Login detection returned UNKNOWN. Browser is paused for manual inspection.": "# Login-Erkennung ergab UNKNOWN. Browser ist zur manuellen Prüfung angehalten."
"Press a key to continue...": "Eine Taste drücken, um fortzufahren..."
_capture_publish_error_diagnostics_if_enabled:
"Diagnostics capture failed during publish error handling: %s": "Diagnose-Erfassung während der Fehlerbehandlung beim Veröffentlichen fehlgeschlagen: %s"
login:
"Checking if already logged in...": "Überprüfe, ob bereits eingeloggt..."
"Current page URL after opening homepage: %s": "Aktuelle Seiten-URL nach dem Öffnen der Startseite: %s"
"Already logged in as [%s]. Skipping login.": "Bereits eingeloggt als [%s]. Überspringe Anmeldung."
"Opening login page...": "Öffne Anmeldeseite..."
"Login state is UNKNOWN - cannot determine if already logged in. Skipping login attempt.": "Login-Status ist UNKNOWN - es kann nicht festgestellt werden, ob bereits eingeloggt. Überspringe Anmeldeversuch."
"Login state is UNKNOWN after first login attempt - cannot determine login status. Aborting login process.": "Login-Status ist UNKNOWN nach dem ersten Anmeldeversuch - kann Login-Status nicht bestimmen. Breche Anmeldeprozess ab."
"First login attempt did not succeed, trying second login attempt": "Erster Anmeldeversuch war nicht erfolgreich, versuche zweiten Anmeldeversuch"
"Second login attempt succeeded": "Zweiter Anmeldeversuch erfolgreich"
"Second login attempt also failed - login may not have succeeded": "Zweiter Anmeldeversuch ebenfalls fehlgeschlagen - Anmeldung möglicherweise nicht erfolgreich"
is_logged_in:
"Starting login detection (timeout: %.1fs base, %.1fs effective with multiplier/backoff)": "Starte Login-Erkennung (Timeout: %.1fs Basis, %.1fs effektiv mit Multiplikator/Backoff)"
"Login detected via login detection selector '%s'": "Login erkannt über Login-Erkennungs-Selektor '%s'"
"Timeout waiting for login detection selector group after %.1fs": "Timeout beim Warten auf die Login-Erkennungs-Selektorgruppe nach %.1fs"
handle_after_login_logic:
"# Device verification message detected. Please follow the instruction displayed in the Browser.": "# Nachricht zur Geräteverifizierung erkannt. Bitte den Anweisungen im Browser folgen."
"Press ENTER when done...": "EINGABETASTE drücken, wenn erledigt..."
"Handling GDPR disclaimer...": "Verarbeite DSGVO-Hinweis..."
delete_ads:
"Processing %s/%s: '%s' from [%s]...": "Verarbeite %s/%s: '%s' von [%s]..."
"DONE: Deleted %s": "FERTIG: %s gelöscht"
"ad": "Anzeige"
delete_ad:
"Deleting ad '%s' if already present...": "Lösche Anzeige '%s', falls bereits vorhanden..."
"Expected CSRF Token not found in HTML content!": "Erwartetes CSRF-Token wurde im HTML-Inhalt nicht gefunden!"
" -> deleting %s '%s'...": " -> lösche %s '%s'..."
extend_ads:
"No ads need extension at this time.": "Keine Anzeigen müssen derzeit verlängert werden."
"DONE: No ads extended.": "FERTIG: Keine Anzeigen verlängert."
"DONE: Extended %s": "FERTIG: %s verlängert"
"ad": "Anzeige"
" -> SKIPPED: ad '%s' is not published yet": " -> ÜBERSPRUNGEN: Anzeige '%s' ist noch nicht veröffentlicht"
" -> SKIPPED: ad '%s' (ID: %s) not found in published ads": " -> ÜBERSPRUNGEN: Anzeige '%s' (ID: %s) nicht in veröffentlichten Anzeigen gefunden"
" -> SKIPPED: ad '%s' is not active (state: %s)": " -> ÜBERSPRUNGEN: Anzeige '%s' ist nicht aktiv (Status: %s)"
" -> SKIPPED: ad '%s' has no endDate in API response": " -> ÜBERSPRUNGEN: Anzeige '%s' hat kein Ablaufdatum in API-Antwort"
" -> ad '%s' expires in %d days, will extend": " -> Anzeige '%s' läuft in %d Tagen ab, wird verlängert"
" -> SKIPPED: ad '%s' expires in %d days (can only extend within 8 days)": " -> ÜBERSPRUNGEN: Anzeige '%s' läuft in %d Tagen ab (Verlängern nur innerhalb von 8 Tagen möglich)"
"Processing %s/%s: '%s' from [%s]...": "Verarbeite %s/%s: '%s' von [%s]..."
extend_ad:
"Extending ad '%s' (ID: %s)...": "Verlängere Anzeige '%s' (ID: %s)..."
" -> FAILED: Could not find extend button for ad ID %s": " -> FEHLER: 'Verlängern'-Button für Anzeigen-ID %s nicht gefunden"
" -> No confirmation dialog found, extension may have completed directly": " -> Kein Bestätigungsdialog gefunden, Verlängerung wurde möglicherweise direkt abgeschlossen"
" -> SUCCESS: ad extended with ID %s": " -> ERFOLG: Anzeige mit ID %s verlängert"
" -> FAILED: Timeout while extending ad '%s': %s": " -> FEHLER: Zeitüberschreitung beim Verlängern der Anzeige '%s': %s"
" -> FAILED: Could not persist extension for ad '%s': %s": " -> FEHLER: Verlängerung der Anzeige '%s' konnte nicht gespeichert werden: %s"
find_and_click_extend_button:
"Found extend button on page %s": "'Verlängern'-Button auf Seite %s gefunden"
_resolve_workspace:
"Config: %s": "Konfiguration: %s"
"Workspace mode: %s": "Arbeitsmodus: %s"
"Workspace: %s": "Arbeitsverzeichnis: %s"
parse_args:
"Use --help to display available options.": "Mit --help können die verfügbaren Optionen angezeigt werden."
"More than one command given: %s": "Mehr als ein Befehl angegeben: %s"
"Invalid --workspace-mode '%s'. Use 'portable' or 'xdg'.": "Ungültiger --workspace-mode '%s'. Verwenden Sie 'portable' oder 'xdg'."
publish_ads:
"Processing %s/%s: '%s' from [%s]...": "Verarbeite %s/%s: '%s' von [%s]..."
"Skipping because ad is reserved": "Überspringen, da Anzeige reserviert ist"
" -> Could not confirm publishing for '%s', but ad may be online": " -> Veröffentlichung für '%s' konnte nicht bestätigt werden, aber Anzeige ist möglicherweise online"
"Attempt %s/%s failed for '%s': %s. Retrying...": "Versuch %s/%s fehlgeschlagen für '%s': %s. Erneuter Versuch..."
"All %s attempts failed for '%s': %s. Skipping ad.": "Alle %s Versuche fehlgeschlagen für '%s': %s. Überspringe Anzeige."
"DONE: (Re-)published %s (%s failed after retries)": "FERTIG: %s (erneut) veröffentlicht (%s fehlgeschlagen nach Wiederholungen)"
"DONE: (Re-)published %s": "FERTIG: %s (erneut) veröffentlicht"
"ad": "Anzeige"
apply_auto_price_reduction:
"Auto price reduction is enabled for [%s] but no price is configured.": "Automatische Preisreduzierung ist für [%s] aktiviert, aber es wurde kein Preis konfiguriert."
"Auto price reduction is enabled for [%s] but min_price equals price (%s) - no reductions will occur.": "Automatische Preisreduzierung ist für [%s] aktiviert, aber min_price entspricht dem Preis (%s) - es werden keine Reduktionen auftreten."
"Auto price reduction applied: %s -> %s after %s reduction cycles": "Automatische Preisreduzierung angewendet: %s -> %s nach %s Reduktionszyklen"
"Auto price reduction kept price %s after attempting %s reduction cycles": "Automatische Preisreduzierung hat Preis %s beibehalten nach dem Versuch von %s Reduktionszyklen"
_repost_cycle_ready:
"Auto price reduction delayed for [%s]: waiting %s more reposts (completed %s, applied %s reductions)": "Automatische Preisreduzierung für [%s] verzögert: Warte %s weitere erneute Veröffentlichungen (abgeschlossen %s, angewendet %s Reduktionen)"
"Auto price reduction already applied for [%s]: %s reductions match %s eligible reposts": "Automatische Preisreduzierung für [%s] bereits angewendet: %s Reduktionen entsprechen %s berechtigten erneuten Veröffentlichungen"
_day_delay_elapsed:
"Auto price reduction delayed for [%s]: waiting %s days (elapsed %s)": "Automatische Preisreduzierung für [%s] verzögert: Warte %s Tage (vergangen %s)"
"Auto price reduction delayed for [%s]: waiting %s days but publish timestamp missing": "Automatische Preisreduzierung für [%s] verzögert: Warte %s Tage, aber Zeitstempel der Veröffentlichung fehlt"
publish_ad:
"Publishing ad '%s'...": "Veröffentliche Anzeige '%s'..."
"Updating ad '%s'...": "Aktualisiere Anzeige '%s'..."
"Failed to set shipping attribute for type '%s'!": "Fehler beim Setzen des Versandattributs für den Typ '%s'!"
"Shipping step skipped - reason: NOT_APPLICABLE": "Versandschritt übersprungen: Versand nicht anwendbar (Status = NOT_APPLICABLE)"
"# Payment form detected! Please proceed with payment.": "# Bestellformular gefunden! Bitte mit der Bezahlung fortfahren."
" -> SUCCESS: ad published with ID %s": " -> ERFOLG: Anzeige mit ID %s veröffentlicht"
" -> SUCCESS: ad updated with ID %s": " -> ERFOLG: Anzeige mit ID %s aktualisiert"
" -> effective ad meta:": " -> effektive Anzeigen-Metadaten:"
"Press a key to continue...": "Eine Taste drücken, um fortzufahren..."
update_ads:
"Processing %s/%s: '%s' from [%s]...": "Verarbeite %s/%s: '%s' von [%s]..."
"Skipping because ad is reserved": "Überspringen, da Anzeige reserviert ist"
" -> SKIPPED: ad '%s' (ID: %s) not found in published ads": " -> ÜBERSPRUNGEN: Anzeige '%s' (ID: %s) nicht in veröffentlichten Anzeigen gefunden"
"DONE: updated %s": "FERTIG: %s aktualisiert"
"ad": "Anzeige"
__set_condition:
"Unable to close condition dialog!": "Kann den Dialog für Artikelzustand nicht schließen!"
"Unable to open condition dialog and select condition [%s]": "Zustandsdialog konnte nicht geöffnet und Zustand [%s] nicht ausgewählt werden"
"Unable to select condition [%s]": "Zustand [%s] konnte nicht ausgewählt werden"
__set_contact_fields:
"Could not set contact street.": "Kontaktstraße konnte nicht gesetzt werden."
"Could not set contact name.": "Kontaktname konnte nicht gesetzt werden."
"Could not set contact location: %s": "Kontaktort konnte nicht gesetzt werden: %s"
"Could not set contact zipcode: %s": "Kontakt-PLZ konnte nicht gesetzt werden: %s"
"No city dropdown option matched location: %s": "Kein Eintrag im Orts-Dropdown passte zum Ort: %s"
? "Phone number field not present on page. This is expected for many private accounts; commercial accounts may still support phone numbers."
: "Telefonnummernfeld auf der Seite nicht vorhanden. Dies ist bei vielen privaten Konten zu erwarten; gewerbliche Konten unterstützen Telefonnummern möglicherweise weiterhin."
__upload_images:
" -> found %s": " -> %s gefunden"
"image": "Bild"
" -> uploading image [%s]": " -> Lade Bild [%s] hoch"
" -> waiting for %s to be processed...": " -> Warte auf Verarbeitung von %s..."
" -> all images uploaded successfully": " -> Alle Bilder erfolgreich hochgeladen"
"Image upload timeout exceeded": "Zeitüberschreitung beim Hochladen der Bilder"
"Not all images were uploaded within timeout. Expected %(expected)d, found %(found)d thumbnails.": "Nicht alle Bilder wurden innerhalb der Zeitüberschreitung hochgeladen. Erwartet: %(expected)d, gefunden: %(found)d Miniaturansichten."
check_thumbnails_uploaded:
" -> %d of %d images processed": " -> %d von %d Bildern verarbeitet"
__check_ad_republication:
" -> SKIPPED: ad [%s] was last published %d days ago. republication is only required every %s days": " -> ÜBERSPRUNGEN: Anzeige [%s] wurde zuletzt vor %d Tagen veröffentlicht. Erneute Veröffentlichung ist erst nach %s Tagen erforderlich"
__set_special_attributes:
"Found %i special attributes": "%i spezielle Attribute gefunden"
"Setting special attribute [%s] to [%s]...": "Setze spezielles Attribut [%s] auf [%s]..."
"Successfully set attribute field [%s] to [%s]...": "Attributfeld [%s] erfolgreich auf [%s] gesetzt..."
"Attribute field '%s' could not be found.": "Attributfeld '%s' konnte nicht gefunden werden."
"Failed to set attribute '%s'": "Fehler beim Setzen des Attributs '%s'"
"Attribute field '%s' seems to be a select...": "Attributfeld '%s' scheint ein Auswahlfeld zu sein..."
"Failed to set attribute field '%s' via known input types.": "Fehler beim Setzen des Attributfelds '%s' über bekannte Eingabetypen."
"Attribute field '%s' seems to be a checkbox...": "Attributfeld '%s' scheint eine Checkbox zu sein..."
"Attribute field '%s' seems to be a text input...": "Attributfeld '%s' scheint ein Texteingabefeld zu sein..."
"Attribute field '%s' seems to be a Combobox (i.e. text input with filtering dropdown)...": "Attributfeld '%s' scheint eine Combobox zu sein (d.h. Texteingabefeld mit Dropdown-Filter)..."
download_ads:
"Fetching published ads...": "Lade veröffentlichte Anzeigen..."
"Loaded %s published ads.": "%s veröffentlichte Anzeigen geladen."
"Scanning your ad overview...": "Scanne Anzeigenübersicht..."
"%s found.": "%s gefunden."
"ad": "Anzeige"
"Starting download of all ads...": "Starte den Download aller Anzeigen..."
"%d of %d ads were downloaded from your profile.": "%d von %d Anzeigen wurden aus Ihrem Profil heruntergeladen."
"Starting download of not yet downloaded ads...": "Starte den Download noch nicht heruntergeladener Anzeigen..."
"Skipping ad with non-numeric id: %s": "Überspringe Anzeige mit nicht-numerischer ID: %s"
"The ad with id %d has already been saved.": "Die Anzeige mit der ID %d wurde bereits gespeichert."
"%s were downloaded from your profile.": "%s wurden aus Ihrem Profil heruntergeladen."
"new ad": "neue Anzeige"
"Starting download of ad(s) with the id(s):": "Starte Download der Anzeige(n) mit den ID(s):"
"Downloaded ad with id %d": "Anzeige mit der ID %d heruntergeladen"
"The page with the id %d does not exist!": "Die Seite mit der ID %d existiert nicht!"
run:
"DONE: No configuration errors found.": "FERTIG: Keine Konfigurationsfehler gefunden."
"DONE: No active ads found.": "FERTIG: Keine aktiven Anzeigen gefunden."
"Invalid --ads selector: \"%s\". Valid values: all, new, due, changed, or comma-separated numeric IDs.": "Ungültiger --ads-Selektor: \"%s\". Gültige Werte: all, new, due, changed oder kommagetrennte numerische IDs."
"Invalid --ads selector: \"%s\". Valid values: all, changed, or comma-separated numeric IDs.": "Ungültiger --ads-Selektor: \"%s\". Gültige Werte: all, changed oder kommagetrennte numerische IDs."
"Invalid --ads selector: \"%s\". Valid values: all, new, or comma-separated numeric IDs.": "Ungültiger --ads-Selektor: \"%s\". Gültige Werte: all, new oder kommagetrennte numerische IDs."
"Invalid --ads selector: \"%s\". Valid values: all or comma-separated numeric IDs.": "Ungültiger --ads-Selektor: \"%s\". Gültige Werte: all oder kommagetrennte numerische IDs."
"DONE: No new/outdated ads found.": "FERTIG: Keine neuen/veralteten Anzeigen gefunden."
"DONE: No ads to delete found.": "FERTIG: Keine zu löschenden Anzeigen gefunden."
"DONE: No changed ads found.": "FERTIG: Keine geänderten Anzeigen gefunden."
"Extending all ads within 8-day window...": "Verlängere alle Anzeigen innerhalb des 8-Tage-Zeitfensters..."
"DONE: No ads found to extend.": "FERTIG: Keine Anzeigen zum Verlängern gefunden."
"Unknown command: %s": "Unbekannter Befehl: %s"
"Timing collector flush failed: %s": "Zeitmessdaten konnten nicht gespeichert werden: %s"
fill_login_data_and_send:
"Logging in as [%s]...": "Anmeldung als [%s]..."
__set_shipping:
"Unable to close shipping dialog!": "Versanddialog konnte nicht geschlossen werden!"
__set_shipping_options:
"Unable to close shipping dialog!": "Versanddialog konnte nicht geschlossen werden!"
update_content_hashes:
"DONE: Updated [content_hash] in %s": "FERTIG: [content_hash] in %s aktualisiert"
"Processing %s/%s: '%s' from [%s]...": "Verarbeite %s/%s: '%s' von [%s]..."
"ad": "Anzeige"
#################################################
kleinanzeigen_bot/extract.py:
#################################################
download_ad:
"Using download directory: %s": "Verwende Download-Verzeichnis: %s"
_download_and_save_image_sync:
"Failed to download image %s: %s": "Fehler beim Herunterladen des Bildes %s: %s"
_download_images_from_ad_page:
"Found %s.": "%s gefunden."
"Downloaded %s.": "%s heruntergeladen."
"No image area found. Continuing without downloading images.": "Kein Bildbereich gefunden. Fahre ohne Bilder-Download fort."
extract_ad_id_from_ad_url:
"Failed to extract ad ID from URL '%s': %s": "Fehler beim Extrahieren der Anzeigen-ID aus der URL '%s': %s"
extract_own_ads_urls:
"No ad URLs were extracted.": "Es wurden keine Anzeigen-URLs extrahiert."
extract_page_refs:
"Could not find ad list container or ad items on page %s.": "Anzeigenlistencontainer oder Anzeigenelemente auf Seite %s nicht gefunden."
"Error extracting refs on page %s: %s": "Fehler beim Extrahieren der Referenzen auf Seite %s: %s"
"Found %s ad items on page %s.": "%s Anzeigen-Elemente auf Seite %s gefunden."
"Skipping ad item %s/%s on page %s: ad reference link has no href attribute.": "Überspringe Anzeigenelement %s/%s auf Seite %s: Anzeigenlink hat kein href-Attribut."
"Skipping ad item %s/%s on page %s: no ad reference link found (likely unpublished or draft item).": "Überspringe Anzeigenelement %s/%s auf Seite %s: kein Anzeigenlink gefunden (wahrscheinlich unveröffentlicht oder Entwurf)."
"Successfully extracted %s refs from page %s.": "%s Referenzen von Seite %s erfolgreich extrahiert."
navigate_to_ad_page:
"There is no ad under the given ID.": "Es gibt keine Anzeige unter der angegebenen ID."
"A popup appeared!": "Ein Popup ist erschienen!"
_extract_ad_page_info_with_directory_handling:
"Extracting title from ad %s: \"%s\"": "Extrahiere Titel aus Anzeige %s: \"%s\""
"Deleting current folder of ad %s...": "Lösche aktuellen Ordner der Anzeige %s..."
"New directory for ad created at %s.": "Neues Verzeichnis für Anzeige erstellt unter %s."
"Renaming folder from %s to %s for ad %s...": "Benenne Ordner von %s zu %s für Anzeige %s um..."
"Using existing folder for ad %s at %s.": "Verwende bestehenden Ordner für Anzeige %s unter %s."
_extract_contact_from_ad_page:
"No street given in the contact.": "Keine Straße in den Kontaktdaten angegeben."
_extract_category_from_ad_page:
"Breadcrumb container 'vap-brdcrmb' not found; cannot extract ad category: %s": "Breadcrumb-Container 'vap-brdcrmb' nicht gefunden; kann Anzeigenkategorie nicht extrahieren: %s"
"Falling back to legacy breadcrumb selectors; collected ids: %s": "Weiche auf ältere Breadcrumb-Selektoren aus; gesammelte IDs: %s"
"Legacy breadcrumb selectors not found within %.1f seconds (collected ids: %s)": "Ältere Breadcrumb-Selektoren nicht innerhalb von %.1f Sekunden gefunden (gesammelte IDs: %s)"
"Unable to locate breadcrumb fallback selectors within %(seconds).1f seconds.": "Ältere Breadcrumb-Selektoren konnten nicht innerhalb von %(seconds).1f Sekunden gefunden werden."
_extract_sell_directly_from_ad_page:
"Could not extract ad ID from URL: %s": "Konnte Anzeigen-ID nicht aus der URL extrahieren: %s"
#################################################
kleinanzeigen_bot/utils/i18n.py:
#################################################
_detect_locale:
"Error detecting language on Windows": "Fehler bei der Spracherkennung unter Windows"
#################################################
kleinanzeigen_bot/utils/error_handlers.py:
#################################################
on_sigint:
"Aborted on user request.": "Auf Benutzeranfrage abgebrochen."
on_exception:
"%s: %s": "%s: %s"
"Unknown exception occurred (missing exception info): ex_type=%s, ex=%s": "Unbekannter Fehler aufgetreten (fehlende Fehlerinformation): ex_type=%s, ex=%s"
#################################################
kleinanzeigen_bot/utils/loggers.py:
#################################################
format:
"CRITICAL": "KRITISCH"
"ERROR": "FEHLER"
"WARNING": "WARNUNG"
#################################################
kleinanzeigen_bot/utils/dicts.py:
#################################################
load_dict_if_exists:
"Loading %s[%s]...": "Lade %s[%s]..."
"Unsupported file type. The filename \"%s\" must end with *.json, *.yaml, or *.yml": "Nicht unterstützter Dateityp. Der Dateiname \"%s\" muss mit *.json, *.yaml oder *.yml enden"
save_dict:
"Saving [%s]...": "Speichere [%s]..."
save_commented_model:
"Saving [%s]...": "Speichere [%s]..."
load_dict_from_module:
"Loading %s[%s.%s]...": "Lade %s[%s.%s]..."
#################################################
kleinanzeigen_bot/utils/pydantics.py:
#################################################
__get_message_template:
"Object has no attribute '{attribute}'": "Objekt hat kein Attribut '{attribute}'"
"Invalid JSON: {error}": "Ungültiges JSON: {error}"
"JSON input should be string, bytes or bytearray": "JSON-Eingabe sollte eine Zeichenkette, Bytes oder Bytearray sein"
"Cannot check `{method_name}` when validating from json, use a JsonOrPython validator instead": "Kann `{method_name}` beim Validieren von JSON nicht prüfen, verwende stattdessen einen JsonOrPython-Validator"
"Recursion error - cyclic reference detected": "Rekursionsfehler - zirkuläre Referenz erkannt"
"Field required": "Feld erforderlich"
"Field is frozen": "Feld ist gesperrt"
"Instance is frozen": "Instanz ist gesperrt"
"Extra inputs are not permitted": "Zusätzliche Eingaben sind nicht erlaubt"
"Keys should be strings": "Schlüssel sollten Zeichenketten sein"
"Error extracting attribute: {error}": "Fehler beim Extrahieren des Attributs: {error}"
"Input should be a valid dictionary or instance of {class_name}": "Eingabe sollte ein gültiges Wörterbuch oder eine Instanz von {class_name} sein"
"Input should be a valid dictionary or object to extract fields from": "Eingabe sollte ein gültiges Wörterbuch oder Objekt sein, um Felder daraus zu extrahieren"
"Input should be a dictionary or an instance of {class_name}": "Eingabe sollte ein Wörterbuch oder eine Instanz von {class_name} sein"
"Input should be an instance of {class_name}": "Eingabe sollte eine Instanz von {class_name} sein"
"Input should be None": "Eingabe sollte None sein"
"Input should be greater than {gt}": "Eingabe sollte größer als {gt} sein"
"Input should be greater than or equal to {ge}": "Eingabe sollte größer oder gleich {ge} sein"
"Input should be less than {lt}": "Eingabe sollte kleiner als {lt} sein"
"Input should be less than or equal to {le}": "Eingabe sollte kleiner oder gleich {le} sein"
"Input should be a multiple of {multiple_of}": "Eingabe sollte ein Vielfaches von {multiple_of} sein"
"Input should be a finite number": "Eingabe sollte eine endliche Zahl sein"
"{field_type} should have at least {min_length} item{expected_plural} after validation, not {actual_length}": "{field_type} sollte nach der Validierung mindestens {min_length} Element{expected_plural} haben, nicht {actual_length}"
"{field_type} should have at most {max_length} item{expected_plural} after validation, not {actual_length}": "{field_type} sollte nach der Validierung höchstens {max_length} Element{expected_plural} haben, nicht {actual_length}"
"Input should be iterable": "Eingabe sollte iterierbar sein"
"Error iterating over object, error: {error}": "Fehler beim Iterieren des Objekts: {error}"
"Input should be a valid string": "Eingabe sollte eine gültige Zeichenkette sein"
"Input should be a string, not an instance of a subclass of str": "Eingabe sollte ein String sein, keine Instanz einer Unterklasse von str"
"Input should be a valid string, unable to parse raw data as a unicode string": "Eingabe sollte eine gültige Zeichenkette sein, Rohdaten können nicht als Unicode-String geparst werden"
"String should have at least {min_length} character{expected_plural}": "String sollte mindestens {min_length} Zeichen{expected_plural} haben"
"String should have at most {max_length} character{expected_plural}": "String sollte höchstens {max_length} Zeichen{expected_plural} haben"
"String should match pattern '{pattern}'": "String sollte dem Muster '{pattern}' entsprechen"
"Input should be {expected}": "Eingabe sollte {expected} sein"
"Input should be a valid dictionary": "Eingabe sollte ein gültiges Wörterbuch sein"
"Input should be a valid mapping, error: {error}": "Eingabe sollte eine gültige Zuordnung sein, Fehler: {error}"
"Input should be a valid list": "Eingabe sollte eine gültige Liste sein"
"Input should be a valid tuple": "Eingabe sollte ein gültiges Tupel sein"
"Input should be a valid set": "Eingabe sollte eine gültige Menge sein"
"Set items should be hashable": "Elemente einer Menge sollten hashbar sein"
"Input should be a valid boolean": "Eingabe sollte ein gültiger Boolescher Wert sein"
"Input should be a valid boolean, unable to interpret input": "Eingabe sollte ein gültiger Boolescher Wert sein, Eingabe kann nicht interpretiert werden"
"Input should be a valid integer": "Eingabe sollte eine gültige Ganzzahl sein"
"Input should be a valid integer, unable to parse string as an integer": "Eingabe sollte eine gültige Ganzzahl sein, Zeichenkette konnte nicht als Ganzzahl geparst werden"
"Input should be a valid integer, got a number with a fractional part": "Eingabe sollte eine gültige Ganzzahl sein, Zahl hat einen Dezimalteil"
"Unable to parse input string as an integer, exceeded maximum size": "Zeichenkette konnte nicht als Ganzzahl geparst werden, maximale Größe überschritten"
"Input should be a valid number": "Eingabe sollte eine gültige Zahl sein"
"Input should be a valid number, unable to parse string as a number": "Eingabe sollte eine gültige Zahl sein, Zeichenkette kann nicht als Zahl geparst werden"
"Input should be a valid bytes": "Eingabe sollte gültige Bytes sein"
"Data should have at least {min_length} byte{expected_plural}": "Daten sollten mindestens {min_length} Byte{expected_plural} enthalten"
"Data should have at most {max_length} byte{expected_plural}": "Daten sollten höchstens {max_length} Byte{expected_plural} enthalten"
"Data should be valid {encoding}: {encoding_error}": "Daten sollten gültiges {encoding} sein: {encoding_error}"
"Value error, {error}": "Wertfehler: {error}"
"Assertion failed, {error}": "Assertion fehlgeschlagen: {error}"
"Input should be a valid date": "Eingabe sollte ein gültiges Datum sein"
"Input should be a valid date in the format YYYY-MM-DD, {error}": "Eingabe sollte ein gültiges Datum im Format YYYY-MM-DD sein: {error}"
"Input should be a valid date or datetime, {error}": "Eingabe sollte ein gültiges Datum oder eine gültige Datums-Uhrzeit sein: {error}"
"Datetimes provided to dates should have zero time - e.g. be exact dates": "Datetime-Werte für Datum sollten keine Uhrzeit enthalten - also exakte Daten sein"
"Date should be in the past": "Datum sollte in der Vergangenheit liegen"
"Date should be in the future": "Datum sollte in der Zukunft liegen"
"Input should be a valid time": "Eingabe sollte eine gültige Uhrzeit sein"
"Input should be in a valid time format, {error}": "Eingabe sollte in einem gültigen Zeitformat sein: {error}"
"Input should be a valid datetime": "Eingabe sollte ein gültiges Datum mit Uhrzeit sein"
"Input should be a valid datetime, {error}": "Eingabe sollte ein gültiges Datum mit Uhrzeit sein: {error}"
"Invalid datetime object, got {error}": "Ungültiges Datetime-Objekt: {error}"
"Input should be a valid datetime or date, {error}": "Eingabe sollte ein gültiges Datum oder Datum mit Uhrzeit sein: {error}"
"Input should be in the past": "Eingabe sollte in der Vergangenheit liegen"
"Input should be in the future": "Eingabe sollte in der Zukunft liegen"
"Input should not have timezone info": "Eingabe sollte keine Zeitzonen-Information enthalten"
"Input should have timezone info": "Eingabe sollte Zeitzonen-Information enthalten"
"Timezone offset of {tz_expected} required, got {tz_actual}": "Zeitzonen-Offset von {tz_expected} erforderlich, erhalten: {tz_actual}"
"Input should be a valid timedelta": "Eingabe sollte ein gültiges Zeitdelta sein"
"Input should be a valid timedelta, {error}": "Eingabe sollte ein gültiges Zeitdelta sein: {error}"
"Input should be a valid frozenset": "Eingabe sollte ein gültiges Frozenset sein"
"Input should be an instance of {class}": "Eingabe sollte eine Instanz von {class} sein"
"Input should be a subclass of {class}": "Eingabe sollte eine Unterklasse von {class} sein"
"Input should be callable": "Eingabe sollte aufrufbar sein"
"Input tag '{tag}' found using {discriminator} does not match any of the expected tags: {expected_tags}": "Eingabe-Tag '{tag}', ermittelt durch {discriminator}, stimmt mit keinem der erwarteten Tags überein: {expected_tags}"
"Unable to extract tag using discriminator {discriminator}": "Tag kann mit {discriminator} nicht extrahiert werden"
"Arguments must be a tuple, list or a dictionary": "Argumente müssen ein Tupel, eine Liste oder ein Wörterbuch sein"
"Missing required argument": "Erforderliches Argument fehlt"
"Unexpected keyword argument": "Unerwartetes Schlüsselwort-Argument"
"Missing required keyword only argument": "Erforderliches keyword-only-Argument fehlt"
"Unexpected positional argument": "Unerwartetes Positionsargument"
"Missing required positional only argument": "Erforderliches positional-only-Argument fehlt"
"Got multiple values for argument": "Mehrere Werte für Argument erhalten"
"URL input should be a string or URL": "URL-Eingabe sollte eine Zeichenkette oder URL sein"
"Input should be a valid URL, {error}": "Eingabe sollte eine gültige URL sein: {error}"
"Input violated strict URL syntax rules, {error}": "Eingabe hat strikte URL-Syntaxregeln verletzt: {error}"
"URL should have at most {max_length} character{expected_plural}": "URL sollte höchstens {max_length} Zeichen{expected_plural} haben"
"URL scheme should be {expected_schemes}": "URL-Schema sollte {expected_schemes} sein"
"UUID input should be a string, bytes or UUID object": "UUID-Eingabe sollte eine Zeichenkette, Bytes oder ein UUID-Objekt sein"
"Input should be a valid UUID, {error}": "Eingabe sollte eine gültige UUID sein: {error}"
"UUID version {expected_version} expected": "UUID-Version {expected_version} erwartet"
"Decimal input should be an integer, float, string or Decimal object": "Decimal-Eingabe sollte eine Ganzzahl, Gleitkommazahl, Zeichenkette oder ein Decimal-Objekt sein"
"Input should be a valid decimal": "Eingabe sollte ein gültiges Decimal sein"
"Decimal input should have no more than {max_digits} digit{expected_plural} in total": "Decimal-Eingabe sollte insgesamt nicht mehr als {max_digits} Ziffer{expected_plural} haben"
"Decimal input should have no more than {decimal_places} decimal place{expected_plural}": "Decimal-Eingabe sollte nicht mehr als {decimal_places} Dezimalstelle{expected_plural} haben"
"Decimal input should have no more than {whole_digits} digit{expected_plural} before the decimal point": "Decimal-Eingabe sollte vor dem Dezimalpunkt nicht mehr als {whole_digits} Ziffer{expected_plural} haben"
? "Input should be a valid python complex object, a number, or a valid complex string following the rules at https://docs.python.org/3/library/functions.html#complex"
: "Eingabe sollte ein gültiges Python-complex-Objekt, eine Zahl oder eine gültige komplexe Zeichenkette sein, gemäß https://docs.python.org/3/library/functions.html#complex"
"Input should be a valid complex string following the rules at https://docs.python.org/3/library/functions.html#complex": "Eingabe sollte eine gültige komplexe Zeichenkette sein, gemäß https://docs.python.org/3/library/functions.html#complex"
format_validation_error:
"validation error": "Validierungsfehler"
"%s for [%s]:": "%s für [%s]:"
"' or '": "' oder '"
#################################################
kleinanzeigen_bot/utils/web_scraping_mixin.py:
#################################################
create_browser_session:
"Creating Browser session...": "Erstelle Browser-Sitzung..."
"Using existing browser process at %s:%s": "Verwende existierenden Browser-Prozess unter %s:%s"
"New Browser session is %s": "Neue Browser-Sitzung ist %s"
" -> Browser binary location: %s": " -> Browser-Programmpfad: %s"
" -> Browser profile name: %s": " -> Browser-Profilname: %s"
" -> Browser user data dir: %s": " -> Browser-Benutzerdatenverzeichnis: %s"
" -> Custom Browser argument: %s": " -> Benutzerdefiniertes Browser-Argument: %s"
"Ignoring empty --user-data-dir= argument; falling back to configured user_data_dir.": "Ignoriere leeres --user-data-dir= Argument; verwende konfiguriertes user_data_dir."
"Configured browser.user_data_dir (%s) does not match --user-data-dir argument (%s); using the argument value.": "Konfiguriertes browser.user_data_dir (%s) stimmt nicht mit dem --user-data-dir-Argument (%s) überein; verwende den Wert des Arguments."
"Remote debugging detected, but browser configuration looks invalid: %s": "Remote-Debugging erkannt, aber Browser-Konfiguration scheint ungültig: %s"
" -> Setting chrome prefs [%s]...": " -> Setze Chrome-Einstellungen [%s]..."
" -> Adding Browser extension: [%s]": " -> Füge Browser-Erweiterung hinzu: [%s]"
"Failed to connect to browser. This error often occurs when:": "Fehler beim Verbinden mit dem Browser. Dieser Fehler tritt häufig auf, wenn:"
"Failed to start browser. This error often occurs when:": "Fehler beim Starten des Browsers. Dieser Fehler tritt häufig auf, wenn:"
"1. Running as root user (try running as regular user)": "1. Als Root-Benutzer ausgeführt wird (versuchen Sie es als normaler Benutzer)"
"2. Browser profile is locked or in use by another process": "2. Das Browser-Profil gesperrt oder von einem anderen Prozess verwendet wird"
"3. Insufficient permissions to access the browser profile": "3. Unzureichende Berechtigungen für den Zugriff auf das Browser-Profil"
"4. Browser is not properly started with remote debugging enabled": "4. Der Browser nicht ordnungsgemäß mit aktiviertem Remote-Debugging gestartet wurde"
"4. Browser binary is not executable or missing": "4. Die Browser-Binärdatei nicht ausführbar oder fehlend ist"
"5. Check if any antivirus or security software is blocking the browser": "5. Überprüfen Sie, ob Antiviren- oder Sicherheitssoftware den Browser blockiert"
"Troubleshooting steps:": "Schritte zur Fehlerbehebung:"
"1. Close all browser instances and try again": "1. Schließen Sie alle Browser-Instanzen und versuchen Sie es erneut"
"2. Remove the user_data_dir configuration temporarily": "2. Entfernen Sie die user_data_dir-Konfiguration vorübergehend"
"3. Start browser manually with: %s --remote-debugging-port=%d": "3. Starten Sie den Browser manuell mit: %s --remote-debugging-port=%d"
"3. Try running without profile configuration": "3. Versuchen Sie es ohne Profil-Konfiguration"
"4. Check browser binary permissions: %s": "4. Überprüfen Sie die Browser-Binärdatei-Berechtigungen: %s"
"4. Check if any antivirus or security software is blocking the connection": "4. Überprüfen Sie, ob Antiviren- oder Sicherheitssoftware die Verbindung blockiert"
web_check:
"Unsupported attribute: %s": "Nicht unterstütztes Attribut: %s"
web_select:
"Option not found by value or displayed text: %s": "Option nicht gefunden nach Wert oder angezeigtem Text: %s"
web_select_combobox:
"Combobox input field does not have 'aria-controls' attribute.": "Das Eingabefeld der Combobox hat kein 'aria-controls'-Attribut."
"Combobox missing aria-controls attribute": "Der Combobox fehlt das 'aria-controls'-Attribut"
"No matching option found in combobox: '%s'": "Keine passende Option in Combobox gefunden: '%s'"
_navigate_paginated_ad_overview:
"Failed to open ad overview page at %s: timeout": "Fehler beim Öffnen der Anzeigenübersichtsseite unter %s: Zeitüberschreitung"
"Scroll timeout on page %s (non-critical, continuing)": "Zeitüberschreitung beim Scrollen auf Seite %s (nicht kritisch, wird fortgesetzt)"
"Page action timed out on page %s": "Seitenaktion hat auf Seite %s eine Zeitüberschreitung erreicht"
"Ad list container not found. Maybe no ads present?": "Anzeigenlistencontainer nicht gefunden. Vielleicht sind keine Anzeigen vorhanden?"
"Multiple ad pages detected.": "Mehrere Anzeigenseiten erkannt."
"No pagination controls found. Assuming single page.": "Keine Paginierungssteuerung gefunden. Es wird von einer einzelnen Seite ausgegangen."
"Processing page %s...": "Verarbeite Seite %s..."
"Navigating to page %s...": "Navigiere zu Seite %s..."
"Last page reached (no enabled 'Naechste' button found).": "Letzte Seite erreicht (kein aktivierter 'Naechste'-Button gefunden)."
"No pagination controls found. Assuming last page.": "Keine Paginierungssteuerung gefunden. Es wird von der letzten Seite ausgegangen."
_record_timing:
"Timing collector failed for key=%s operation=%s: %s": "Zeitmessung fehlgeschlagen für key=%s operation=%s: %s"
_allocate_selector_group_budgets:
"selector_count must be > 0": "selector_count muss > 0 sein"
web_find_first_available:
"selectors must contain at least one selector": "selectors muss mindestens einen Selektor enthalten"
attempt:
"No selector candidates executed.": "Keine Selektor-Kandidaten ausgeführt."
? "No HTML element found using selector group after trying %(count)d alternatives within %(timeout)s seconds. Last error: %(error)s"
: "Kein HTML-Element über Selektorgruppe gefunden, nachdem %(count)d Alternativen innerhalb von %(timeout)s Sekunden versucht wurden. Letzter Fehler: %(error)s"
close_browser_session:
"Closing Browser session...": "Schließe Browser-Sitzung..."
get_compatible_browser:
"Installed browser could not be detected": "Installierter Browser konnte nicht erkannt werden"
"Installed browser for OS %s could not be detected": "Installierter Browser für Betriebssystem %s konnte nicht erkannt werden"
web_open:
" => skipping, [%s] is already open": " => überspringe, [%s] ist bereits geöffnet"
" -> Opening [%s]...": " -> Öffne [%s]..."
web_request:
" -> HTTP %s [%s]...": " -> HTTP %s [%s]..."
_web_find_once:
"Unsupported selector type: %s": "Nicht unterstützter Selektor-Typ: %s"
_web_find_all_once:
"Unsupported selector type: %s": "Nicht unterstützter Selektor-Typ: %s"
diagnose_browser_issues:
"=== Browser Connection Diagnostics ===": "=== Browser-Verbindungsdiagnose ==="
"=== End Diagnostics ===": "=== Ende der Diagnose ==="
"(ok) Browser binary exists: %s": "(Ok) Browser-Binärdatei existiert: %s"
"(ok) Browser binary is executable": "(Ok) Browser-Binärdatei ist ausführbar"
"(ok) Auto-detected browser: %s": "(Ok) Automatisch erkannter Browser: %s"
"(ok) User data directory exists: %s": "(Ok) Benutzerdatenverzeichnis existiert: %s"
"(ok) User data directory is readable and writable": "(Ok) Benutzerdatenverzeichnis ist lesbar und beschreibbar"
"(ok) Remote debugging port is open": "(Ok) Remote-Debugging-Port ist offen"
"(fail) Browser binary not found: %s": "(Fehler) Browser-Binärdatei nicht gefunden: %s"
"(fail) Browser binary is not executable": "(Fehler) Browser-Binärdatei ist nicht ausführbar"
"(fail) No compatible browser found": "(Fehler) Kein kompatibler Browser gefunden"
"(fail) User data directory permissions issue": "(Fehler) Benutzerdatenverzeichnis-Berechtigungsproblem"
"(info) User data directory does not exist (will be created): %s": "(Info) Benutzerdatenverzeichnis existiert nicht (wird erstellt): %s"
"(info) Remote debugging port configured: %d": "(Info) Remote-Debugging-Port konfiguriert: %d"
"(info) Remote debugging port is not open": "(Info) Remote-Debugging-Port ist nicht offen"
"(warn) Unable to inspect browser processes: %s": "(Warnung) Browser-Prozesse konnten nicht überprüft werden: %s"
"(info) No browser processes currently running": "(Info) Derzeit keine Browser-Prozesse aktiv"
"(fail) Running as root - this can cause browser issues": "(Fehler) Läuft als Root - dies kann Browser-Probleme verursachen"
"(info) Found %d browser processes running": "(Info) %d Browser-Prozesse aktiv gefunden"
" - PID %d: %s (remote debugging enabled)": " - PID %d: %s (Remote-Debugging aktiviert)"
" - PID %d: %s (remote debugging NOT enabled)": " - PID %d: %s (Remote-Debugging NICHT aktiviert)"
"(ok) Remote debugging API accessible - Browser: %s": "(Ok) Remote-Debugging-API zugänglich - Browser: %s"
"(fail) Remote debugging port is open but API not accessible: %s": "(Fehler) Remote-Debugging-Port ist offen, aber API nicht zugänglich: %s"
" This might indicate a browser update issue or configuration problem": " Dies könnte auf ein Browser-Update-Problem oder Konfigurationsproblem hinweisen"
_validate_chrome_136_configuration:
" -> %s 136+ configuration validation failed: %s": " -> %s 136+ Konfigurationsvalidierung fehlgeschlagen: %s"
" -> %s 136+ configuration validation passed": " -> %s 136+ Konfigurationsvalidierung bestanden"
_validate_chrome_version_configuration:
" -> %s 136+ detected: %s": " -> %s 136+ erkannt: %s"
" -> %s version detected: %s (pre-136, no special validation required)": " -> %s-Version erkannt: %s (vor 136, keine besondere Validierung erforderlich)"
" -> Browser version detection failed, skipping validation: %s": " -> Browser-Versionserkennung fehlgeschlagen, Validierung wird übersprungen: %s"
" -> Unexpected error during browser version validation, skipping: %s": " -> Unerwarteter Fehler bei Browser-Versionsvalidierung, wird übersprungen: %s"
_diagnose_chrome_version_issues:
"(info) %s version from binary: %s (major: %d)": "(Info) %s-Version von Binärdatei: %s (Hauptversion: %d)"
"(info) %s version from remote debugging: %s (major: %d)": "(Info) %s-Version von Remote-Debugging: %s (Hauptversion: %d)"
"(info) %s 136+ detected - security validation required": "(Info) %s 136+ erkannt - Sicherheitsvalidierung erforderlich"
"(info) %s pre-136 detected - no special security requirements": "(Info) %s vor 136 erkannt - keine besonderen Sicherheitsanforderungen"
"(info) Remote %s 136+ detected - validating configuration": "(Info) Remote %s 136+ erkannt - validiere Konfiguration"
"(fail) %s 136+ configuration validation failed: %s": "(Fehler) %s 136+ Konfigurationsvalidierung fehlgeschlagen: %s"
"(ok) %s 136+ configuration validation passed": "(Ok) %s 136+ Konfigurationsvalidierung bestanden"
"(info) Chrome/Edge 136+ security changes require --user-data-dir for remote debugging": "(Info) Die Sicherheitsänderungen in Chrome/Edge 136+ erfordern --user-data-dir für Remote-Debugging"
" See: https://developer.chrome.com/blog/remote-debugging-port": " Siehe: https://developer.chrome.com/blog/remote-debugging-port"
" -> Browser version diagnostics failed: %s": " -> Browser-Versionsdiagnose fehlgeschlagen: %s"
" -> Unexpected error during browser version diagnostics: %s": " -> Unerwarteter Fehler bei Browser-Versionsdiagnose: %s"
" Solution: Add --user-data-dir=/path/to/directory to browser arguments": " Lösung: Fügen Sie --user-data-dir=/pfad/zum/verzeichnis zu Browser-Argumenten hinzu"
" And user_data_dir: \"/path/to/directory\" to your configuration": " Und user_data_dir: \"/pfad/zum/verzeichnis\" zu Ihrer Konfiguration"
#################################################
kleinanzeigen_bot/update_checker.py:
#################################################
_resolve_commitish:
"Could not resolve commit '%s': %s": "Konnte Commit '%s' nicht auflösen: %s"
check_for_updates:
"A new version is available: %s from %s UTC (current: %s from %s UTC, channel: %s)": "Eine neue Version ist verfügbar: %s vom %s UTC (aktuell: %s vom %s UTC, Kanal: %s)"
"Could not determine commit dates for comparison.": "Konnte die Commit-Datumsangaben für den Vergleich nicht ermitteln."
"Could not determine local commit hash.": "Konnte lokalen Commit-Hash nicht ermitteln."
"Could not determine local version.": "Konnte lokale Version nicht ermitteln."
"Could not determine release commit hash.": "Konnte Release-Commit-Hash nicht ermitteln."
"Could not get releases: %s": "Konnte Releases nicht abrufen: %s"
? "Release notes:\n%s"
: "Release-Notizen:\n%s"
"You are on the latest version: %s (compared to %s in channel %s)": "Sie verwenden die neueste Version: %s (verglichen mit %s im Kanal %s)"
"Latest release from GitHub is a prerelease, but 'latest' channel expects a stable release.": "Die neueste GitHub-Version ist eine Vorabversion, aber der 'latest'-Kanal erwartet eine stabile Version."
"No prerelease found for 'preview' channel.": "Keine Vorabversion für den 'preview'-Kanal gefunden."
"Unknown update channel: %s": "Unbekannter Update-Kanal: %s"
? "You are on a different commit than the release for channel '%s' (tag: %s). This may mean you are ahead, behind, or on a different branch. Local commit: %s (%s UTC), Release commit: %s (%s UTC)"
: "Sie befinden sich auf einem anderen Commit als das Release für Kanal '%s' (Tag: %s). Dies kann bedeuten, dass Sie voraus oder zurück liegen oder sich auf einem anderen Branch befinden. Lokaler Commit: %s (%s UTC), Release-Commit: %s (%s UTC)"
#################################################
kleinanzeigen_bot/model/config_model.py:
#################################################
_validate_config:
"strategy must be specified when auto_price_reduction is enabled": "strategy muss angegeben werden, wenn auto_price_reduction aktiviert ist"
"amount must be specified when auto_price_reduction is enabled": "amount muss angegeben werden, wenn auto_price_reduction aktiviert ist"
"min_price must be specified when auto_price_reduction is enabled": "min_price muss angegeben werden, wenn auto_price_reduction aktiviert ist"
"Percentage reduction amount must not exceed %s": "Prozentualer Reduktionsbetrag darf %s nicht überschreiten"
migrate_legacy_diagnostics_keys:
"Deprecated: 'login_detection_capture' is replaced by 'capture_on.login_detection'. Please update your config.": "Veraltet: 'login_detection_capture' wurde durch 'capture_on.login_detection' ersetzt. Bitte aktualisieren Sie Ihre Konfiguration."
"Deprecated: 'publish_error_capture' is replaced by 'capture_on.publish'. Please update your config.": "Veraltet: 'publish_error_capture' wurde durch 'capture_on.publish' ersetzt. Bitte aktualisieren Sie Ihre Konfiguration."
_validate_glob_pattern:
"must be a non-empty, non-blank glob pattern": "muss ein nicht-leeres, nicht nur aus Leerzeichen bestehendes Glob-Muster sein"
_validate_pause_requires_capture:
"pause_on_login_detection_failure requires capture_on.login_detection to be enabled": "pause_on_login_detection_failure erfordert, dass capture_on.login_detection aktiviert ist"
#################################################
kleinanzeigen_bot/model/ad_model.py:
#################################################
_validate_auto_price_reduction_constraints:
"price must be specified when auto_price_reduction is enabled": "price muss angegeben werden, wenn auto_price_reduction aktiviert ist"
"min_price must not exceed price": "min_price darf price nicht überschreiten"
_calculate_auto_price_internal:
"min_price must be specified when auto_price_reduction is enabled": "min_price muss angegeben werden, wenn auto_price_reduction aktiviert ist"
#################################################
kleinanzeigen_bot/model/update_check_state.py:
#################################################
_parse_timestamp:
"Invalid timestamp format in state file: %s": "Ungültiges Zeitstempel-Format in der Statusdatei: %s"
load:
"Failed to load update check state: %s": "Fehler beim Laden des Update-Prüfstatus: %s"
"Migrating update check state from version %d to %d": "Migriere Update-Prüfstatus von Version %d zu %d"
save:
"Failed to save update check state: %s": "Fehler beim Speichern des Update-Prüfstatus: %s"
"Permission denied when saving update check state to %s": "Keine Berechtigung zum Speichern des Update-Prüfstatus in %s"
should_check:
"Falling back to default interval: 1d (preview channel). Please fix your config to avoid this warning.": "Falle auf das Standardintervall zurück: 1 Tag (Vorschaukanal). Bitte korrigieren Sie Ihre Konfiguration, um diese Warnung zu vermeiden."
"Falling back to default interval: 7d (latest channel). Please fix your config to avoid this warning.": "Falle auf das Standardintervall zurück: 7 Tage (Stabiler Kanal). Bitte korrigieren Sie Ihre Konfiguration, um diese Warnung zu vermeiden."
"Interval is zero: %s. Minimum interval is 1d. Using default interval for this run.": "Intervall ist null: %s. Das Mindestintervall beträgt 1 Tag. Es wird das Standardintervall für diesen Durchlauf verwendet."
"Interval too long: %s. Maximum interval is 30d. Using default interval for this run.": "Intervall zu lang: %s. Das maximale Intervall beträgt 30 Tage. Es wird das Standardintervall für diesen Durchlauf verwendet."
"Interval too short: %s. Minimum interval is 1d. Using default interval for this run.": "Intervall zu kurz: %s. Das Mindestintervall beträgt 1 Tag. Es wird das Standardintervall für diesen Durchlauf verwendet."
"Invalid interval format or unsupported unit: %s. Using default interval for this run.": "Ungültiges Intervallformat oder nicht unterstützte Einheit: %s. Es wird das Standardintervall für diesen Durchlauf verwendet."
"Negative interval: %s. Minimum interval is 1d. Using default interval for this run.": "Negatives Intervall: %s. Das Mindestintervall beträgt 1 Tag. Es wird das Standardintervall für diesen Durchlauf verwendet."
#################################################
kleinanzeigen_bot/utils/diagnostics.py:
#################################################
_copy_log_sync:
"Log file not found for diagnostics copy: %s": "Logdatei nicht gefunden für Diagnosekopie: %s"
capture_diagnostics:
"Diagnostics screenshot capture failed: %s": "Diagnose-Screenshot-Erfassung fehlgeschlagen: %s"
"Diagnostics HTML capture failed: %s": "Diagnose-HTML-Erfassung fehlgeschlagen: %s"
"Diagnostics JSON capture failed: %s": "Diagnose-JSON-Erfassung fehlgeschlagen: %s"
"Diagnostics log copy failed: %s": "Diagnose-Log-Kopie fehlgeschlagen: %s"
"Diagnostics saved: %s": "Diagnosedaten gespeichert: %s"
"Diagnostics capture attempted but no artifacts were saved (all captures failed)": "Diagnoseerfassung versucht, aber keine Artefakte gespeichert (alle Erfassungen fehlgeschlagen)"
"Diagnostics capture failed: %s": "Diagnoseerfassung fehlgeschlagen: %s"
#################################################
kleinanzeigen_bot/utils/timing_collector.py:
#################################################
_load_existing_sessions:
"Unable to load timing collection data from %s: %s": "Zeitmessdaten aus %s konnten nicht geladen werden: %s"
flush:
"Failed to flush timing collection data: %s": "Zeitmessdaten konnten nicht gespeichert werden: %s"
#################################################
kleinanzeigen_bot/utils/xdg_paths.py:
#################################################
ensure_directory:
"Failed to create %s %s: %s": "Fehler beim Erstellen von %s %s: %s"
detect_installation_mode:
"Detected installation mode: %s": "Erkannter Installationsmodus: %s"
"No existing configuration (portable or system-wide) found": "Keine bestehende Konfiguration (portabel oder systemweit) gefunden"
prompt_installation_mode:
"Non-interactive mode detected, defaulting to portable installation": "Nicht-interaktiver Modus erkannt, Standard-Installation: portabel"
"Choose installation type:": "Installationstyp wählen:"
"[1] Portable (current directory)": "[1] Portabel (aktuelles Verzeichnis)"
"[2] User directories (per-user standard locations)": "[2] Benutzerverzeichnisse (pro Benutzer, standardisierte Pfade)"
"Enter 1 or 2: ": "1 oder 2 eingeben: "
"Defaulting to portable installation mode": "Standard-Installationsmodus: portabel"
"User selected installation mode: %s": "Benutzer hat Installationsmodus gewählt: %s"
"Invalid choice. Please enter 1 or 2.": "Ungültige Auswahl. Bitte 1 oder 2 eingeben."
resolve_workspace: {}
_format_hits:
"none": "keine"
_workspace_mode_resolution_error:
? "Cannot determine workspace mode for --config=%(config_file)s. Use --workspace-mode=portable or --workspace-mode=xdg.\nFor cleanup guidance, see: %(url)s"
: "Arbeitsmodus für --config=%(config_file)s konnte nicht bestimmt werden. Verwende --workspace-mode=portable oder --workspace-mode=xdg.\nHinweise zur Bereinigung: %(url)s"
"Portable footprint hits": "Gefundene portable Spuren"
"XDG footprint hits": "Gefundene XDG-Spuren"
"Detected both portable and XDG footprints.": "Sowohl portable als auch XDG-Spuren wurden gefunden."
"Detected neither portable nor XDG footprints.": "Weder portable noch XDG-Spuren wurden gefunden."
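The entries above pair printf-style English source messages with their German translations. A minimal sketch of how such a table could be applied at runtime (the `translate` helper and the inline two-entry table are hypothetical illustrations, not the bot's actual i18n API):

```python
# Minimal lookup sketch: English printf-style message -> German template.
# The table copies two entries from the translation file above; the
# translate() helper is hypothetical, not the bot's real API.
TRANSLATIONS = {
    "Detected installation mode: %s": "Erkannter Installationsmodus: %s",
    "Invalid timestamp format in state file: %s": "Ungültiges Zeitstempel-Format in der Statusdatei: %s",
}

def translate(message: str, *args: object) -> str:
    """Look up the German template, fall back to English, then apply %-formatting."""
    template = TRANSLATIONS.get(message, message)
    return template % args if args else template

print(translate("Detected installation mode: %s", "portable"))
# Erkannter Installationsmodus: portable
```

Because the placeholders (`%s`, `%d`) must survive translation unchanged, a lookup like this can only substitute arguments after the template has been selected.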


@@ -1,322 +0,0 @@
"""
SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
SPDX-License-Identifier: AGPL-3.0-or-later
SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""
import logging, os, platform, shutil, time
from collections.abc import Callable, Iterable
from typing import Any, Final, TypeVar
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException, TimeoutException, WebDriverException
from selenium.webdriver.common.by import By
from selenium.webdriver.chromium.options import ChromiumOptions
from selenium.webdriver.chromium.webdriver import ChromiumDriver
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.remote.webelement import WebElement
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import Select, WebDriverWait
import selenium_stealth
from .utils import ensure, pause, T
LOG:Final[logging.Logger] = logging.getLogger("kleinanzeigen_bot.selenium_mixin")
class BrowserConfig:
def __init__(self) -> None:
self.arguments:Iterable[str] = []
self.binary_location:str | None = None
self.extensions:Iterable[str] = []
self.use_private_window:bool = True
self.user_data_dir:str = ""
self.profile_name:str = ""
CHROMIUM_OPTIONS = TypeVar('CHROMIUM_OPTIONS', bound = ChromiumOptions) # pylint: disable=invalid-name
class SeleniumMixin:
def __init__(self) -> None:
os.environ["SE_AVOID_STATS"] = "true" # see https://www.selenium.dev/documentation/selenium_manager/
self.browser_config:Final[BrowserConfig] = BrowserConfig()
self.webdriver:WebDriver = None  # type: ignore[assignment] # set in create_webdriver_session()
def _init_browser_options(self, browser_options:CHROMIUM_OPTIONS) -> CHROMIUM_OPTIONS:
if self.browser_config.use_private_window:
if isinstance(browser_options, webdriver.EdgeOptions):
browser_options.add_argument("-inprivate")
else:
browser_options.add_argument("--incognito")
if self.browser_config.user_data_dir:
LOG.info(" -> Browser User Data Dir: %s", self.browser_config.user_data_dir)
browser_options.add_argument(f"--user-data-dir={self.browser_config.user_data_dir}")
if self.browser_config.profile_name:
LOG.info(" -> Browser Profile Name: %s", self.browser_config.profile_name)
browser_options.add_argument(f"--profile-directory={self.browser_config.profile_name}")
browser_options.add_argument("--disable-crash-reporter")
browser_options.add_argument("--no-first-run")
browser_options.add_argument("--no-service-autorun")
for chrome_option in self.browser_config.arguments:
LOG.info(" -> Custom chrome argument: %s", chrome_option)
browser_options.add_argument(chrome_option)
LOG.debug("Effective browser arguments: %s", browser_options.arguments)
for crx_extension in self.browser_config.extensions:
ensure(os.path.exists(crx_extension), f"Configured extension-file [{crx_extension}] does not exist.")
browser_options.add_extension(crx_extension)
LOG.debug("Effective browser extensions: %s", browser_options.extensions)
browser_options.add_experimental_option("excludeSwitches", ["enable-automation"])
browser_options.add_experimental_option("useAutomationExtension", False)
browser_options.add_experimental_option("prefs", {
"credentials_enable_service": False,
"profile.password_manager_enabled": False,
"profile.default_content_setting_values.notifications": 2, # 1 = allow, 2 = block browser notifications
"devtools.preferences.currentDockState": "\"bottom\""
})
if not LOG.isEnabledFor(logging.DEBUG):
browser_options.add_argument("--log-level=3") # INFO: 0, WARNING: 1, ERROR: 2, FATAL: 3
LOG.debug("Effective experimental options: %s", browser_options.experimental_options)
if self.browser_config.binary_location:
browser_options.binary_location = self.browser_config.binary_location
LOG.info(" -> Chrome binary location: %s", self.browser_config.binary_location)
return browser_options
def create_webdriver_session(self) -> None:
LOG.info("Creating WebDriver session...")
if self.browser_config.binary_location:
ensure(os.path.exists(self.browser_config.binary_location), f"Specified browser binary [{self.browser_config.binary_location}] does not exist.")
else:
self.browser_config.binary_location = self.get_compatible_browser()
if "edge" in self.browser_config.binary_location.lower():
os.environ["MSEDGEDRIVER_TELEMETRY_OPTOUT"] = "1" # https://docs.microsoft.com/en-us/microsoft-edge/privacy-whitepaper/#microsoft-edge-driver
browser_options = self._init_browser_options(webdriver.EdgeOptions())
browser_options.binary_location = self.browser_config.binary_location
self.webdriver = webdriver.Edge(options = browser_options)
else:
browser_options = self._init_browser_options(webdriver.ChromeOptions())
browser_options.binary_location = self.browser_config.binary_location
self.webdriver = webdriver.Chrome(options = browser_options)
LOG.info(" -> Chrome driver: %s", self.webdriver.service.path)
# workaround to support Edge, see https://github.com/diprajpatra/selenium-stealth/pull/25
selenium_stealth.Driver = ChromiumDriver
selenium_stealth.stealth(self.webdriver, # https://github.com/diprajpatra/selenium-stealth#args
languages = ("de-DE", "de", "en-US", "en"),
platform = "Win32",
fix_hairline = True,
)
LOG.info("New WebDriver session is: %s %s", self.webdriver.session_id, self.webdriver.command_executor._url) # pylint: disable=protected-access
def get_compatible_browser(self) -> str | None:
match platform.system():
case "Linux":
browser_paths = [
shutil.which("chromium"),
shutil.which("chromium-browser"),
shutil.which("google-chrome"),
shutil.which("microsoft-edge")
]
case "Darwin":
browser_paths = [
"/Applications/Chromium.app/Contents/MacOS/Chromium",
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
"/Applications/Microsoft Edge.app/Contents/MacOS/Microsoft Edge",
]
case "Windows":
browser_paths = [
os.environ.get("ProgramFiles", "C:\\Program Files") + r'\Microsoft\Edge\Application\msedge.exe',
os.environ.get("ProgramFiles(x86)", "C:\\Program Files (x86)") + r'\Microsoft\Edge\Application\msedge.exe',
os.environ["ProgramFiles"] + r'\Chromium\Application\chrome.exe',
os.environ["ProgramFiles(x86)"] + r'\Chromium\Application\chrome.exe',
os.environ["LOCALAPPDATA"] + r'\Chromium\Application\chrome.exe',
os.environ["ProgramFiles"] + r'\Chrome\Application\chrome.exe',
os.environ["ProgramFiles(x86)"] + r'\Chrome\Application\chrome.exe',
os.environ["LOCALAPPDATA"] + r'\Chrome\Application\chrome.exe',
shutil.which("msedge.exe"),
shutil.which("chromium.exe"),
shutil.which("chrome.exe")
]
case _ as os_name:
LOG.warning("Installed browser for OS [%s] could not be detected", os_name)
return None
for browser_path in browser_paths:
if browser_path and os.path.isfile(browser_path):
return browser_path
raise AssertionError("Installed browser could not be detected")
def web_await(self, condition: Callable[[WebDriver], T], timeout:float = 5, exception_on_timeout: Callable[[], Exception] | None = None) -> T:
"""
Blocks/waits until the given condition is met.
:param timeout: timeout in seconds
:raises TimeoutException: if element could not be found within time
"""
max_attempts = 2
for attempt in range(1, max_attempts + 1):
try:
return WebDriverWait(self.webdriver, timeout).until(condition) # type: ignore[no-any-return]
except TimeoutException as ex:
if exception_on_timeout:
raise exception_on_timeout() from ex
raise ex
except WebDriverException as ex:
# temporary workaround for:
# - https://groups.google.com/g/chromedriver-users/c/Z_CaHJTJnLw
# - https://bugs.chromium.org/p/chromedriver/issues/detail?id=4048
if ex.msg == "target frame detached" and attempt < max_attempts:
LOG.warning(ex)
else:
raise ex
raise AssertionError("Should never be reached.")
def web_click(self, selector_type:By, selector_value:str, timeout:float = 5) -> WebElement:
"""
:param timeout: timeout in seconds
:raises NoSuchElementException: if element could not be found within time
"""
elem = self.web_await(
EC.element_to_be_clickable((selector_type, selector_value)),
timeout,
lambda: NoSuchElementException(f"Element {selector_type}:{selector_value} not found or not clickable")
)
elem.click()
pause()
return elem
def web_execute(self, javascript:str) -> Any:
"""
Executes the given JavaScript code in the context of the current page.
:return: The command's JSON response
"""
return self.webdriver.execute_script(javascript)
def web_find(self, selector_type:By, selector_value:str, timeout:float = 5) -> WebElement:
"""
Locates an HTML element.
:param timeout: timeout in seconds
:raises NoSuchElementException: if element could not be found within time
"""
return self.web_await(
EC.presence_of_element_located((selector_type, selector_value)),
timeout,
lambda: NoSuchElementException(f"Element {selector_type}='{selector_value}' not found")
)
def web_input(self, selector_type:By, selector_value:str, text:str, timeout:float = 5) -> WebElement:
"""
Enters text into an HTML input field.
:param timeout: timeout in seconds
:raises NoSuchElementException: if element could not be found within time
"""
input_field = self.web_find(selector_type, selector_value, timeout)
input_field.clear()
input_field.send_keys(text)
pause()
return input_field
def web_open(self, url:str, timeout:float = 15, reload_if_already_open:bool = False) -> None:
"""
:param url: url to open in browser
:param timeout: timespan in seconds within which the page needs to be loaded
:param reload_if_already_open: if False does nothing if the URL is already open in the browser
:raises TimeoutException: if page did not open within given timespan
"""
LOG.debug(" -> Opening [%s]...", url)
if not reload_if_already_open and url == self.webdriver.current_url:
LOG.debug(" => skipping, [%s] is already open", url)
return
self.webdriver.get(url)
WebDriverWait(self.webdriver, timeout).until(lambda _: self.web_execute("return document.readyState") == "complete")
# pylint: disable=dangerous-default-value
def web_request(self, url:str, method:str = "GET", valid_response_codes:Iterable[int] = [200], headers:dict[str, str] | None = None) -> dict[str, Any]:
method = method.upper()
LOG.debug(" -> HTTP %s [%s]...", method, url)
response:dict[str, Any] = self.webdriver.execute_async_script(f"""
var callback = arguments[arguments.length - 1];
fetch("{url}", {{
method: "{method}",
redirect: "follow",
headers: {headers or {}}
}})
.then(response => response.text().then(responseText => {{
headers = {{}};
response.headers.forEach((v, k) => headers[k] = v);
callback({{
"statusCode": response.status,
"statusMessage": response.statusText,
"headers": headers,
"content": responseText
}})
}}))
""")
ensure(
response["statusCode"] in valid_response_codes,
f'Invalid response "{response["statusCode"]} {response["statusMessage"]}" received for HTTP {method} to {url}'
)
return response
# pylint: enable=dangerous-default-value
def web_scroll_page_down(self, scroll_length: int = 10, scroll_speed: int = 10000, scroll_back_top: bool = False) -> None:
"""
Smoothly scrolls the current web page down.
:param scroll_length: the length of a single scroll iteration, determines smoothness of scrolling, lower is smoother
:param scroll_speed: the speed of scrolling, higher is faster
:param scroll_back_top: whether to scroll the page back to the top after scrolling to the bottom
"""
current_y_pos = 0
bottom_y_pos: int = self.webdriver.execute_script('return document.body.scrollHeight;') # get bottom position by JS
while current_y_pos < bottom_y_pos: # scroll in steps until bottom reached
current_y_pos += scroll_length
self.webdriver.execute_script(f'window.scrollTo(0, {current_y_pos});') # scroll one step
time.sleep(scroll_length / scroll_speed)
if scroll_back_top: # scroll back to top in same style
while current_y_pos > 0:
current_y_pos -= scroll_length
self.webdriver.execute_script(f'window.scrollTo(0, {current_y_pos});')
time.sleep(scroll_length / scroll_speed / 2) # double speed
def web_select(self, selector_type:By, selector_value:str, selected_value:Any, timeout:float = 5) -> WebElement:
"""
Selects an <option/> of a <select/> HTML element.
:param timeout: timeout in seconds
:raises NoSuchElementException: if element could not be found within time
:raises UnexpectedTagNameException: if element is not a <select> element
"""
elem = self.web_await(
EC.element_to_be_clickable((selector_type, selector_value)),
timeout,
lambda: NoSuchElementException(f"Element {selector_type}='{selector_value}' not found or not clickable")
)
Select(elem).select_by_value(selected_value)
pause()
return elem
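`web_await` above retries once when Chromedriver raises the transient "target frame detached" error and re-raises anything else. The retry skeleton can be sketched in isolation (the `TransientError` class and `flaky_condition` below are stand-ins for `WebDriverException` and a Selenium wait condition, not real Selenium objects):

```python
# Sketch of web_await's retry loop: swallow a transient error once,
# re-raise it on the final attempt. TransientError stands in for
# WebDriverException with msg == "target frame detached".
class TransientError(Exception):
    pass

def await_with_retry(condition, max_attempts=2):
    for attempt in range(1, max_attempts + 1):
        try:
            return condition()
        except TransientError:
            if attempt >= max_attempts:
                raise  # failing on the last attempt is fatal
            # otherwise: loop and try once more
    raise AssertionError("Should never be reached.")

calls = {"n": 0}
def flaky_condition():
    calls["n"] += 1
    if calls["n"] == 1:
        raise TransientError("target frame detached")
    return "ok"

print(await_with_retry(flaky_condition))  # succeeds on the second attempt: ok
```

The trailing `AssertionError` mirrors the original method: every loop iteration either returns or raises, so it exists only to satisfy the type checker.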


@@ -0,0 +1,225 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
import logging
from datetime import datetime
from typing import TYPE_CHECKING
import colorama
import requests
if TYPE_CHECKING:
from pathlib import Path
from kleinanzeigen_bot.model.config_model import Config
try:
from kleinanzeigen_bot._version import __version__
except ImportError:
__version__ = "unknown"
from kleinanzeigen_bot.model.update_check_state import UpdateCheckState
logger = logging.getLogger(__name__)
colorama.init()
class UpdateChecker:
"""Checks for updates to the bot."""
def __init__(self, config:"Config", state_file:"Path") -> None:
"""Initialize the update checker.
Args:
config: The bot configuration.
state_file: Path to the update-check state JSON file.
"""
self.config = config
self.state_file = state_file
self.state = UpdateCheckState.load(self.state_file)
def get_local_version(self) -> str | None:
"""Get the local version of the bot.
Returns:
The local version string, or None if it cannot be determined.
"""
return __version__
def _request_timeout(self) -> float:
"""Return the effective timeout for HTTP calls."""
return self.config.timeouts.effective("update_check")
def _get_commit_hash(self, version:str) -> str | None:
"""Extract the commit hash from a version string.
Args:
version: The version string to extract the commit hash from.
Returns:
The commit hash, or None if it cannot be extracted.
"""
if "+" in version:
return version.split("+")[1]
return None
def _resolve_commitish(self, commitish:str) -> tuple[str | None, datetime | None]:
"""Resolve a commit-ish to a full commit hash and date.
Args:
commitish: The commit hash, tag, or branch.
Returns:
Tuple of (full commit hash, commit date), or (None, None) if it cannot be determined.
"""
try:
response = requests.get(
f"https://api.github.com/repos/Second-Hand-Friends/kleinanzeigen-bot/commits/{commitish}",
timeout = self._request_timeout(),
)
response.raise_for_status()
data = response.json()
if not isinstance(data, dict):
return None, None
commit_date = None
if "commit" in data and "author" in data["commit"] and "date" in data["commit"]["author"]:
commit_date = datetime.fromisoformat(data["commit"]["author"]["date"].replace("Z", "+00:00"))
sha = data.get("sha")
commit_hash = str(sha) if sha else None
return commit_hash, commit_date
except Exception as e:
logger.warning("Could not resolve commit '%s': %s", commitish, e)
return None, None
def _get_short_commit_hash(self, commit:str) -> str:
"""Get the short version of a commit hash.
Args:
commit: The full commit hash.
Returns:
The short commit hash (first 7 characters).
"""
return commit[:7]
def _commits_match(self, local_commit:str, release_commit:str) -> bool:
"""Determine whether two commits refer to the same hash.
This accounts for short vs. full hashes (e.g. 7 chars vs. 40 chars).
"""
local_commit = local_commit.strip()
release_commit = release_commit.strip()
if local_commit == release_commit:
return True
if len(local_commit) < len(release_commit) and release_commit.startswith(local_commit):
return True
return len(release_commit) < len(local_commit) and local_commit.startswith(release_commit)
def check_for_updates(self, *, skip_interval_check:bool = False) -> None:
"""Check for updates to the bot.
Args:
skip_interval_check: If True, bypass the interval check and force an update check.
"""
if not self.config.update_check.enabled:
return
# Check if we should perform an update check based on the interval
if not skip_interval_check and not self.state.should_check(self.config.update_check.interval, self.config.update_check.channel):
return
local_version = self.get_local_version()
if not local_version:
logger.warning("Could not determine local version.")
return
local_commitish = self._get_commit_hash(local_version)
if not local_commitish:
logger.warning("Could not determine local commit hash.")
return
# --- Fetch release info from GitHub using correct endpoint per channel ---
try:
if self.config.update_check.channel == "latest":
# Use /releases/latest endpoint for stable releases
response = requests.get("https://api.github.com/repos/Second-Hand-Friends/kleinanzeigen-bot/releases/latest", timeout = self._request_timeout())
response.raise_for_status()
release = response.json()
# Defensive: ensure it's not a prerelease
if release.get("prerelease", False):
logger.warning("Latest release from GitHub is a prerelease, but 'latest' channel expects a stable release.")
return
elif self.config.update_check.channel == "preview":
# Use /releases endpoint and select the most recent prerelease
response = requests.get("https://api.github.com/repos/Second-Hand-Friends/kleinanzeigen-bot/releases", timeout = self._request_timeout())
response.raise_for_status()
releases = response.json()
# Find the most recent prerelease
release = next((r for r in releases if r.get("prerelease", False) and not r.get("draft", False)), None)
if not release:
logger.warning("No prerelease found for 'preview' channel.")
return
else:
logger.warning("Unknown update channel: %s", self.config.update_check.channel)
return
except Exception as e:
logger.warning("Could not get releases: %s", e)
return
# Get release commit-ish (use tag name to avoid branch tip drift)
release_commitish = release.get("tag_name")
if not release_commitish:
release_commitish = release.get("target_commitish")
if not release_commitish:
logger.warning("Could not determine release commit hash.")
return
# Resolve commit hashes and dates for comparison
local_commit, local_commit_date = self._resolve_commitish(local_commitish)
release_commit, release_commit_date = self._resolve_commitish(str(release_commitish))
if not local_commit or not release_commit or not local_commit_date or not release_commit_date:
logger.warning("Could not determine commit dates for comparison.")
return
if self._commits_match(local_commit, release_commit):
# If the commit hashes are identical, we are on the latest version. Do not proceed to other checks.
logger.info(
"You are on the latest version: %s (compared to %s in channel %s)",
local_version,
self._get_short_commit_hash(release_commit),
self.config.update_check.channel,
)
self.state.update_last_check()
self.state.save(self.state_file)
return
# All commit dates are in UTC; append ' UTC' to timestamps in logs for clarity.
if local_commit_date < release_commit_date:
logger.warning(
"A new version is available: %s from %s UTC (current: %s from %s UTC, channel: %s)",
self._get_short_commit_hash(release_commit),
release_commit_date.strftime("%Y-%m-%d %H:%M:%S"),
local_version,
local_commit_date.strftime("%Y-%m-%d %H:%M:%S"),
self.config.update_check.channel,
)
if release.get("body"):
logger.info("Release notes:\n%s", release["body"])
else:
logger.info(
"You are on a different commit than the release for channel '%s' (tag: %s). This may mean you are ahead, behind, or on a different branch. "
"Local commit: %s (%s UTC), Release commit: %s (%s UTC)",
self.config.update_check.channel,
release.get("tag_name", "unknown"),
self._get_short_commit_hash(local_commit),
local_commit_date.strftime("%Y-%m-%d %H:%M:%S"),
self._get_short_commit_hash(release_commit),
release_commit_date.strftime("%Y-%m-%d %H:%M:%S"),
)
# Update the last check time
self.state.update_last_check()
self.state.save(self.state_file)


@@ -1,291 +0,0 @@
"""
SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
SPDX-License-Identifier: AGPL-3.0-or-later
SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""
import copy, decimal, json, logging, os, re, secrets, sys, traceback, time
from importlib.resources import read_text as get_resource_as_string
from collections.abc import Callable, Sized
from datetime import datetime
from types import FrameType, ModuleType, TracebackType
from typing import Any, Final, TypeVar
import coloredlogs
from ruamel.yaml import YAML
LOG_ROOT:Final[logging.Logger] = logging.getLogger()
LOG:Final[logging.Logger] = logging.getLogger("kleinanzeigen_bot.utils")
# https://mypy.readthedocs.io/en/stable/generics.html#generic-functions
T = TypeVar('T')
def abspath(relative_path:str, relative_to:str | None = None) -> str:
"""
Makes a given relative path absolute based on another file/folder
"""
if os.path.isabs(relative_path):
return relative_path
if not relative_to:
return os.path.abspath(relative_path)
if os.path.isfile(relative_to):
relative_to = os.path.dirname(relative_to)
return os.path.normpath(os.path.join(relative_to, relative_path))
def ensure(condition:Any | bool | Callable[[], bool], error_message:str, timeout:float = 5, poll_frequency:float = 0.5) -> None:
"""
:param timeout: timespan in seconds within which the condition must become `True`, default is 5 seconds
:param poll_frequency: sleep interval between calls in seconds, default is 0.5 seconds
:raises AssertionError: if the condition did not become `True` within the given timespan
"""
if not isinstance(condition, Callable): # type: ignore[arg-type] # https://github.com/python/mypy/issues/6864
if condition:
return
raise AssertionError(error_message)
if timeout < 0:
raise AssertionError("[timeout] must be >= 0")
if poll_frequency < 0:
raise AssertionError("[poll_frequency] must be >= 0")
start_at = time.time()
while not condition(): # type: ignore[operator]
elapsed = time.time() - start_at
if elapsed >= timeout:
raise AssertionError(error_message)
time.sleep(poll_frequency)
def is_frozen() -> bool:
"""
>>> is_frozen()
False
"""
return getattr(sys, "frozen", False)
def apply_defaults(
target:dict[Any, Any],
defaults:dict[Any, Any],
ignore:Callable[[Any, Any], bool] = lambda _k, _v: False,
override:Callable[[Any, Any], bool] = lambda _k, _v: False
) -> dict[Any, Any]:
"""
>>> apply_defaults({}, {"foo": "bar"})
{'foo': 'bar'}
>>> apply_defaults({"foo": "foo"}, {"foo": "bar"})
{'foo': 'foo'}
>>> apply_defaults({"foo": ""}, {"foo": "bar"})
{'foo': ''}
>>> apply_defaults({}, {"foo": "bar"}, ignore = lambda k, _: k == "foo")
{}
>>> apply_defaults({"foo": ""}, {"foo": "bar"}, override = lambda _, v: v == "")
{'foo': 'bar'}
>>> apply_defaults({"foo": None}, {"foo": "bar"}, override = lambda _, v: v == "")
{'foo': None}
"""
for key, default_value in defaults.items():
if key in target:
if isinstance(target[key], dict) and isinstance(default_value, dict):
apply_defaults(target[key], default_value, ignore = ignore)
elif override(key, target[key]):
target[key] = copy.deepcopy(default_value)
elif not ignore(key, default_value):
target[key] = copy.deepcopy(default_value)
return target
def safe_get(a_map:dict[Any, Any], *keys:str) -> Any:
"""
>>> safe_get({"foo": {}}, "foo", "bar") is None
True
>>> safe_get({"foo": {"bar": "some_value"}}, "foo", "bar")
'some_value'
"""
if a_map:
for key in keys:
try:
a_map = a_map[key]
except (KeyError, TypeError):
return None
return a_map
def configure_console_logging() -> None:
stdout_log = logging.StreamHandler(sys.stderr)
stdout_log.setLevel(logging.DEBUG)
stdout_log.setFormatter(coloredlogs.ColoredFormatter("[%(levelname)s] %(message)s"))
stdout_log.addFilter(type("", (logging.Filter,), {
"filter": lambda rec: rec.levelno <= logging.INFO
}))
LOG_ROOT.addHandler(stdout_log)
stderr_log = logging.StreamHandler(sys.stderr)
stderr_log.setLevel(logging.WARNING)
stderr_log.setFormatter(coloredlogs.ColoredFormatter("[%(levelname)s] %(message)s"))
LOG_ROOT.addHandler(stderr_log)
def on_exception(ex_type:type[BaseException], ex_value:Any, ex_traceback:TracebackType | None) -> None:
if issubclass(ex_type, KeyboardInterrupt):
sys.__excepthook__(ex_type, ex_value, ex_traceback)
elif LOG.isEnabledFor(logging.DEBUG) or isinstance(ex_value, (AttributeError, ImportError, NameError, TypeError)):
LOG.error("".join(traceback.format_exception(ex_type, ex_value, ex_traceback)))
elif isinstance(ex_value, AssertionError):
LOG.error(ex_value)
else:
LOG.error("%s: %s", ex_type.__name__, ex_value)
def on_exit() -> None:
for handler in LOG_ROOT.handlers:
handler.flush()
def on_sigint(_sig:int, _frame:FrameType | None) -> None:
LOG.warning("Aborted on user request.")
sys.exit(0)
def pause(min_ms:int = 200, max_ms:int = 2000) -> None:
duration = max_ms <= min_ms and min_ms or secrets.randbelow(max_ms - min_ms) + min_ms
LOG.log(logging.INFO if duration > 1500 else logging.DEBUG, " ... pausing for %d ms ...", duration)
time.sleep(duration / 1000)
def pluralize(noun:str, count:int | Sized, prefix_with_count:bool = True) -> str:
"""
>>> pluralize("field", 1)
'1 field'
>>> pluralize("field", 2)
'2 fields'
>>> pluralize("field", 2, prefix_with_count = False)
'fields'
"""
if isinstance(count, Sized):
count = len(count)
prefix = f"{count} " if prefix_with_count else ""
if count == 1:
return f"{prefix}{noun}"
if noun.endswith('s') or noun.endswith('sh') or noun.endswith('ch') or noun.endswith('x') or noun.endswith('z'):
return f"{prefix}{noun}es"
if noun.endswith('y'):
return f"{prefix}{noun[:-1]}ies"
return f"{prefix}{noun}s"
def load_dict(filepath:str, content_label:str = "") -> dict[str, Any]:
    """
    :raises FileNotFoundError
    """
    data = load_dict_if_exists(filepath, content_label)
    if data is None:
        raise FileNotFoundError(filepath)
    return data


def load_dict_if_exists(filepath:str, content_label:str = "") -> dict[str, Any] | None:
    filepath = os.path.abspath(filepath)
    LOG.info("Loading %s[%s]...", content_label + " from " if content_label else "", filepath)
    _, file_ext = os.path.splitext(filepath)
    if file_ext not in (".json", ".yaml", ".yml"):
        raise ValueError(f'Unsupported file type. The file name "{filepath}" must end with *.json, *.yaml, or *.yml')
    if not os.path.exists(filepath):
        return None
    with open(filepath, encoding = "utf-8") as file:
        return json.load(file) if filepath.endswith(".json") else YAML().load(file)  # type: ignore[no-any-return] # mypy


def load_dict_from_module(module:ModuleType, filename:str, content_label:str = "") -> dict[str, Any]:
    """
    :raises FileNotFoundError
    """
    LOG.debug("Loading %s[%s.%s]...", content_label + " from " if content_label else "", module.__name__, filename)
    _, file_ext = os.path.splitext(filename)
    if file_ext not in (".json", ".yaml", ".yml"):
        raise ValueError(f'Unsupported file type. The file name "{filename}" must end with *.json, *.yaml, or *.yml')
    content = get_resource_as_string(module, filename)  # pylint: disable=deprecated-method
    return json.loads(content) if filename.endswith(".json") else YAML().load(content)  # type: ignore[no-any-return] # mypy
def save_dict(filepath:str, content:dict[str, Any]) -> None:
    filepath = os.path.abspath(filepath)
    LOG.info("Saving [%s]...", filepath)
    with open(filepath, "w", encoding = "utf-8") as file:
        if filepath.endswith(".json"):
            file.write(json.dumps(content, indent = 2, ensure_ascii = False))
        else:
            yaml = YAML()
            yaml.indent(mapping = 2, sequence = 4, offset = 2)
            yaml.allow_duplicate_keys = False
            yaml.explicit_start = False
            yaml.dump(content, file)
def parse_decimal(number:float | int | str) -> decimal.Decimal:
    """
    >>> parse_decimal(5)
    Decimal('5')
    >>> parse_decimal(5.5)
    Decimal('5.5')
    >>> parse_decimal("5.5")
    Decimal('5.5')
    >>> parse_decimal("5,5")
    Decimal('5.5')
    >>> parse_decimal("1.005,5")
    Decimal('1005.5')
    >>> parse_decimal("1,005.5")
    Decimal('1005.5')
    """
    try:
        return decimal.Decimal(number)
    except decimal.InvalidOperation as ex:
        # Treat the last "."/"," group as the fractional part and strip all other separators
        parts = re.split("[.,]", str(number))
        try:
            return decimal.Decimal("".join(parts[:-1]) + "." + parts[-1])
        except decimal.InvalidOperation:
            raise decimal.DecimalException(f"Invalid number format: {number}") from ex


def parse_datetime(date:datetime | str | None) -> datetime | None:
    """
    >>> parse_datetime(datetime(2020, 1, 1, 0, 0))
    datetime.datetime(2020, 1, 1, 0, 0)
    >>> parse_datetime("2020-01-01T00:00:00")
    datetime.datetime(2020, 1, 1, 0, 0)
    >>> parse_datetime(None)
    """
    if date is None:
        return None
    if isinstance(date, datetime):
        return date
    return datetime.fromisoformat(date)
def extract_ad_id_from_ad_link(url:str) -> int:
    """
    Extracts the ID of an ad, given by its reference link.

    :param url: the URL of the ad page
    :return: the ad ID, a (ten-digit) integer number
    """
    num_part = url.split("/")[-1]  # suffix of the URL
    id_part = num_part.split("-")[0]
    try:
        return int(id_part)
    except ValueError:
        LOG.error("The ad ID could not be extracted from the given ad reference!")
        return -1
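The extraction above is plain string slicing on the last path segment of the URL. A minimal standalone sketch of the same logic (the URL below is a made-up example, not a real ad):

```python
def extract_ad_id(url: str) -> int:
    """Return the leading numeric part of the URL's last path segment, or -1."""
    id_part = url.split("/")[-1].split("-")[0]
    try:
        return int(id_part)
    except ValueError:
        return -1

# The last segment "1234567890-261-3331" yields the ID 1234567890
print(extract_ad_id("https://www.kleinanzeigen.de/s-anzeige/example-item/1234567890-261-3331"))
```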


@@ -0,0 +1,3 @@
"""
This module contains generic, reusable code.
"""


@@ -0,0 +1,263 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import json
import re
import subprocess # noqa: S404
import urllib.error
import urllib.request
from typing import Any, Final
from . import loggers
LOG:Final[loggers.Logger] = loggers.get_logger(__name__)
# Chrome 136 was released in March 2025 and introduced security changes
CHROME_136_VERSION = 136
class ChromeVersionInfo:
    """Information about a Chrome browser version."""

    def __init__(self, version_string:str, major_version:int, browser_name:str = "Unknown") -> None:
        self.version_string = version_string
        self.major_version = major_version
        self.browser_name = browser_name

    @property
    def is_chrome_136_plus(self) -> bool:
        """Check if this is Chrome version 136 or later."""
        return self.major_version >= CHROME_136_VERSION

    def __str__(self) -> str:
        return f"{self.browser_name} {self.version_string} (major: {self.major_version})"
def parse_version_string(version_string:str) -> int:
    """
    Parse a Chrome version string and extract the major version number.

    Args:
        version_string: Version string like "136.0.6778.0" or "136.0.6778.0 (Developer Build)"

    Returns:
        Major version number (e.g., 136)

    Raises:
        ValueError: If the version string cannot be parsed
    """
    # Extract the version number from strings like:
    #   "136.0.6778.0"
    #   "136.0.6778.0 (Developer Build)"
    #   "136.0.6778.0 (Official Build) (x86_64)"
    #   "Google Chrome 136.0.6778.0"
    #   "Microsoft Edge 136.0.6778.0"
    #   "Chromium 136.0.6778.0"
    match = re.search(r"(\d+)\.\d+\.\d+\.\d+", version_string)
    if not match:
        raise ValueError(f"Could not parse version string: {version_string}")
    return int(match.group(1))
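The regex above anchors on the four-component Chrome version format, so any vendor prefix or build suffix is ignored. A self-contained demonstration:

```python
import re

def major_version(version_string: str) -> int:
    """Extract the major version from a 4-component Chrome-style version string."""
    match = re.search(r"(\d+)\.\d+\.\d+\.\d+", version_string)
    if not match:
        raise ValueError(f"Could not parse version string: {version_string}")
    return int(match.group(1))

print(major_version("Google Chrome 136.0.6778.0"))              # vendor prefix is ignored
print(major_version("136.0.6778.0 (Official Build) (x86_64)"))  # build suffix is ignored
```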
def _normalize_browser_name(browser_name:str) -> str:
    """
    Normalize a browser name for consistent detection.

    Args:
        browser_name: Raw browser name from detection

    Returns:
        Normalized browser name ("Edge", "Chromium", or "Chrome")
    """
    browser_name_lower = browser_name.lower()
    if "edg" in browser_name_lower:  # matches both "edge" and "edg"
        return "Edge"
    if "chromium" in browser_name_lower:
        return "Chromium"
    return "Chrome"
def detect_chrome_version_from_binary(binary_path:str, *, timeout:float | None = None) -> ChromeVersionInfo | None:
    """
    Detect the Chrome version by running the browser binary.

    Args:
        binary_path: Path to the Chrome binary
        timeout: Optional timeout (seconds) for the subprocess call

    Returns:
        ChromeVersionInfo if successful, None if detection fails
    """
    effective_timeout = timeout if timeout is not None else 10.0
    try:
        # Run the browser with the --version flag
        result = subprocess.run(  # noqa: S603
            [binary_path, "--version"],
            check = False,
            capture_output = True,
            text = True,
            timeout = effective_timeout
        )
        if result.returncode != 0:
            LOG.debug("Browser version command failed: %s", result.stderr)
            return None

        output = result.stdout.strip()
        major_version = parse_version_string(output)

        # Extract just the version number for version_string
        version_match = re.search(r"(\d+\.\d+\.\d+\.\d+)", output)
        version_string = version_match.group(1) if version_match else output

        # Determine the browser name from the binary path
        browser_name = _normalize_browser_name(binary_path)

        return ChromeVersionInfo(version_string, major_version, browser_name)
    except subprocess.TimeoutExpired:
        LOG.debug("Browser version command timed out after %.1fs", effective_timeout)
        return None
    except (subprocess.SubprocessError, ValueError) as e:
        LOG.debug("Failed to detect browser version: %s", str(e))
        return None
def detect_chrome_version_from_remote_debugging(host:str = "127.0.0.1", port:int = 9222, *, timeout:float | None = None) -> ChromeVersionInfo | None:
    """
    Detect the Chrome version via the remote debugging API.

    Args:
        host: Remote debugging host
        port: Remote debugging port
        timeout: Optional timeout (seconds) for the HTTP request

    Returns:
        ChromeVersionInfo if successful, None if detection fails
    """
    effective_timeout = timeout if timeout is not None else 5.0
    try:
        # Query the remote debugging API
        url = f"http://{host}:{port}/json/version"
        with urllib.request.urlopen(url, timeout = effective_timeout) as response:  # noqa: S310
            version_data = json.loads(response.read().decode())

        # Extract the version information
        user_agent = version_data.get("User-Agent", "")
        browser_name = _normalize_browser_name(version_data.get("Browser", "Unknown"))

        # Parse the version from the User-Agent string, e.g.:
        # "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.6778.0 Safari/537.36"
        match = re.search(r"Chrome/(\d+)\.\d+\.\d+\.\d+", user_agent)
        if not match:
            LOG.debug("Could not parse Chrome version from User-Agent: %s", user_agent)
            return None

        major_version = int(match.group(1))
        version_string = match.group(0).replace("Chrome/", "")
        return ChromeVersionInfo(version_string, major_version, browser_name)
    except urllib.error.URLError as e:
        LOG.debug("Remote debugging API not accessible: %s", e)
        return None
    except json.JSONDecodeError as e:
        LOG.debug("Invalid JSON response from remote debugging API: %s", e)
        return None
    except Exception as e:
        LOG.debug("Failed to detect browser version from remote debugging: %s", str(e))
        return None
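The User-Agent parsing step above can be exercised in isolation, without a running browser:

```python
import re

# Example User-Agent string as returned by the /json/version endpoint
ua = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/136.0.6778.0 Safari/537.36")

match = re.search(r"Chrome/(\d+)\.\d+\.\d+\.\d+", ua)
assert match is not None
major = int(match.group(1))                       # major version as int
version = match.group(0).replace("Chrome/", "")   # full 4-component version
print(major, version)
```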
def validate_chrome_136_configuration(browser_arguments:list[str], user_data_dir:str | None) -> tuple[bool, str]:
    """
    Validate the configuration against Chrome/Edge 136+ security requirements.

    Chrome/Edge 136+ requires --user-data-dir to be specified for security reasons.

    Args:
        browser_arguments: List of browser arguments
        user_data_dir: User data directory configuration

    Returns:
        Tuple of (is_valid, error_message)
    """
    # Check if user-data-dir is specified in the arguments
    has_user_data_dir_arg = any(
        arg.startswith("--user-data-dir=")
        for arg in browser_arguments
    )

    # Check if user_data_dir is configured
    has_user_data_dir_config = user_data_dir is not None and user_data_dir.strip()

    if not has_user_data_dir_arg and not has_user_data_dir_config:
        return False, (
            "Chrome/Edge 136+ requires --user-data-dir to be specified. "
            "Add --user-data-dir=/path/to/directory to your browser arguments and "
            'user_data_dir: "/path/to/directory" to your configuration.'
        )
    return True, ""
def get_chrome_version_diagnostic_info(
    binary_path:str | None = None,
    remote_host:str = "127.0.0.1",
    remote_port:int | None = None,
    *,
    remote_timeout:float | None = None,
    binary_timeout:float | None = None
) -> dict[str, Any]:
    """
    Get comprehensive Chrome version diagnostic information.

    Args:
        binary_path: Path to the Chrome binary (optional)
        remote_host: Remote debugging host
        remote_port: Remote debugging port (optional)
        remote_timeout: Timeout for remote debugging detection
        binary_timeout: Timeout for binary detection

    Returns:
        Dictionary with diagnostic information
    """
    diagnostic_info:dict[str, Any] = {
        "binary_detection": None,
        "remote_detection": None,
        "chrome_136_plus_detected": False,
        "configuration_valid": True,
        "recommendations": []
    }

    # Try binary detection
    if binary_path:
        version_info = detect_chrome_version_from_binary(binary_path, timeout = binary_timeout)
        if version_info:
            diagnostic_info["binary_detection"] = {
                "version_string": version_info.version_string,
                "major_version": version_info.major_version,
                "browser_name": version_info.browser_name,
                "is_chrome_136_plus": version_info.is_chrome_136_plus
            }
            diagnostic_info["chrome_136_plus_detected"] = version_info.is_chrome_136_plus

    # Try remote debugging detection
    if remote_port:
        version_info = detect_chrome_version_from_remote_debugging(remote_host, remote_port, timeout = remote_timeout)
        if version_info:
            diagnostic_info["remote_detection"] = {
                "version_string": version_info.version_string,
                "major_version": version_info.major_version,
                "browser_name": version_info.browser_name,
                "is_chrome_136_plus": version_info.is_chrome_136_plus
            }
            diagnostic_info["chrome_136_plus_detected"] = version_info.is_chrome_136_plus

    # Add recommendations based on the detected version
    if diagnostic_info["chrome_136_plus_detected"]:
        diagnostic_info["recommendations"].append(
            "Chrome 136+ detected - ensure --user-data-dir is configured for remote debugging"
        )
    return diagnostic_info


@@ -0,0 +1,135 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import asyncio, json, re, secrets, shutil # isort: skip
from pathlib import Path
from typing import Any, Final
from kleinanzeigen_bot.utils import loggers, misc
LOG:Final[loggers.Logger] = loggers.get_logger(__name__)
class CaptureResult:
    """Result of a diagnostics capture attempt."""

    def __init__(self) -> None:
        self.saved_artifacts:list[Path] = []

    def add_saved(self, path:Path) -> None:
        """Record a successfully saved artifact."""
        self.saved_artifacts.append(path)

    def has_any(self) -> bool:
        """Check whether any artifacts were saved."""
        return bool(self.saved_artifacts)
def _write_json_sync(json_path:Path, json_payload:dict[str, Any]) -> None:
    """Synchronous helper to write JSON to a file."""
    with json_path.open("w", encoding = "utf-8") as handle:
        json.dump(json_payload, handle, indent = 2, default = str)
        handle.write("\n")


def _copy_log_sync(log_file_path:str, log_path:Path) -> bool:
    """Synchronous helper to copy the log file. Returns True if the copy succeeded."""
    log_source = Path(log_file_path)
    if not log_source.exists():
        LOG.warning("Log file not found for diagnostics copy: %s", log_file_path)
        return False
    loggers.flush_all_handlers()
    shutil.copy2(log_source, log_path)
    return True
async def capture_diagnostics(
    *,
    output_dir:Path,
    base_prefix:str,
    attempt:int | None = None,
    subject:str | None = None,
    page:Any | None = None,
    json_payload:dict[str, Any] | None = None,
    log_file_path:str | None = None,
    copy_log:bool = False,
) -> CaptureResult:
    """Capture diagnostics artifacts for a given operation.

    Args:
        output_dir: The output directory for diagnostics artifacts
        base_prefix: Base filename prefix (e.g., 'login_detection_unknown', 'publish_error')
        attempt: Optional attempt number for retry operations
        subject: Optional subject identifier (e.g., ad token)
        page: Optional page object with save_screenshot and get_content methods
        json_payload: Optional JSON data to save
        log_file_path: Optional log file path to copy
        copy_log: Whether to copy the log file

    Returns:
        CaptureResult containing the list of successfully saved artifacts
    """
    result = CaptureResult()
    try:
        await asyncio.to_thread(output_dir.mkdir, parents = True, exist_ok = True)

        ts = misc.now().strftime("%Y%m%dT%H%M%S")
        suffix = secrets.token_hex(4)
        base = f"{base_prefix}_{ts}_{suffix}"
        if attempt is not None:
            base = f"{base}_attempt{attempt}"
        if subject:
            safe_subject = re.sub(r"[^A-Za-z0-9_-]", "_", subject)
            base = f"{base}_{safe_subject}"

        screenshot_path = output_dir / f"{base}.png"
        html_path = output_dir / f"{base}.html"
        json_path = output_dir / f"{base}.json"
        log_path = output_dir / f"{base}.log"

        if page:
            try:
                await page.save_screenshot(str(screenshot_path))
                result.add_saved(screenshot_path)
            except Exception as exc:  # noqa: BLE001
                LOG.debug("Diagnostics screenshot capture failed: %s", exc)
            try:
                html = await page.get_content()
                await asyncio.to_thread(html_path.write_text, html, encoding = "utf-8")
                result.add_saved(html_path)
            except Exception as exc:  # noqa: BLE001
                LOG.debug("Diagnostics HTML capture failed: %s", exc)

        if json_payload is not None:
            try:
                await asyncio.to_thread(_write_json_sync, json_path, json_payload)
                result.add_saved(json_path)
            except Exception as exc:  # noqa: BLE001
                LOG.debug("Diagnostics JSON capture failed: %s", exc)

        if copy_log and log_file_path:
            try:
                copy_succeeded = await asyncio.to_thread(_copy_log_sync, log_file_path, log_path)
                if copy_succeeded:
                    result.add_saved(log_path)
            except Exception as exc:  # noqa: BLE001
                LOG.debug("Diagnostics log copy failed: %s", exc)

        # Determine if any capture was actually requested
        capture_requested = page is not None or json_payload is not None or (copy_log and log_file_path)
        if result.has_any():
            artifacts_str = " ".join(map(str, result.saved_artifacts))
            LOG.info("Diagnostics saved: %s", artifacts_str)
        elif capture_requested:
            LOG.warning("Diagnostics capture attempted but no artifacts were saved (all captures failed)")
        else:
            LOG.debug("No diagnostics capture requested")
    except Exception as exc:  # noqa: BLE001
        LOG.debug("Diagnostics capture failed: %s", exc)
    return result


@@ -0,0 +1,364 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import copy, json, os, unicodedata # isort: skip
from collections import defaultdict
from collections.abc import Callable
from gettext import gettext as _
from importlib.resources import read_text as get_resource_as_string
from pathlib import Path
from types import ModuleType
from typing import Any, Final, TypeVar, cast, get_origin
from ruamel.yaml import YAML
from . import files, loggers # pylint: disable=cyclic-import
LOG:Final[loggers.Logger] = loggers.get_logger(__name__)
# https://mypy.readthedocs.io/en/stable/generics.html#generic-functions
K = TypeVar("K")
V = TypeVar("V")
def apply_defaults(
    target:dict[Any, Any],
    defaults:dict[Any, Any],
    ignore:Callable[[Any, Any], bool] = lambda _k, _v: False,
    override:Callable[[Any, Any], bool] = lambda _k, _v: False,
) -> dict[Any, Any]:
    """
    >>> apply_defaults({}, {'a': 'b'})
    {'a': 'b'}
    >>> apply_defaults({'a': 'b'}, {'a': 'c'})
    {'a': 'b'}
    >>> apply_defaults({'a': ''}, {'a': 'b'})
    {'a': ''}
    >>> apply_defaults({}, {'a': 'b'}, ignore = lambda k, _: k == 'a')
    {}
    >>> apply_defaults({'a': ''}, {'a': 'b'}, override = lambda _, v: v == '')
    {'a': 'b'}
    >>> apply_defaults({'a': None}, {'a': 'b'}, override = lambda _, v: v == '')
    {'a': None}
    >>> apply_defaults({'a': {'x': 1}}, {'a': {'x': 0, 'y': 2}})
    {'a': {'x': 1, 'y': 2}}
    >>> apply_defaults({'a': {'b': False}}, {'a': { 'b': True}})
    {'a': {'b': False}}
    """
    for key, default_value in defaults.items():
        if key in target:
            if isinstance(target[key], dict) and isinstance(default_value, dict):
                apply_defaults(target = target[key], defaults = default_value, ignore = ignore, override = override)
            elif override(key, target[key]):  # force overwrite if override says so
                target[key] = copy.deepcopy(default_value)
        elif not ignore(key, default_value):  # only set if not explicitly ignored
            target[key] = copy.deepcopy(default_value)
    return target
def defaultdict_to_dict(d:defaultdict[K, V]) -> dict[K, V]:
    """Recursively convert a defaultdict to a plain dict."""
    result:dict[K, V] = {}
    for key, value in d.items():
        if isinstance(value, defaultdict):
            result[key] = defaultdict_to_dict(value)  # type: ignore[assignment]
        else:
            result[key] = value
    return result
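Such a conversion is typically needed after building a nested structure with a self-referential `defaultdict`, which would otherwise keep auto-creating keys on access. A minimal standalone sketch of the same recursion, written as a dict comprehension:

```python
from collections import defaultdict

def tree() -> defaultdict:
    """A defaultdict whose missing values are themselves such defaultdicts."""
    return defaultdict(tree)

def to_plain_dict(d: defaultdict) -> dict:
    """Recursively convert nested defaultdicts into plain dicts."""
    return {k: to_plain_dict(v) if isinstance(v, defaultdict) else v for k, v in d.items()}

t = tree()
t["a"]["b"] = 1  # intermediate level "a" is auto-created
print(to_plain_dict(t))
```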
def load_dict(filepath:str, content_label:str = "") -> dict[str, Any]:
    """
    :raises FileNotFoundError
    """
    data = load_dict_if_exists(filepath, content_label)
    if data is None:
        raise FileNotFoundError(filepath)
    return data


def load_dict_if_exists(filepath:str, content_label:str = "") -> dict[str, Any] | None:
    abs_filepath = files.abspath(filepath)
    LOG.debug("Loading %s[%s]...", content_label + " from " if content_label else "", abs_filepath)
    __, file_ext = os.path.splitext(filepath)
    if file_ext not in {".json", ".yaml", ".yml"}:
        raise ValueError(_('Unsupported file type. The filename "%s" must end with *.json, *.yaml, or *.yml') % filepath)
    if not os.path.exists(filepath):
        return None
    with open(filepath, encoding = "utf-8") as file:
        return json.load(file) if filepath.endswith(".json") else YAML().load(file)  # type: ignore[no-any-return] # mypy


def load_dict_from_module(module:ModuleType, filename:str, content_label:str = "") -> dict[str, Any]:
    """
    :raises FileNotFoundError
    """
    LOG.debug("Loading %s[%s.%s]...", content_label + " from " if content_label else "", module.__name__, filename)
    __, file_ext = os.path.splitext(filename)
    if file_ext not in {".json", ".yaml", ".yml"}:
        raise ValueError(f'Unsupported file type. The filename "{filename}" must end with *.json, *.yaml, or *.yml')
    content = get_resource_as_string(module, filename)  # pylint: disable=deprecated-method
    return json.loads(content) if filename.endswith(".json") else YAML().load(content)  # type: ignore[no-any-return] # mypy
def _configure_yaml() -> YAML:
    """
    Configure and return a YAML instance with standard settings.

    Returns:
        Configured YAML instance ready for dumping
    """
    yaml = YAML()
    yaml.indent(mapping = 2, sequence = 4, offset = 2)
    yaml.representer.add_representer(
        str,  # use YAML | block style for multi-line strings
        lambda dumper, data: dumper.represent_scalar("tag:yaml.org,2002:str", data, style = "|" if "\n" in data else None),
    )
    yaml.allow_duplicate_keys = False
    yaml.explicit_start = False
    return yaml


def save_dict(filepath:str | Path, content:dict[str, Any], *, header:str | None = None) -> None:
    # Normalize the filepath to NFC for cross-platform consistency (issue #728).
    # This ensures file paths match NFC-normalized directory names from sanitize_folder_name()
    # and also handles edge cases where paths don't originate from sanitize_folder_name().
    filepath = Path(unicodedata.normalize("NFC", str(filepath)))

    # Create the parent directory if needed
    filepath.parent.mkdir(parents = True, exist_ok = True)

    LOG.info("Saving [%s]...", filepath)
    with open(filepath, "w", encoding = "utf-8") as file:
        if header:
            file.write(header)
            file.write("\n")
        if filepath.suffix == ".json":
            file.write(json.dumps(content, indent = 2, ensure_ascii = False))
        else:
            yaml = _configure_yaml()
            yaml.dump(content, file)
def safe_get(a_map:dict[Any, Any], *keys:str) -> Any:
    """
    >>> safe_get({"foo": {}}, "foo", "bar") is None
    True
    >>> safe_get({"foo": {"bar": "some_value"}}, "foo", "bar")
    'some_value'
    """
    if a_map:
        try:
            for key in keys:
                a_map = a_map[key]
        except (KeyError, TypeError):
            return None
    return a_map
def _should_exclude(field_name:str, exclude:set[str] | dict[str, Any] | None) -> bool:
    """Check if a field should be excluded based on the given exclude rules."""
    if exclude is None:
        return False
    if isinstance(exclude, set):
        return field_name in exclude
    if isinstance(exclude, dict):
        # If the value is None, the field is excluded entirely.
        # If the value is a dict/set, it holds nested exclusion rules.
        if field_name in exclude:
            return exclude[field_name] is None
    return False


def _get_nested_exclude(field_name:str, exclude:set[str] | dict[str, Any] | None) -> set[str] | dict[str, Any] | None:
    """Get the nested exclude rules for a field."""
    if exclude is None:
        return None
    if isinstance(exclude, dict) and field_name in exclude:
        nested = exclude[field_name]
        # If nested is None, the field is excluded entirely - there are no nested rules to pass down.
        # If nested is a set or dict, pass it down as nested exclusion rules.
        if nested is None:
            return None
        return cast(set[str] | dict[str, Any], nested)
    return None
def model_to_commented_yaml(
    model_instance:Any,
    *,
    indent_level:int = 0,
    exclude:set[str] | dict[str, Any] | None = None,
) -> Any:
    """
    Convert a Pydantic model instance to a structure with YAML comments.

    This function recursively processes a Pydantic model and creates a
    CommentedMap/CommentedSeq structure with comments based on field descriptions.
    The comments are added as block comments above each field.

    Args:
        model_instance: A Pydantic model instance to convert
        indent_level: Current indentation level (for recursive calls)
        exclude: Optional set of field names to exclude, or dict for nested exclusion

    Returns:
        A CommentedMap, CommentedSeq, or primitive value suitable for YAML output

    Example:
        >>> from pydantic import BaseModel, Field
        >>> class Config(BaseModel):
        ...     name: str = Field(default="test", description="The name")
        >>> config = Config()
        >>> result = model_to_commented_yaml(config)
    """
    # Delayed imports to avoid a circular dependency
    from pydantic import BaseModel  # noqa: PLC0415
    from ruamel.yaml.comments import CommentedMap, CommentedSeq  # noqa: PLC0415

    # Handle primitive types
    if model_instance is None or isinstance(model_instance, (str, int, float, bool)):
        return model_instance

    # Handle lists/sequences
    if isinstance(model_instance, (list, tuple)):
        seq = CommentedSeq()
        for item in model_instance:
            seq.append(model_to_commented_yaml(item, indent_level = indent_level + 1, exclude = exclude))
        return seq

    # Handle dictionaries (not from Pydantic models)
    if isinstance(model_instance, dict) and not isinstance(model_instance, BaseModel):
        cmap = CommentedMap()
        for key, value in model_instance.items():
            if _should_exclude(key, exclude):
                continue
            cmap[key] = model_to_commented_yaml(value, indent_level = indent_level + 1, exclude = exclude)
        return cmap

    # Handle Pydantic models
    if isinstance(model_instance, BaseModel):
        cmap = CommentedMap()
        model_class = model_instance.__class__
        field_count = 0

        # Get field information from the model class
        for field_name, field_info in model_class.model_fields.items():
            # Skip excluded fields
            if _should_exclude(field_name, exclude):
                continue

            # Get the value from the instance, handling unset required fields
            try:
                value = getattr(model_instance, field_name)
            except AttributeError:
                # Field is not set (e.g., a required field with no default)
                continue

            # Add visual separators
            if indent_level == 0 and field_count > 0:
                # Major section: blank line + prominent separator of 80 # characters
                cmap.yaml_set_comment_before_after_key(field_name, before = "\n" + "#" * 80, indent = 0)
            elif indent_level > 0:
                # Nested fields: always add a blank line separator (both between siblings and before the first child)
                cmap.yaml_set_comment_before_after_key(field_name, before = "", indent = 0)

            # Get the nested exclude rules for this field
            nested_exclude = _get_nested_exclude(field_name, exclude)

            # Process the value recursively
            processed_value = model_to_commented_yaml(value, indent_level = indent_level + 1, exclude = nested_exclude)
            cmap[field_name] = processed_value
            field_count += 1

            # Build the comment from the description and examples
            comment_parts = []

            # Add the description if available
            description = field_info.description
            if description:
                comment_parts.append(description)

            # Add examples if available
            examples = field_info.examples
            if examples:
                # Check if this is a list field by inspecting the type annotation first (handles empty lists),
                # then fall back to a runtime value type check
                is_list_field = get_origin(field_info.annotation) is list or isinstance(value, list)
                if is_list_field:
                    # For list fields, show YAML syntax with the field name for clarity
                    examples_lines = [
                        "Example usage:",
                        f" {field_name}:",
                        *[f" - {ex}" for ex in examples]
                    ]
                    comment_parts.append("\n".join(examples_lines))
                elif len(examples) == 1:
                    # Single example for a scalar field: use the singular form without a list marker
                    comment_parts.append(f"Example: {examples[0]}")
                else:
                    # Multiple examples for a scalar field: show as alternatives (not list items).
                    # Use bullets (•) instead of hyphens to distinguish them from YAML list syntax.
                    examples_lines = ["Examples (choose one):", *[f"• {ex}" for ex in examples]]
                    comment_parts.append("\n".join(examples_lines))

            # Set the comment above the key
            if comment_parts:
                full_comment = "\n".join(comment_parts)
                cmap.yaml_set_comment_before_after_key(field_name, before = full_comment, indent = indent_level * 2)

        return cmap

    # Fallback: return as-is
    return model_instance
def save_commented_model(
    filepath:str | Path,
    model_instance:Any,
    *,
    header:str | None = None,
    exclude:set[str] | dict[str, Any] | None = None,
) -> None:
    """
    Save a Pydantic model to a YAML file with field descriptions as comments.

    This function converts a Pydantic model to a commented YAML structure
    where each field has its description (and optionally examples) as a
    block comment above the key.

    Args:
        filepath: Path to the output YAML file
        model_instance: Pydantic model instance to save
        header: Optional header string to write at the top of the file
        exclude: Optional set of field names to exclude, or dict for nested exclusion

    Example:
        >>> from kleinanzeigen_bot.model.config_model import Config
        >>> from pathlib import Path
        >>> import tempfile
        >>> config = Config()
        >>> with tempfile.TemporaryDirectory() as tmpdir:
        ...     save_commented_model(Path(tmpdir) / "config.yaml", config, header="# Config file")
    """
    filepath = Path(unicodedata.normalize("NFC", str(filepath)))
    filepath.parent.mkdir(parents = True, exist_ok = True)
    LOG.info("Saving [%s]...", filepath)

    # Convert to a commented structure directly from the model (preserves metadata)
    commented_data = model_to_commented_yaml(model_instance, exclude = exclude)

    with open(filepath, "w", encoding = "utf-8") as file:
        if header:
            file.write(header)
            file.write("\n")
        yaml = _configure_yaml()
        yaml.dump(commented_data, file)


@@ -0,0 +1,36 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import sys, traceback # isort: skip
from types import FrameType, TracebackType
from typing import Final
from pydantic import ValidationError
from . import loggers
from .pydantics import format_validation_error
LOG:Final[loggers.Logger] = loggers.get_logger(__name__)
def on_exception(ex_type:type[BaseException] | None, ex:BaseException | None, ex_traceback:TracebackType | None) -> None:
    if ex_type is None or ex is None:
        LOG.error("Unknown exception occurred (missing exception info): ex_type=%s, ex=%s", ex_type, ex)
        return
    if issubclass(ex_type, KeyboardInterrupt):
        sys.__excepthook__(ex_type, ex, ex_traceback)
    elif loggers.is_debug(LOG) or isinstance(ex, (AttributeError, ImportError, NameError, TypeError)):
        LOG.error("".join(traceback.format_exception(ex_type, ex, ex_traceback)))
    elif isinstance(ex, ValidationError):
        LOG.error(format_validation_error(ex))
    elif isinstance(ex, AssertionError):
        LOG.error(ex)
    else:
        LOG.error("%s: %s", ex_type.__name__, ex)
    sys.exit(1)


def on_sigint(_sig:int, _frame:FrameType | None) -> None:
    LOG.warning("Aborted on user request.")
    sys.exit(0)


@@ -0,0 +1,16 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from datetime import timedelta
class KleinanzeigenBotError(RuntimeError):
    """Base class for all custom bot-related exceptions."""


class CaptchaEncountered(KleinanzeigenBotError):
    """Raised when a Captcha was detected and auto-restart is enabled."""

    def __init__(self, restart_delay:timedelta) -> None:
        super().__init__()
        self.restart_delay = restart_delay


@@ -0,0 +1,47 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import asyncio, os # isort: skip
from pathlib import Path
def abspath(relative_path:str, relative_to:str | None = None) -> str:
    """
    Return a normalized absolute path based on *relative_to*.

    If 'relative_path' is already absolute, it is normalized and returned.
    Otherwise, the function joins 'relative_path' with 'relative_to' (or the current working
    directory if not provided), normalizes the result, and returns the absolute path.
    """
    if not relative_to:
        return os.path.abspath(relative_path)
    if os.path.isabs(relative_path):
        return os.path.normpath(relative_path)
    base = os.path.abspath(relative_to)
    if os.path.isfile(base):
        base = os.path.dirname(base)
    return os.path.normpath(os.path.join(base, relative_path))


async def exists(path:str | Path) -> bool:
    """
    Asynchronously check if a file or directory exists.

    :param path: Path to check
    :return: True if the path exists, False otherwise
    """
    return await asyncio.get_running_loop().run_in_executor(None, Path(path).exists)


async def is_dir(path:str | Path) -> bool:
    """
    Asynchronously check if a path is a directory.

    :param path: Path to check
    :return: True if the path is a directory, False otherwise
    """
    return await asyncio.get_running_loop().run_in_executor(None, Path(path).is_dir)


@@ -0,0 +1,199 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import ctypes, gettext, inspect, locale, logging, os, sys # isort: skip
from collections.abc import Sized
from typing import Any, Final, NamedTuple
from kleinanzeigen_bot import resources
from . import dicts, reflect
__all__ = [
    "Locale",
    "get_current_locale",
    "pluralize",
    "set_current_locale",
    "translate"
]
LOG:Final[logging.Logger] = logging.getLogger(__name__)
class Locale(NamedTuple):
    language:str  # Language code (e.g., "en", "de")
    region:str | None = None  # Region code (e.g., "US", "DE")
    encoding:str = "UTF-8"  # Encoding format (e.g., "UTF-8")

    def __str__(self) -> str:
        """
        >>> str(Locale("en", "US", "UTF-8"))
        'en_US.UTF-8'
        >>> str(Locale("en", "US"))
        'en_US.UTF-8'
        >>> str(Locale("en"))
        'en.UTF-8'
        >>> str(Locale("de", None, "UTF-8"))
        'de.UTF-8'
        """
        region_part = f"_{self.region}" if self.region else ""
        encoding_part = f".{self.encoding}" if self.encoding else ""
        return f"{self.language}{region_part}{encoding_part}"

    @staticmethod
    def of(locale_string:str) -> "Locale":
        """
        >>> Locale.of("en_US.UTF-8")
        Locale(language='en', region='US', encoding='UTF-8')
        >>> Locale.of("de.UTF-8")
        Locale(language='de', region=None, encoding='UTF-8')
        >>> Locale.of("de_DE")
        Locale(language='de', region='DE', encoding='UTF-8')
        >>> Locale.of("en")
        Locale(language='en', region=None, encoding='UTF-8')
        >>> Locale.of("en.UTF-8")
        Locale(language='en', region=None, encoding='UTF-8')
        """
        parts = locale_string.split(".")
        language_and_region = parts[0]
        encoding = parts[1].upper() if len(parts) > 1 else "UTF-8"
        parts = language_and_region.split("_")
        language = parts[0]
        region = parts[1].upper() if len(parts) > 1 else None
        return Locale(language = language, region = region, encoding = encoding)
def _detect_locale() -> Locale:
"""
Detects the system language, returning a tuple of (language, region, encoding).
- On macOS/Linux, it uses the LANG environment variable.
- On Windows, it uses the Windows API via ctypes to get the default UI language.
Returns:
(language, region, encoding): e.g. ("en", "US", "UTF-8")
"""
lang = os.environ.get("LANG", None)
if not lang and os.name == "nt": # Windows
try:
lang = locale.windows_locale.get(ctypes.windll.kernel32.GetUserDefaultUILanguage(), "en_US") # type: ignore[attr-defined,unused-ignore] # mypy
except Exception:
LOG.warning("Error detecting language on Windows", exc_info = True)
return Locale.of(lang) if lang else Locale("en", "US", "UTF-8")
_CURRENT_LOCALE:Locale = _detect_locale()
_TRANSLATIONS:dict[str, Any] | None = None
def translate(text:object, caller:inspect.FrameInfo | None) -> str:
text = str(text)
if not caller:
return text
global _TRANSLATIONS # noqa: PLW0603 Using the global statement to update `...` is discouraged
if _TRANSLATIONS is None:
try:
_TRANSLATIONS = dicts.load_dict_from_module(resources, f"translations.{_CURRENT_LOCALE.language}.yaml")
except FileNotFoundError:
_TRANSLATIONS = {}
if not _TRANSLATIONS:
return text
module_name = caller.frame.f_globals.get("__name__") # pylint: disable=redefined-outer-name
file_basename = os.path.splitext(os.path.basename(caller.filename))[0]
if module_name and module_name.endswith(f".{file_basename}"):
module_name = module_name[:-(len(file_basename) + 1)]
if module_name:
module_name = module_name.replace(".", "/")
file_key = f"{file_basename}.py" if module_name == file_basename else f"{module_name}/{file_basename}.py"
translation = dicts.safe_get(_TRANSLATIONS,
file_key,
caller.function,
text
)
return translation if translation else text
# replace gettext.gettext with custom _translate function
_original_gettext = gettext.gettext
gettext.gettext = lambda message: translate(_original_gettext(message), reflect.get_caller())
for module_name, module in sys.modules.copy().items():
if module is None or module_name in sys.builtin_module_names:
continue
if hasattr(module, "_") and module._ is _original_gettext:
module._ = gettext.gettext # type: ignore[attr-defined]
if hasattr(module, "gettext") and module.gettext is _original_gettext:
module.gettext = gettext.gettext # type: ignore[attr-defined]
def get_current_locale() -> Locale:
return _CURRENT_LOCALE
def set_current_locale(new_locale:Locale) -> None:
global _CURRENT_LOCALE, _TRANSLATIONS # noqa: PLW0603 Using the global statement to update `...` is discouraged
if new_locale.language != _CURRENT_LOCALE.language:
_TRANSLATIONS = None
_CURRENT_LOCALE = new_locale
def pluralize(noun:str, count:int | Sized, *, prefix_with_count:bool = True) -> str:
"""
>>> set_current_locale(Locale("en")) # Setup for doctests
>>> pluralize("field", 1)
'1 field'
>>> pluralize("field", 2)
'2 fields'
>>> pluralize("field", 2, prefix_with_count = False)
'fields'
"""
noun = translate(noun, reflect.get_caller())
if isinstance(count, Sized):
count = len(count)
prefix = f"{count} " if prefix_with_count else ""
if count == 1:
return f"{prefix}{noun}"
# German
if _CURRENT_LOCALE.language == "de":
# Special cases
irregular_plurals = {
"Attribute": "Attribute",
"Bild": "Bilder",
"Feld": "Felder",
}
if noun in irregular_plurals:
return f"{prefix}{irregular_plurals[noun]}"
for singular_suffix, plural_suffix in irregular_plurals.items():
if noun.lower().endswith(singular_suffix.lower()):
pluralized = noun[:-len(singular_suffix)] + plural_suffix.lower()
return f"{prefix}{pluralized}"
# Very simplified German rules
if noun.endswith("ei"):
return f"{prefix}{noun}en" # Datei -> Dateien
if noun.endswith("e"):
return f"{prefix}{noun}n" # Blume -> Blumen
if noun.endswith(("el", "er", "en")):
return f"{prefix}{noun}" # Keller -> Keller
if noun[-1] in "aeiou":
return f"{prefix}{noun}s" # Auto -> Autos
return f"{prefix}{noun}e" # Hund -> Hunde
# English
if len(noun) < 2: # noqa: PLR2004 Magic value used in comparison
return f"{prefix}{noun}s"
if noun.endswith(("s", "sh", "ch", "x", "z")):
return f"{prefix}{noun}es"
if noun.endswith("y") and noun[-2].lower() not in "aeiou":
return f"{prefix}{noun[:-1]}ies"
return f"{prefix}{noun}s"

View File

@@ -0,0 +1,82 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import ctypes, sys # isort: skip
from kleinanzeigen_bot.utils.i18n import get_current_locale
from kleinanzeigen_bot.utils.misc import is_frozen
def _is_launched_from_windows_explorer() -> bool:
"""
Returns True if this process is the *only* one attached to the console,
i.e. the user started us by double-clicking in Windows Explorer.
"""
if not is_frozen():
return False # Only relevant when compiled exe
if sys.platform != "win32":
return False # Only relevant on Windows
# Allocate small buffer for at most 3 PIDs
DWORD = ctypes.c_uint
pids = (DWORD * 3)()
n = int(ctypes.windll.kernel32.GetConsoleProcessList(pids, 3))
return n <= 2 # our PID (+ maybe conhost.exe) -> console dies with us # noqa: PLR2004 # Magic value used in comparison
def ensure_not_launched_from_windows_explorer() -> None:
"""
Terminates the application if the EXE was started by double-clicking in Windows Explorer
instead of from a terminal (cmd.exe / PowerShell).
"""
if not _is_launched_from_windows_explorer():
return
if get_current_locale().language == "de":
banner = (
"\n"
" ┌─────────────────────────────────────────────────────────────┐\n"
" │ Kleinanzeigen-Bot ist ein *Kommandozeilentool*. │\n"
" │ │\n"
" │ Du hast das Programm scheinbar per Doppelklick gestartet. │\n"
" │ │\n"
" │ -> Bitte starte es stattdessen in einem Terminal: │\n"
" │ │\n"
" │ kleinanzeigen-bot.exe [OPTIONEN] │\n"
" │ │\n"
" │ Schneller Weg, ein Terminal zu öffnen: │\n"
" │ 1. Drücke Win + R, gib cmd ein und drücke Enter. │\n"
" │ 2. Wechsle per `cd` in das Verzeichnis mit dieser Datei. │\n"
" │ 3. Gib den obigen Befehl ein und drücke Enter. │\n"
" │ │\n"
" │─────────────────────────────────────────────────────────────│\n"
" │ Drücke <Enter>, um dieses Fenster zu schließen. │\n"
" └─────────────────────────────────────────────────────────────┘\n"
)
else:
banner = (
"\n"
" ┌─────────────────────────────────────────────────────────────┐\n"
" │ Kleinanzeigen-Bot is a *command-line* tool. │\n"
" │ │\n"
" │ It looks like you launched it by double-clicking the EXE. │\n"
" │ │\n"
" │ -> Please run it from a terminal instead: │\n"
" │ │\n"
" │ kleinanzeigen-bot.exe [OPTIONS] │\n"
" │ │\n"
" │ Quick way to open a terminal: │\n"
" │ 1. Press Win + R , type cmd and press Enter. │\n"
" │ 2. cd to the folder that contains this file. │\n"
" │ 3. Type the command above and press Enter. │\n"
" │ │\n"
" │─────────────────────────────────────────────────────────────│\n"
" │ Press <Enter> to close this window. │\n"
" └─────────────────────────────────────────────────────────────┘\n"
)
print(banner, file = sys.stderr, flush = True)
input() # keep window open
sys.exit(1)
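
The heuristic above boils down to a PID count: `GetConsoleProcessList` reports how many processes share the console, and `<= 2` means the console dies with us. A sketch that factors the decision into a pure, testable function (the API call itself only exists on Windows, hence the guard; the helper names are illustrative):

```python
import sys


def console_dies_with_us(attached_pids: int) -> bool:
    # Double-clicked from Explorer: only our own PID (plus maybe conhost.exe)
    # is attached to the console, so the window closes when we exit.
    return attached_pids <= 2


def count_console_pids() -> int:
    if sys.platform != "win32":
        return 0  # no Windows console to inspect on other platforms
    import ctypes
    pids = (ctypes.c_uint * 3)()  # buffer for at most 3 PIDs
    return int(ctypes.windll.kernel32.GetConsoleProcessList(pids, 3))


print(console_dies_with_us(2))  # True  -> likely launched via double-click
print(console_dies_with_us(3))  # False -> launched from a shell
```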

View File

@@ -0,0 +1,208 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import copy, logging, os, re, sys # isort: skip
from gettext import gettext as _
from logging.handlers import RotatingFileHandler
from typing import Any, Final # @UnusedImport
import colorama
__all__ = [
"Logger",
"LogFileHandle",
"DEBUG",
"INFO",
"WARNING",
"ERROR",
"CRITICAL",
"configure_console_logging",
"configure_file_logging",
"flush_all_handlers",
"get_logger",
"is_debug"
]
CRITICAL = logging.CRITICAL
DEBUG = logging.DEBUG
ERROR = logging.ERROR
INFO = logging.INFO
WARNING = logging.WARNING
Logger = logging.Logger
LOG_ROOT:Final[Logger] = logging.getLogger()
class _MaxLevelFilter(logging.Filter):
def __init__(self, level:int) -> None:
super().__init__()
self.level = level
def filter(self, record:logging.LogRecord) -> bool:
return record.levelno <= self.level
def configure_console_logging() -> None:
# if a StreamHandler already exists, do not append it again
if any(isinstance(h, logging.StreamHandler) for h in LOG_ROOT.handlers):
return
class CustomFormatter(logging.Formatter):
LEVEL_COLORS = {
DEBUG: colorama.Fore.BLACK + colorama.Style.BRIGHT,
INFO: colorama.Fore.BLACK + colorama.Style.BRIGHT,
WARNING: colorama.Fore.YELLOW,
ERROR: colorama.Fore.RED,
CRITICAL: colorama.Fore.RED,
}
MESSAGE_COLORS = {
DEBUG: colorama.Fore.BLACK + colorama.Style.BRIGHT,
INFO: colorama.Fore.RESET,
WARNING: colorama.Fore.YELLOW,
ERROR: colorama.Fore.RED,
CRITICAL: colorama.Fore.RED + colorama.Style.BRIGHT,
}
VALUE_COLORS = {
DEBUG: colorama.Fore.BLACK + colorama.Style.BRIGHT,
INFO: colorama.Fore.MAGENTA,
WARNING: colorama.Fore.MAGENTA,
ERROR: colorama.Fore.MAGENTA,
CRITICAL: colorama.Fore.MAGENTA,
}
def _relativize_paths_under_cwd(self, record:logging.LogRecord) -> None:
"""
Mutate record.args in-place, converting any absolute-path strings
under the current working directory into relative paths.
"""
if not record.args:
return
cwd = os.getcwd()
def _rel_if_subpath(val:Any) -> Any:
if isinstance(val, str) and os.path.isabs(val):
# don't relativize log-file paths
if val.endswith(".log"):
return val
try:
if os.path.commonpath([cwd, val]) == cwd:
return os.path.relpath(val, cwd)
except ValueError:
return val
return val
if isinstance(record.args, tuple):
record.args = tuple(_rel_if_subpath(a) for a in record.args)
elif isinstance(record.args, dict):
record.args = {k: _rel_if_subpath(v) for k, v in record.args.items()}
def format(self, record:logging.LogRecord) -> str:
# Deep copy fails if record.args contains objects with
# __init__(...) parameters (e.g., CaptchaEncountered).
# A shallow copy is sufficient to preserve the original.
record = copy.copy(record)
self._relativize_paths_under_cwd(record)
level_color = self.LEVEL_COLORS.get(record.levelno, "")
msg_color = self.MESSAGE_COLORS.get(record.levelno, "")
value_color = self.VALUE_COLORS.get(record.levelno, "")
# translate and colorize log level name
levelname = _(record.levelname) if record.levelno > DEBUG else record.levelname
record.levelname = f"{level_color}[{levelname}]{colorama.Style.RESET_ALL}"
# highlight message values enclosed by [...], "...", and '...'
record.msg = re.sub(
r"\[([^\]]+)\]|\"([^\"]+)\"|\'([^\']+)\'",
lambda match: f"[{value_color}{match.group(1) or match.group(2) or match.group(3)}{colorama.Fore.RESET}{msg_color}]",
str(record.msg),
)
# colorize message
record.msg = f"{msg_color}{record.msg}{colorama.Style.RESET_ALL}"
return super().format(record)
formatter = CustomFormatter("%(levelname)s %(message)s")
stdout_log = logging.StreamHandler(sys.stdout)
stdout_log.setLevel(DEBUG)
stdout_log.addFilter(_MaxLevelFilter(INFO))
stdout_log.setFormatter(formatter)
LOG_ROOT.addHandler(stdout_log)
stderr_log = logging.StreamHandler(sys.stderr)
stderr_log.setLevel(WARNING)
stderr_log.setFormatter(formatter)
LOG_ROOT.addHandler(stderr_log)
class LogFileHandle:
"""Encapsulates a log file handler with close and status methods."""
def __init__(self, file_path:str, handler:RotatingFileHandler, logger:Logger) -> None:
self.file_path = file_path
self._handler:RotatingFileHandler | None = handler
self._logger = logger
def close(self) -> None:
"""Flushes, removes, and closes the log handler."""
if self._handler:
self._handler.flush()
self._logger.removeHandler(self._handler)
self._handler.close()
self._handler = None
def is_closed(self) -> bool:
"""Returns whether the log handler has been closed."""
return not self._handler
def configure_file_logging(log_file_path:str) -> LogFileHandle:
"""
Sets up a file logger and returns a callable to flush, remove, and close it.
@param log_file_path: Path to the log file.
@return: Callable[[], None]: A function that cleans up the log handler.
"""
fh = RotatingFileHandler(
filename = log_file_path,
maxBytes = 10 * 1024 * 1024, # 10 MB
backupCount = 10,
encoding = "utf-8"
)
fh.setLevel(DEBUG)
fh.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
LOG_ROOT.addHandler(fh)
return LogFileHandle(log_file_path, fh, LOG_ROOT)
def flush_all_handlers() -> None:
for handler in LOG_ROOT.handlers:
handler.flush()
def get_logger(name:str | None = None) -> Logger:
"""
Returns a localized logger
"""
class TranslatingLogger(Logger):
def _log(self, level:int, msg:object, *args:Any, **kwargs:Any) -> None:
if level != DEBUG: # debug messages should not be translated
from . import i18n, reflect # noqa: PLC0415 # avoid cyclic import at module load
msg = i18n.translate(msg, reflect.get_caller(2))
super()._log(level, msg, *args, **kwargs)
logging.setLoggerClass(TranslatingLogger)
return logging.getLogger(name)
def is_debug(logger:Logger) -> bool:
return logger.isEnabledFor(DEBUG)
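
The handler setup above splits output by severity: a max-level filter keeps DEBUG/INFO on one stream while WARNING and above go to the other. A self-contained sketch of that routing, using in-memory streams in place of stdout/stderr:

```python
import io
import logging


class MaxLevelFilter(logging.Filter):
    """Let through only records at or below a given level."""

    def __init__(self, level: int) -> None:
        super().__init__()
        self.level = level

    def filter(self, record: logging.LogRecord) -> bool:
        return record.levelno <= self.level


out, err = io.StringIO(), io.StringIO()
log = logging.getLogger("split-demo")
log.setLevel(logging.DEBUG)
log.propagate = False  # keep the demo self-contained

low = logging.StreamHandler(out)   # stands in for sys.stdout
low.setLevel(logging.DEBUG)
low.addFilter(MaxLevelFilter(logging.INFO))
high = logging.StreamHandler(err)  # stands in for sys.stderr
high.setLevel(logging.WARNING)
log.addHandler(low)
log.addHandler(high)

log.info("all good")
log.error("boom")
print(out.getvalue())  # contains "all good" only
print(err.getvalue())  # contains "boom" only
```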

View File

@@ -0,0 +1,340 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import asyncio, decimal, re, sys, time # isort: skip
import unicodedata
from collections.abc import Callable, Mapping
from datetime import datetime, timedelta, timezone
from gettext import gettext as _
from typing import Any, TypeVar
from sanitize_filename import sanitize
from . import i18n
# https://mypy.readthedocs.io/en/stable/generics.html#generic-functions
T = TypeVar("T")
def coerce_page_number(value:Any) -> int | None:
"""Safely coerce a value to int or return None if conversion fails.
Whole-number floats are accepted; non-integer floats are rejected.
Args:
value: Value to coerce to int (can be int, str, float, or any type)
Returns:
int if value can be safely coerced, None otherwise
Examples:
>>> coerce_page_number(1)
1
>>> coerce_page_number("2")
2
>>> coerce_page_number(3.0)
3
>>> coerce_page_number(3.5) is None
True
>>> coerce_page_number(True) is None # Not 1!
True
>>> coerce_page_number(None) is None
True
>>> coerce_page_number("invalid") is None
True
>>> coerce_page_number([1, 2, 3]) is None
True
"""
if value is None:
return None
if isinstance(value, bool):
return None
if isinstance(value, float):
if value.is_integer():
return int(value)
return None
try:
return int(value)
except (TypeError, ValueError):
return None
def ensure(
condition:Any | bool | Callable[[], bool], # noqa: FBT001 Boolean-typed positional argument in function definition
error_message:str,
timeout:float = 5,
poll_frequency:float = 0.5,
) -> None:
"""
Ensure a condition is true, retrying until timeout.
:param condition: The condition to check (bool, value, or callable returning bool)
:param error_message: The error message to raise if the condition is not met
:param timeout: maximum time to wait in seconds, default is 5 seconds
:param poll_frequency: sleep interval between calls in seconds, default is 0.5 seconds
:raises AssertionError: if the condition is not met within the timeout
"""
if not isinstance(condition, Callable): # type: ignore[arg-type] # https://github.com/python/mypy/issues/6864
if condition:
return
raise AssertionError(_(error_message))
if timeout < 0:
raise AssertionError("[timeout] must be >= 0")
if poll_frequency < 0:
raise AssertionError("[poll_frequency] must be >= 0")
start_at = time.time()
while not condition(): # type: ignore[operator]
elapsed = time.time() - start_at
if elapsed >= timeout:
raise AssertionError(_(error_message))
time.sleep(poll_frequency)
def get_attr(obj:Mapping[str, Any] | Any, key:str, default:Any | None = None) -> Any:
"""
Unified getter for attribute or key access on objects or dicts.
Supports dot-separated paths for nested access.
Args:
obj: The object or dictionary to get the value from.
key: The attribute or key name, possibly nested via dot notation (e.g. 'contact.email').
default: A default value to return if the key/attribute path is not found.
Returns:
The found value or the default.
Examples:
>>> class User:
... def __init__(self, contact): self.contact = contact
# [object] normal nested access:
>>> get_attr(User({'email': 'user@example.com'}), 'contact.email')
'user@example.com'
# [object] missing key at depth:
>>> get_attr(User({'email': 'user@example.com'}), 'contact.foo') is None
True
# [object] explicit None treated as missing:
>>> get_attr(User({'email': None}), 'contact.email', default='n/a')
'n/a'
# [object] parent in path is None:
>>> get_attr(User(None), 'contact.email', default='n/a')
'n/a'
# [dict] normal nested access:
>>> get_attr({'contact': {'email': 'data@example.com'}}, 'contact.email')
'data@example.com'
# [dict] missing key at depth:
>>> get_attr({'contact': {'email': 'user@example.com'}}, 'contact.foo') is None
True
# [dict] explicit None treated as missing:
>>> get_attr({'contact': {'email': None}}, 'contact.email', default='n/a')
'n/a'
# [dict] parent in path is None:
>>> get_attr({}, 'contact.email', default='none')
'none'
"""
for part in key.split("."):
obj = obj.get(part) if isinstance(obj, Mapping) else getattr(obj, part, None)
if obj is None:
return default
return obj
def now() -> datetime:
return datetime.now(timezone.utc)
def is_frozen() -> bool:
"""
>>> is_frozen()
False
"""
return getattr(sys, "frozen", False)
async def ainput(prompt:str) -> str:
return await asyncio.to_thread(input, f"{prompt} ")
def parse_decimal(number:float | int | str) -> decimal.Decimal:
"""
>>> parse_decimal(5)
Decimal('5')
>>> parse_decimal(5.5)
Decimal('5.5')
>>> parse_decimal("5.5")
Decimal('5.5')
>>> parse_decimal("5,5")
Decimal('5.5')
>>> parse_decimal("1.005,5")
Decimal('1005.5')
>>> parse_decimal("1,005.5")
Decimal('1005.5')
"""
try:
return decimal.Decimal(number)
except decimal.InvalidOperation as ex:
parts = re.split("[.,]", str(number))
try:
return decimal.Decimal("".join(parts[:-1]) + "." + parts[-1])
except decimal.InvalidOperation:
raise decimal.DecimalException(f"Invalid number format: {number}") from ex
def parse_datetime(date:datetime | str | None, *, add_timezone_if_missing:bool = True, use_local_timezone:bool = True) -> datetime | None:
"""
Parses a datetime object or ISO-formatted string.
Args:
date: The input datetime object or ISO string.
add_timezone_if_missing: If True, add timezone info if missing.
use_local_timezone: If True, use local timezone; otherwise UTC if adding timezone.
Returns:
A timezone-aware or naive datetime object, depending on parameters.
>>> parse_datetime(datetime(2020, 1, 1, 0, 0), add_timezone_if_missing = False)
datetime.datetime(2020, 1, 1, 0, 0)
>>> parse_datetime("2020-01-01T00:00:00", add_timezone_if_missing = False)
datetime.datetime(2020, 1, 1, 0, 0)
>>> parse_datetime(None)
"""
if date is None:
return None
dt = date if isinstance(date, datetime) else datetime.fromisoformat(date)
if dt.tzinfo is None and add_timezone_if_missing:
dt = dt.astimezone() if use_local_timezone else dt.replace(tzinfo = timezone.utc)
return dt
def parse_duration(text:str) -> timedelta:
"""
Parses a human-readable duration string into a datetime.timedelta.
Supported units:
- d: days
- h: hours
- m: minutes
- s: seconds
Examples:
>>> parse_duration("1h 30m")
datetime.timedelta(seconds=5400)
>>> parse_duration("2d 4h 15m 10s")
datetime.timedelta(days=2, seconds=15310)
>>> parse_duration("45m")
datetime.timedelta(seconds=2700)
>>> parse_duration("3d")
datetime.timedelta(days=3)
>>> parse_duration("5h 5h")
datetime.timedelta(seconds=36000)
>>> parse_duration("invalid input")
datetime.timedelta(0)
"""
pattern = re.compile(r"(\d+)\s*([dhms])")
parts = pattern.findall(text.lower())
kwargs:dict[str, int] = {}
for value, unit in parts:
if unit == "d":
kwargs["days"] = kwargs.get("days", 0) + int(value)
elif unit == "h":
kwargs["hours"] = kwargs.get("hours", 0) + int(value)
elif unit == "m":
kwargs["minutes"] = kwargs.get("minutes", 0) + int(value)
elif unit == "s":
kwargs["seconds"] = kwargs.get("seconds", 0) + int(value)
return timedelta(**kwargs)
def format_timedelta(td:timedelta) -> str:
"""
Formats a timedelta into a human-readable string using the pluralize utility.
>>> format_timedelta(timedelta(seconds=90))
'1 minute, 30 seconds'
>>> format_timedelta(timedelta(hours=1))
'1 hour'
>>> format_timedelta(timedelta(days=2, hours=5))
'2 days, 5 hours'
>>> format_timedelta(timedelta(0))
'0 seconds'
"""
days = td.days
seconds = td.seconds
hours, remainder = divmod(seconds, 3600)
minutes, seconds = divmod(remainder, 60)
parts = []
if days:
parts.append(i18n.pluralize("day", days))
if hours:
parts.append(i18n.pluralize("hour", hours))
if minutes:
parts.append(i18n.pluralize("minute", minutes))
if seconds:
parts.append(i18n.pluralize("second", seconds))
return ", ".join(parts) if parts else i18n.pluralize("second", 0)
def sanitize_folder_name(name:str, max_length:int = 100) -> str:
"""
Sanitize a string for use as a folder name using `sanitize-filename`.
- Cross-platform safe (Windows/macOS/Linux)
- Removes invalid characters and Windows reserved names
- Handles path traversal attempts
- Truncates to `max_length`
Args:
name: The input string.
max_length: Maximum length of the resulting folder name (default: 100).
Returns:
A sanitized folder name (falls back to "untitled" when empty).
"""
# Normalize whitespace and handle empty input
raw = (name or "").strip()
if not raw:
return "untitled"
# Apply sanitization, then normalize to NFC
# Note: sanitize-filename converts to NFD, so we must normalize AFTER sanitizing
# to ensure consistent NFC encoding across platforms (macOS HFS+, Linux, Windows)
# This prevents path mismatches when saving files to sanitized directories (issue #728)
safe:str = sanitize(raw)
safe = unicodedata.normalize("NFC", safe)
# Truncate with word-boundary preference
if len(safe) > max_length:
truncated = safe[:max_length]
last_break = max(truncated.rfind(" "), truncated.rfind("_"))
safe = truncated[:last_break] if last_break > int(max_length * 0.7) else truncated
return safe
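
`parse_duration` above accumulates repeated units via an if/elif chain; the same behavior can be sketched more compactly with a unit-name lookup table:

```python
import re
from datetime import timedelta


def parse_duration(text: str) -> timedelta:
    # Find every "<number><unit>" pair and sum per unit, so repeated
    # units accumulate ("5h 5h" -> 10 hours); unmatched text yields 0.
    unit_names = {"d": "days", "h": "hours", "m": "minutes", "s": "seconds"}
    kwargs: dict[str, int] = {}
    for value, unit in re.findall(r"(\d+)\s*([dhms])", text.lower()):
        name = unit_names[unit]
        kwargs[name] = kwargs.get(name, 0) + int(value)
    return timedelta(**kwargs)


print(parse_duration("2d 4h 15m 10s"))  # 2 days, 4:15:10
print(parse_duration("5h 5h"))          # 10:00:00
```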

View File

@@ -0,0 +1,18 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import socket
def is_port_open(host:str, port:int) -> bool:
s:socket.socket | None = None
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(1)
s.connect((host, port))
return True
except Exception:
return False
finally:
if s:
s.close()
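
An equivalent probe can be written with `socket.create_connection`, which handles timeout and cleanup in one call; a sketch exercised against a throwaway listener on an OS-assigned port:

```python
import socket


def is_port_open(host: str, port: int) -> bool:
    # Attempt a TCP connect with a short timeout; success means a listener
    # accepted (or queued) the connection. The with-block closes the socket,
    # and all connect failures surface as OSError subclasses.
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False


# Throwaway listener on an OS-assigned ephemeral port to probe against.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]
    open_while_listening = is_port_open("127.0.0.1", port)

closed_after = is_port_open("127.0.0.1", port)
print(open_while_listening, closed_after)  # True False
```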

View File

@@ -0,0 +1,210 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from gettext import gettext as _
from typing import Any, Literal, cast
from pydantic import BaseModel, ValidationError
from pydantic_core import InitErrorDetails
from typing_extensions import Self
from kleinanzeigen_bot.utils.i18n import pluralize
class ContextualValidationError(ValidationError):
context:Any
class ContextualModel(BaseModel):
@classmethod
def model_validate(
cls,
obj:Any,
*,
strict:bool | None = None,
extra:Literal["allow", "ignore", "forbid"] | None = None,
from_attributes:bool | None = None,
context:Any | None = None,
by_alias:bool | None = None,
by_name:bool | None = None,
) -> Self:
"""
Proxy to BaseModel.model_validate, but on error reraise as
ContextualValidationError including the passed context.
Note: Pydantic v2 does not support call-time `extra=...`; this argument
is accepted for backward-compatibility but ignored.
"""
try:
_ = extra # kept for backward-compatibility; intentionally ignored
return super().model_validate(
obj,
strict = strict,
from_attributes = from_attributes,
context = context,
by_alias = by_alias,
by_name = by_name,
)
except ValidationError as ex:
new_ex = ContextualValidationError.from_exception_data(
title = ex.title,
line_errors = cast(list[InitErrorDetails], ex.errors()),
)
new_ex.context = context
raise new_ex from ex
def format_validation_error(ex:ValidationError) -> str:
"""
Turn a Pydantic ValidationError into the classic:
N validation errors for ModelName
field
message [type=code]
>>> from pydantic import BaseModel, ValidationError
>>> class M(BaseModel): x: int
>>> try:
... M(x="no-int")
... except ValidationError as e:
... print(format_validation_error(e))
1 validation error for [M]:
- x: Input should be a valid integer, unable to parse string as an integer
"""
errors = ex.errors(include_url = False, include_input = False, include_context = True)
ctx = ex.context if isinstance(ex, ContextualValidationError) and ex.context else ex.title
header = _("%s for [%s]:") % (pluralize("validation error", ex.error_count()), ctx)
lines = [header]
for err in errors:
loc = ".".join(str(p) for p in err["loc"])
msg_ctx = err.get("ctx")
code = err["type"]
msg_template = __get_message_template(code)
if msg_template:
msg = _(msg_template).format(**msg_ctx) if msg_ctx else _(msg_template)
msg = msg.replace("' or '", _("' or '"))
lines.append(f"- {loc}: {msg}")
else:
lines.append(f"- {loc}: {err['msg']} [type={code}]")
return "\n".join(lines)
def __get_message_template(error_code:str) -> str | None:
# https://github.com/pydantic/pydantic-core/blob/d03bf4a01ca3b378cc8590bd481f307e82115bc6/src/errors/types.rs#L477
# ruff: noqa: PLR0911 Too many return statements
# ruff: noqa: PLR0912 Too many branches
# ruff: noqa: E701 Multiple statements on one line (colon)
match error_code:
case "no_such_attribute": return _("Object has no attribute '{attribute}'")
case "json_invalid": return _("Invalid JSON: {error}")
case "json_type": return _("JSON input should be string, bytes or bytearray")
case "needs_python_object": return _("Cannot check `{method_name}` when validating from json, use a JsonOrPython validator instead")
case "recursion_loop": return _("Recursion error - cyclic reference detected")
case "missing": return _("Field required")
case "frozen_field": return _("Field is frozen")
case "frozen_instance": return _("Instance is frozen")
case "extra_forbidden": return _("Extra inputs are not permitted")
case "invalid_key": return _("Keys should be strings")
case "get_attribute_error": return _("Error extracting attribute: {error}")
case "model_type": return _("Input should be a valid dictionary or instance of {class_name}")
case "model_attributes_type": return _("Input should be a valid dictionary or object to extract fields from")
case "dataclass_type": return _("Input should be a dictionary or an instance of {class_name}")
case "dataclass_exact_type": return _("Input should be an instance of {class_name}")
case "none_required": return _("Input should be None")
case "greater_than": return _("Input should be greater than {gt}")
case "greater_than_equal": return _("Input should be greater than or equal to {ge}")
case "less_than": return _("Input should be less than {lt}")
case "less_than_equal": return _("Input should be less than or equal to {le}")
case "multiple_of": return _("Input should be a multiple of {multiple_of}")
case "finite_number": return _("Input should be a finite number")
case "too_short": return _("{field_type} should have at least {min_length} item{expected_plural} after validation, not {actual_length}")
case "too_long": return _("{field_type} should have at most {max_length} item{expected_plural} after validation, not {actual_length}")
case "iterable_type": return _("Input should be iterable")
case "iteration_error": return _("Error iterating over object, error: {error}")
case "string_type": return _("Input should be a valid string")
case "string_sub_type": return _("Input should be a string, not an instance of a subclass of str")
case "string_unicode": return _("Input should be a valid string, unable to parse raw data as a unicode string")
case "string_too_short": return _("String should have at least {min_length} character{expected_plural}")
case "string_too_long": return _("String should have at most {max_length} character{expected_plural}")
case "string_pattern_mismatch": return _("String should match pattern '{pattern}'")
case "enum": return _("Input should be {expected}")
case "dict_type": return _("Input should be a valid dictionary")
case "mapping_type": return _("Input should be a valid mapping, error: {error}")
case "list_type": return _("Input should be a valid list")
case "tuple_type": return _("Input should be a valid tuple")
case "set_type": return _("Input should be a valid set")
case "set_item_not_hashable": return _("Set items should be hashable")
case "bool_type": return _("Input should be a valid boolean")
case "bool_parsing": return _("Input should be a valid boolean, unable to interpret input")
case "int_type": return _("Input should be a valid integer")
case "int_parsing": return _("Input should be a valid integer, unable to parse string as an integer")
case "int_from_float": return _("Input should be a valid integer, got a number with a fractional part")
case "int_parsing_size": return _("Unable to parse input string as an integer, exceeded maximum size")
case "float_type": return _("Input should be a valid number")
case "float_parsing": return _("Input should be a valid number, unable to parse string as a number")
case "bytes_type": return _("Input should be a valid bytes")
case "bytes_too_short": return _("Data should have at least {min_length} byte{expected_plural}")
case "bytes_too_long": return _("Data should have at most {max_length} byte{expected_plural}")
case "bytes_invalid_encoding": return _("Data should be valid {encoding}: {encoding_error}")
case "value_error": return _("Value error, {error}")
case "assertion_error": return _("Assertion failed, {error}")
case "custom_error": return None # handled separately
case "literal_error": return _("Input should be {expected}")
case "date_type": return _("Input should be a valid date")
case "date_parsing": return _("Input should be a valid date in the format YYYY-MM-DD, {error}")
case "date_from_datetime_parsing": return _("Input should be a valid date or datetime, {error}")
case "date_from_datetime_inexact": return _("Datetimes provided to dates should have zero time - e.g. be exact dates")
case "date_past": return _("Date should be in the past")
case "date_future": return _("Date should be in the future")
case "time_type": return _("Input should be a valid time")
case "time_parsing": return _("Input should be in a valid time format, {error}")
case "datetime_type": return _("Input should be a valid datetime")
case "datetime_parsing": return _("Input should be a valid datetime, {error}")
case "datetime_object_invalid": return _("Invalid datetime object, got {error}")
case "datetime_from_date_parsing": return _("Input should be a valid datetime or date, {error}")
case "datetime_past": return _("Input should be in the past")
case "datetime_future": return _("Input should be in the future")
case "timezone_naive": return _("Input should not have timezone info")
case "timezone_aware": return _("Input should have timezone info")
case "timezone_offset": return _("Timezone offset of {tz_expected} required, got {tz_actual}")
case "time_delta_type": return _("Input should be a valid timedelta")
case "time_delta_parsing": return _("Input should be a valid timedelta, {error}")
case "frozen_set_type": return _("Input should be a valid frozenset")
case "is_instance_of": return _("Input should be an instance of {class}")
case "is_subclass_of": return _("Input should be a subclass of {class}")
case "callable_type": return _("Input should be callable")
case "union_tag_invalid": return _("Input tag '{tag}' found using {discriminator} does not match any of the expected tags: {expected_tags}")
case "union_tag_not_found": return _("Unable to extract tag using discriminator {discriminator}")
case "arguments_type": return _("Arguments must be a tuple, list or a dictionary")
case "missing_argument": return _("Missing required argument")
case "unexpected_keyword_argument": return _("Unexpected keyword argument")
case "missing_keyword_only_argument": return _("Missing required keyword only argument")
case "unexpected_positional_argument": return _("Unexpected positional argument")
case "missing_positional_only_argument": return _("Missing required positional only argument")
case "multiple_argument_values": return _("Got multiple values for argument")
case "url_type": return _("URL input should be a string or URL")
case "url_parsing": return _("Input should be a valid URL, {error}")
case "url_syntax_violation": return _("Input violated strict URL syntax rules, {error}")
case "url_too_long": return _("URL should have at most {max_length} character{expected_plural}")
case "url_scheme": return _("URL scheme should be {expected_schemes}")
case "uuid_type": return _("UUID input should be a string, bytes or UUID object")
case "uuid_parsing": return _("Input should be a valid UUID, {error}")
case "uuid_version": return _("UUID version {expected_version} expected")
case "decimal_type": return _("Decimal input should be an integer, float, string or Decimal object")
case "decimal_parsing": return _("Input should be a valid decimal")
case "decimal_max_digits": return _("Decimal input should have no more than {max_digits} digit{expected_plural} in total")
case "decimal_max_places": return _("Decimal input should have no more than {decimal_places} decimal place{expected_plural}")
case "decimal_whole_digits": return _("Decimal input should have no more than {whole_digits} digit{expected_plural} before the decimal point")
case "complex_type":
return _(
"Input should be a valid python complex object, a number, or a valid complex string "
"following the rules at https://docs.python.org/3/library/functions.html#complex"
)
case "complex_str_parsing":
return _(
"Input should be a valid complex string following the rules at "
"https://docs.python.org/3/library/functions.html#complex"
)
case _:
pass
return None


@@ -0,0 +1,29 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import inspect
from typing import Any
def get_caller(depth:int = 1) -> inspect.FrameInfo | None:
stack = inspect.stack()
try:
for frame in stack[depth + 1:]:
if frame.function and frame.function != "<lambda>":
return frame
return None
finally:
# Explicitly delete stack frames to prevent reference cycles and potential memory leaks.
# inspect.stack() returns FrameInfo objects that contain references to frame objects,
# which can create circular references. While Python's GC handles this, explicit cleanup
# is recommended per Python docs: https://docs.python.org/3/library/inspect.html#the-interpreter-stack
# codeql[py/unnecessary-delete]
del stack
def is_integer(obj:Any) -> bool:
try:
int(obj)
return True
except (ValueError, TypeError):
return False
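Note that `is_integer` above accepts anything `int()` can convert, not only integer-valued input: floats pass (they are truncated by `int()`), while non-numeric strings and `None` raise and return `False`. A standalone copy to demonstrate those semantics:

```python
# Standalone copy of the helper above, to illustrate its semantics:
# anything `int()` accepts counts, so floats pass (they are truncated),
# while non-numeric strings and None do not.
from typing import Any

def is_integer(obj: Any) -> bool:
    try:
        int(obj)
        return True
    except (ValueError, TypeError):
        return False

print(is_integer("42"), is_integer(3.9), is_integer("3.9"), is_integer(None))
# True True False False
```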


@@ -0,0 +1,168 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""Collect per-operation timeout timings and persist per-run JSON sessions.
`TimingCollector` records operation durations in seconds, grouped by a single bot run
(`session_id`). Call `record(...)` during runtime and `flush()` once at command end to
append the current session to `timing_data.json` with automatic 30-day retention.
The collector is best-effort and designed for troubleshooting, not strict telemetry.
"""
from __future__ import annotations
import json, uuid # isort: skip
import os
from dataclasses import asdict, dataclass
from datetime import timedelta
from typing import TYPE_CHECKING, Any, Final
if TYPE_CHECKING:
from pathlib import Path
from kleinanzeigen_bot.utils import loggers, misc
LOG:Final[loggers.Logger] = loggers.get_logger(__name__)
RETENTION_DAYS:Final[int] = 30
TIMING_FILE:Final[str] = "timing_data.json"
@dataclass
class TimingRecord:
timestamp:str
operation_key:str
operation_type:str
description:str
configured_timeout_sec:float
effective_timeout_sec:float
actual_duration_sec:float
attempt_index:int
success:bool
def to_dict(self) -> dict[str, Any]:
return asdict(self)
class TimingCollector:
def __init__(self, output_dir:Path, command:str) -> None:
self.output_dir = output_dir.resolve()
self.command = command
self.session_id = uuid.uuid4().hex[:8]
self.started_at = misc.now().isoformat()
self.records:list[TimingRecord] = []
self._flushed = False
LOG.debug("Timing collection initialized (session=%s, output_dir=%s, command=%s)", self.session_id, self.output_dir, command)
def record(
self,
*,
key:str,
operation_type:str,
description:str,
configured_timeout:float,
effective_timeout:float,
actual_duration:float,
attempt_index:int,
success:bool,
) -> None:
self.records.append(
TimingRecord(
timestamp = misc.now().isoformat(),
operation_key = key,
operation_type = operation_type,
description = description,
configured_timeout_sec = configured_timeout,
effective_timeout_sec = effective_timeout,
actual_duration_sec = actual_duration,
attempt_index = attempt_index,
success = success,
)
)
LOG.debug(
"Timing captured: %s [%s] duration=%.3fs timeout=%.3fs success=%s",
operation_type,
key,
actual_duration,
effective_timeout,
success,
)
def flush(self) -> Path | None:
if self._flushed:
LOG.debug("Timing collection already flushed for this run")
return None
if not self.records:
LOG.debug("Timing collection enabled but no records captured in this run")
return None
try:
self.output_dir.mkdir(parents = True, exist_ok = True)
data = self._load_existing_sessions()
data.append(
{
"session_id": self.session_id,
"command": self.command,
"started_at": self.started_at,
"ended_at": misc.now().isoformat(),
"records": [record.to_dict() for record in self.records],
}
)
cutoff = misc.now() - timedelta(days = RETENTION_DAYS)
retained:list[dict[str, Any]] = []
dropped = 0
for session in data:
try:
parsed = misc.parse_datetime(session.get("started_at"), add_timezone_if_missing = True)
except ValueError:
parsed = None
if parsed is None:
dropped += 1
continue
if parsed >= cutoff:
retained.append(session)
else:
dropped += 1
if dropped > 0:
LOG.debug("Timing collection pruned %d old or malformed sessions", dropped)
output_file = self.output_dir / TIMING_FILE
temp_file = self.output_dir / f".{TIMING_FILE}.{self.session_id}.tmp"
with temp_file.open("w", encoding = "utf-8") as fd:
json.dump(retained, fd, indent = 2)
fd.write("\n")
fd.flush()
os.fsync(fd.fileno())
temp_file.replace(output_file)
LOG.debug(
"Timing collection flushed to %s (%d sessions, %d current records, retention=%d days)",
output_file,
len(retained),
len(self.records),
RETENTION_DAYS,
)
self.records = []
self._flushed = True
return output_file
except Exception as exc: # noqa: BLE001
LOG.warning("Failed to flush timing collection data: %s", exc)
return None
def _load_existing_sessions(self) -> list[dict[str, Any]]:
file_path = self.output_dir / TIMING_FILE
if not file_path.exists():
return []
try:
with file_path.open(encoding = "utf-8") as fd:
payload = json.load(fd)
if isinstance(payload, list):
return [item for item in payload if isinstance(item, dict)]
except Exception as exc: # noqa: BLE001
LOG.warning("Unable to load timing collection data from %s: %s", file_path, exc)
return []
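`flush()` above uses the classic atomic-write pattern: write to a temp file in the target directory, `fsync`, then rename over the real file, so a crash mid-write never leaves a truncated `timing_data.json`. A self-contained sketch of just that pattern (the `write_json_atomic` name is illustrative, not project code):

```python
# Self-contained sketch of the atomic-write pattern used by flush():
# write to a temp file in the same directory, fsync, then atomically
# rename over the target so readers never observe a partial file.
import json
import os
import tempfile
from pathlib import Path

def write_json_atomic(target: Path, payload: object) -> None:
    tmp = target.with_suffix(target.suffix + ".tmp")
    with tmp.open("w", encoding="utf-8") as fd:
        json.dump(payload, fd, indent=2)
        fd.write("\n")
        fd.flush()
        os.fsync(fd.fileno())  # ensure bytes hit disk before the rename
    tmp.replace(target)  # atomic on POSIX; replaces an existing target

out = Path(tempfile.mkdtemp()) / "timing_data.json"
write_json_atomic(out, [{"session_id": "demo"}])
print(json.loads(out.read_text(encoding="utf-8")))
# [{'session_id': 'demo'}]
```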

File diff suppressed because it is too large


@@ -0,0 +1,282 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""XDG Base Directory path resolution with workspace abstraction."""
from __future__ import annotations
import sys
from dataclasses import dataclass, replace
from gettext import gettext as _
from pathlib import Path
from typing import Final, Literal
import platformdirs
from kleinanzeigen_bot.utils import loggers
from kleinanzeigen_bot.utils.files import abspath
LOG:Final[loggers.Logger] = loggers.get_logger(__name__)
APP_NAME:Final[str] = "kleinanzeigen-bot"
InstallationMode = Literal["portable", "xdg"]
PathCategory = Literal["config", "cache", "state"]
@dataclass(frozen = True)
class Workspace:
"""Resolved workspace paths for all bot side effects."""
mode:InstallationMode
config_file:Path
config_dir:Path # root directory for mode-dependent artifacts
log_file:Path | None
state_dir:Path
download_dir:Path
browser_profile_dir:Path
diagnostics_dir:Path
@classmethod
def for_config(cls, config_file:Path, log_basename:str) -> Workspace:
"""Build a portable-style workspace rooted at the config parent directory."""
config_file = config_file.resolve()
config_dir = config_file.parent
state_dir = config_dir / ".temp"
return cls(
mode = "portable",
config_file = config_file,
config_dir = config_dir,
log_file = config_dir / f"{log_basename}.log",
state_dir = state_dir,
download_dir = config_dir / "downloaded-ads",
browser_profile_dir = state_dir / "browser-profile",
diagnostics_dir = state_dir / "diagnostics",
)
def ensure_directory(path:Path, description:str) -> None:
"""Create directory and verify it exists."""
LOG.debug("Creating directory: %s", path)
try:
path.mkdir(parents = True, exist_ok = True)
except OSError as exc:
LOG.error("Failed to create %s %s: %s", description, path, exc)
raise
if not path.is_dir():
raise NotADirectoryError(str(path))
def _build_xdg_workspace(log_basename:str, config_file_override:Path | None = None) -> Workspace:
"""Build an XDG-style workspace using standard user directories."""
config_dir = get_xdg_base_dir("config").resolve()
state_dir = get_xdg_base_dir("state").resolve()
config_file = config_file_override.resolve() if config_file_override is not None else config_dir / "config.yaml"
return Workspace(
mode = "xdg",
config_file = config_file,
config_dir = config_dir,
log_file = state_dir / f"{log_basename}.log",
state_dir = state_dir,
download_dir = config_dir / "downloaded-ads",
browser_profile_dir = (get_xdg_base_dir("cache") / "browser-profile").resolve(),
diagnostics_dir = (get_xdg_base_dir("cache") / "diagnostics").resolve(),
)
def get_xdg_base_dir(category:PathCategory) -> Path:
"""Get XDG base directory for the given category."""
resolved:str | None = None
match category:
case "config":
resolved = platformdirs.user_config_dir(APP_NAME)
case "cache":
resolved = platformdirs.user_cache_dir(APP_NAME)
case "state":
resolved = platformdirs.user_state_dir(APP_NAME)
case _:
raise ValueError(f"Unsupported XDG category: {category}")
if resolved is None:
raise RuntimeError(f"Failed to resolve XDG base directory for category: {category}")
base_dir = Path(resolved)
LOG.debug("XDG %s directory: %s", category, base_dir)
return base_dir
def detect_installation_mode() -> Literal["portable", "xdg"] | None:
"""Detect installation mode based on config file location."""
portable_config = Path.cwd() / "config.yaml"
LOG.debug("Checking for portable config at: %s", portable_config)
if portable_config.exists():
LOG.debug("Detected installation mode: %s", "portable")
return "portable"
xdg_config = get_xdg_base_dir("config") / "config.yaml"
LOG.debug("Checking for XDG config at: %s", xdg_config)
if xdg_config.exists():
LOG.debug("Detected installation mode: %s", "xdg")
return "xdg"
LOG.info("No existing configuration (portable or system-wide) found")
return None
def prompt_installation_mode() -> Literal["portable", "xdg"]:
"""Prompt user to choose installation mode on first run."""
if not sys.stdin or not sys.stdin.isatty():
LOG.info("Non-interactive mode detected, defaulting to portable installation")
return "portable"
portable_ws = Workspace.for_config((Path.cwd() / "config.yaml").resolve(), APP_NAME)
xdg_workspace = _build_xdg_workspace(APP_NAME)
print(_("Choose installation type:"))
print(_("[1] Portable (current directory)"))
print(f" config: {portable_ws.config_file}")
print(f" log: {portable_ws.log_file}")
print(_("[2] User directories (per-user standard locations)"))
print(f" config: {xdg_workspace.config_file}")
print(f" log: {xdg_workspace.log_file}")
while True:
try:
choice = input(_("Enter 1 or 2: ")).strip()
except (EOFError, KeyboardInterrupt):
print()
LOG.info("Defaulting to portable installation mode")
return "portable"
if choice == "1":
mode:Literal["portable", "xdg"] = "portable"
LOG.info("User selected installation mode: %s", mode)
return mode
if choice == "2":
mode = "xdg"
LOG.info("User selected installation mode: %s", mode)
return mode
print(_("Invalid choice. Please enter 1 or 2."))
def _detect_mode_from_footprints_with_hits(
config_file:Path,
) -> tuple[Literal["portable", "xdg", "ambiguous", "unknown"], list[Path], list[Path]]:
"""
Detect workspace mode and return concrete footprint hits for diagnostics.
"""
config_file = config_file.resolve()
cwd_config = (Path.cwd() / "config.yaml").resolve()
xdg_config_dir = get_xdg_base_dir("config").resolve()
xdg_cache_dir = get_xdg_base_dir("cache").resolve()
xdg_state_dir = get_xdg_base_dir("state").resolve()
config_in_xdg_tree = config_file.is_relative_to(xdg_config_dir)
portable_hits:list[Path] = []
xdg_hits:list[Path] = []
if config_file == cwd_config:
portable_hits.append(cwd_config)
if not config_in_xdg_tree:
if (config_file.parent / ".temp").exists():
portable_hits.append((config_file.parent / ".temp").resolve())
if (config_file.parent / "downloaded-ads").exists():
portable_hits.append((config_file.parent / "downloaded-ads").resolve())
if config_in_xdg_tree:
xdg_hits.append(config_file)
if not config_in_xdg_tree and (xdg_config_dir / "config.yaml").exists():
xdg_hits.append((xdg_config_dir / "config.yaml").resolve())
if (xdg_config_dir / "downloaded-ads").exists():
xdg_hits.append((xdg_config_dir / "downloaded-ads").resolve())
if (xdg_cache_dir / "browser-profile").exists():
xdg_hits.append((xdg_cache_dir / "browser-profile").resolve())
if (xdg_cache_dir / "diagnostics").exists():
xdg_hits.append((xdg_cache_dir / "diagnostics").resolve())
if (xdg_state_dir / "update_check_state.json").exists():
xdg_hits.append((xdg_state_dir / "update_check_state.json").resolve())
portable_detected = len(portable_hits) > 0
xdg_detected = len(xdg_hits) > 0
if portable_detected and xdg_detected:
return "ambiguous", portable_hits, xdg_hits
if portable_detected:
return "portable", portable_hits, xdg_hits
if xdg_detected:
return "xdg", portable_hits, xdg_hits
return "unknown", portable_hits, xdg_hits
def _workspace_mode_resolution_error(
config_file:Path,
detected_mode:Literal["ambiguous", "unknown"],
portable_hits:list[Path],
xdg_hits:list[Path],
) -> ValueError:
def _format_hits(label:str, hits:list[Path]) -> str:
if not hits:
return f"{label}: {_('none')}"
deduped = list(dict.fromkeys(hits))
return f"{label}:\n- " + "\n- ".join(str(hit) for hit in deduped)
guidance = _(
"Cannot determine workspace mode for --config=%(config_file)s. "
"Use --workspace-mode=portable or --workspace-mode=xdg.\n"
"For cleanup guidance, see: %(url)s"
) % {
"config_file": config_file,
"url": "https://github.com/Second-Hand-Friends/kleinanzeigen-bot/blob/main/docs/CONFIGURATION.md#installation-modes",
}
details = f"{_format_hits(_('Portable footprint hits'), portable_hits)}\n{_format_hits(_('XDG footprint hits'), xdg_hits)}"
if detected_mode == "ambiguous":
return ValueError(f"{guidance}\n{_('Detected both portable and XDG footprints.')}\n{details}")
return ValueError(f"{guidance}\n{_('Detected neither portable nor XDG footprints.')}\n{details}")
def resolve_workspace(
config_arg:str | None,
logfile_arg:str | None,
*,
workspace_mode:InstallationMode | None,
logfile_explicitly_provided:bool,
log_basename:str,
) -> Workspace:
"""Resolve workspace paths from CLI flags and auto-detected installation mode."""
config_path = Path(abspath(config_arg)).resolve() if config_arg else None
mode = workspace_mode
if config_path and mode is None:
detected_mode, portable_hits, xdg_hits = _detect_mode_from_footprints_with_hits(config_path)
if detected_mode == "portable":
mode = "portable"
elif detected_mode == "xdg":
mode = "xdg"
else:
raise _workspace_mode_resolution_error(
config_path,
detected_mode,
portable_hits,
xdg_hits,
)
if config_arg:
if config_path is None or mode is None:
raise RuntimeError("Workspace mode and config path must be resolved when --config is supplied")
if mode == "portable":
workspace = Workspace.for_config(config_path, log_basename)
else:
workspace = _build_xdg_workspace(log_basename, config_file_override = config_path)
else:
mode = mode or detect_installation_mode()
if mode is None:
mode = prompt_installation_mode()
workspace = Workspace.for_config((Path.cwd() / "config.yaml").resolve(), log_basename) if mode == "portable" else _build_xdg_workspace(log_basename)
if logfile_explicitly_provided:
workspace = replace(workspace, log_file = Path(abspath(logfile_arg)).resolve() if logfile_arg else None)
return workspace
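The core classification test in `_detect_mode_from_footprints_with_hits` above is whether the resolved config file lives inside the XDG config tree, checked with `Path.is_relative_to`. A simplified illustration of just that check (`classify` is a reduced stand-in, not the project's full footprint logic, which also inspects `.temp`, `downloaded-ads`, cache, and state footprints):

```python
# Simplified illustration (not the full project logic): a config file inside
# the XDG config tree is classified as "xdg", otherwise as "portable".
from pathlib import Path

def classify(config_file: Path, xdg_config_dir: Path) -> str:
    if config_file.resolve().is_relative_to(xdg_config_dir.resolve()):
        return "xdg"
    return "portable"

print(classify(Path("/home/u/.config/kleinanzeigen-bot/config.yaml"),
               Path("/home/u/.config/kleinanzeigen-bot")))
```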


@@ -1,14 +1,8 @@
-"""
-SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
-SPDX-License-Identifier: AGPL-3.0-or-later
-SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
-"""
-import logging
-from typing import Final
-
-from kleinanzeigen_bot import utils
-
-utils.configure_console_logging()
-
-LOG:Final[logging.Logger] = logging.getLogger("kleinanzeigen_bot")
-LOG.setLevel(logging.DEBUG)
+# SPDX-FileCopyrightText: © Jens Bergmann and contributors
+# SPDX-License-Identifier: AGPL-3.0-or-later
+# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
+
+# This file makes the tests/ directory a Python package.
+# It is required so that direct imports like 'from tests.conftest import ...' work correctly,
+# and to avoid mypy errors about duplicate module names when using such imports.
+# Pytest does not require this for fixture discovery, but Python and mypy do for package-style imports.

tests/conftest.py Normal file

@@ -0,0 +1,271 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""
Shared test fixtures for the kleinanzeigen-bot test suite.
This module contains fixtures that are used across multiple test files.
Test-specific fixtures should be defined in individual test files or local conftest.py files.
Fixture Organization:
- Core fixtures: Basic test infrastructure (test_data_dir, test_bot_config, test_bot)
- Mock fixtures: Mock objects for external dependencies (browser_mock)
- Utility fixtures: Helper fixtures for common test scenarios (log_file_path)
- Smoke test fixtures: Special fixtures for smoke tests (smoke_bot, DummyBrowser, etc.)
- Test data fixtures: Shared test data (description_test_cases)
"""
import os
from collections.abc import Iterator
from typing import Any, Final, cast
from unittest.mock import MagicMock
import pytest
from kleinanzeigen_bot import KleinanzeigenBot
from kleinanzeigen_bot.model.ad_model import Ad
from kleinanzeigen_bot.model.config_model import Config
from kleinanzeigen_bot.utils import i18n, loggers
from kleinanzeigen_bot.utils.web_scraping_mixin import Browser
loggers.configure_console_logging()
LOG:Final[loggers.Logger] = loggers.get_logger("kleinanzeigen_bot")
LOG.setLevel(loggers.DEBUG)
# ============================================================================
# Core Fixtures - Basic test infrastructure
# ============================================================================
@pytest.fixture
def test_data_dir(tmp_path:os.PathLike[str]) -> str:
"""Provides a temporary directory for test data.
This fixture uses pytest's built-in tmp_path fixture to create a temporary
directory that is automatically cleaned up after each test.
"""
return str(tmp_path)
@pytest.fixture
def test_bot_config() -> Config:
"""Provides a basic sample configuration for testing.
This configuration includes all required fields for the bot to function:
- Login credentials (username/password)
- Publishing settings
"""
return Config.model_validate({
"ad_defaults": {
"contact": {
"name": "dummy_name",
"zipcode": "12345"
},
},
"login": {
"username": "dummy_user",
"password": "dummy_password"
},
"publishing": {
"delete_old_ads": "BEFORE_PUBLISH",
"delete_old_ads_by_title": False
}
})
@pytest.fixture
def test_bot(test_bot_config:Config) -> KleinanzeigenBot:
"""Provides a fresh KleinanzeigenBot instance for all test methods.
Dependencies:
- test_bot_config: Used to initialize the bot with a valid configuration
"""
bot_instance = KleinanzeigenBot()
bot_instance.config = test_bot_config
return bot_instance
# ============================================================================
# Mock Fixtures - Mock objects for external dependencies
# ============================================================================
@pytest.fixture
def browser_mock() -> MagicMock:
"""Provides a mock browser instance for testing.
This mock is configured with the Browser spec to ensure it has all
the required methods and attributes of a real Browser instance.
"""
return MagicMock(spec = Browser)
# ============================================================================
# Utility Fixtures - Helper fixtures for common test scenarios
# ============================================================================
@pytest.fixture
def log_file_path(test_data_dir:str) -> str:
"""Provides a temporary path for log files.
Dependencies:
- test_data_dir: Used to create the log file in the temporary test directory
"""
return os.path.join(str(test_data_dir), "test.log")
# ============================================================================
# Test Data Fixtures - Shared test data
# ============================================================================
@pytest.fixture
def description_test_cases() -> list[tuple[dict[str, Any], str, str]]:
"""Provides test cases for description prefix/suffix handling.
Returns tuples of (config, raw_description, expected_description)
Used by test_init.py and test_extract.py for testing description processing.
"""
return [
# Test case 1: New flattened format
(
{
"ad_defaults": {
"description_prefix": "Global Prefix\n",
"description_suffix": "\nGlobal Suffix"
}
},
"Original Description", # Raw description without affixes
"Global Prefix\nOriginal Description\nGlobal Suffix" # Expected with affixes
),
# Test case 2: Legacy nested format
(
{
"ad_defaults": {
"description": {
"prefix": "Legacy Prefix\n",
"suffix": "\nLegacy Suffix"
}
}
},
"Original Description",
"Legacy Prefix\nOriginal Description\nLegacy Suffix"
),
# Test case 3: Both formats - new format takes precedence
(
{
"ad_defaults": {
"description_prefix": "New Prefix\n",
"description_suffix": "\nNew Suffix",
"description": {
"prefix": "Legacy Prefix\n",
"suffix": "\nLegacy Suffix"
}
}
},
"Original Description",
"New Prefix\nOriginal Description\nNew Suffix"
),
# Test case 4: Empty config
(
{"ad_defaults": {}},
"Original Description",
"Original Description"
),
# Test case 5: None values in config
(
{
"ad_defaults": {
"description_prefix": None,
"description_suffix": None,
"description": {
"prefix": None,
"suffix": None
}
}
},
"Original Description",
"Original Description"
),
]
# ============================================================================
# Global Setup Fixtures - Applied automatically to all tests
# ============================================================================
@pytest.fixture(autouse = True)
def silence_nodriver_logs() -> None:
"""Silence nodriver logs during testing to reduce noise."""
loggers.get_logger("nodriver").setLevel(loggers.WARNING)
@pytest.fixture(autouse = True)
def force_english_locale() -> Iterator[None]:
"""Ensure tests run with a deterministic English locale."""
previous_locale = i18n.get_current_locale()
i18n.set_current_locale(i18n.Locale("en", "US", "UTF-8"))
yield
i18n.set_current_locale(previous_locale)
# ============================================================================
# Smoke Test Fixtures - Special fixtures for smoke tests
# ============================================================================
class DummyBrowser:
def __init__(self) -> None:
self.page = DummyPage()
self._process_pid = None # Use None to indicate no real process
def stop(self) -> None:
pass # Dummy method to satisfy close_browser_session
class DummyPage:
def find_element(self, selector:str) -> "DummyElement":
return DummyElement()
class DummyElement:
def click(self) -> None:
pass
def type(self, text:str) -> None:
pass
class SmokeKleinanzeigenBot(KleinanzeigenBot):
"""A test subclass that overrides async methods for smoke testing."""
def __init__(self) -> None:
super().__init__()
# Use cast to satisfy type checker for browser attribute
self.browser = cast(Browser, DummyBrowser())
def close_browser_session(self) -> None:
# Override to avoid psutil.Process logic in tests
self.page = None # pyright: ignore[reportAttributeAccessIssue]
if self.browser:
self.browser.stop()
self.browser = None # pyright: ignore[reportAttributeAccessIssue]
async def login(self) -> None:
return None
async def publish_ads(self, ad_cfgs:list[tuple[str, Ad, dict[str, Any]]]) -> None:
return None
def load_ads(self, *, ignore_inactive:bool = True, exclude_ads_with_id:bool = True) -> list[tuple[str, Ad, dict[str, Any]]]:
# Use cast to satisfy type checker for dummy Ad value
return [("dummy_file", cast(Ad, None), {})]
def load_config(self) -> None:
return None
@pytest.fixture
def smoke_bot() -> SmokeKleinanzeigenBot:
"""Fixture providing a ready-to-use smoke test bot instance."""
bot = SmokeKleinanzeigenBot()
bot.command = "publish"
return bot
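The `description_test_cases` fixture above encodes a precedence rule: the flattened `description_prefix`/`description_suffix` keys win over the legacy nested `description.prefix`/`description.suffix` keys, and missing or `None` values contribute nothing. A hypothetical helper (not project code) that makes the expected behavior concrete:

```python
# Hypothetical helper (not project code) showing the precedence the
# description_test_cases fixture encodes: flattened keys win over the
# legacy nested keys, and missing/None values fall back to "".
from typing import Any

def apply_affixes(ad_defaults: dict[str, Any], description: str) -> str:
    legacy = ad_defaults.get("description") or {}
    prefix = ad_defaults.get("description_prefix") or legacy.get("prefix") or ""
    suffix = ad_defaults.get("description_suffix") or legacy.get("suffix") or ""
    return f"{prefix}{description}{suffix}"

print(apply_affixes(
    {"description_prefix": "New Prefix\n",
     "description": {"prefix": "Legacy Prefix\n"}},
    "Original Description"))
# New Prefix
# Original Description
```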

tests/fixtures/belen_conf_sample.json vendored Normal file

@@ -0,0 +1,128 @@
{
"jsBaseUrl": "https://static.kleinanzeigen.de/static/js",
"isBrowse": "false",
"isProd": true,
"initTime": 1704067200000,
"universalAnalyticsOpts": {
"account": "UA-24356365-9",
"domain": "kleinanzeigen.de",
"userId": "dummy_user_id_1234567890abcdef12",
"dimensions": {
"dimension1": "MyAds",
"dimension2": "",
"dimension3": "",
"dimension6": "",
"dimension7": "",
"dimension8": "",
"dimension9": "",
"dimension10": "",
"dimension11": "",
"dimension12": "",
"dimension13": "",
"dimension15": "de_DE",
"dimension20": "dummy_user_id_1234567890abcdefgh",
"dimension21": "dummy_encrypted_token_abcdef1234567890/1234567890abcdefgh+ijkl=lmnopqrstuvwxyz01234567==",
"dimension23": "true",
"dimension24": "private",
"dimension25": "0031_A|0042_A|0021_A|0030_A|0006_B|0028_A|0029_B|0007_C|0037_B|0026_B|0004_A|0005_A|0002_B|0036_B|0058_A|0003_B|0011_R|0022_B|0044_B|0012_B|0023_A|60_A|0008_B",
"dimension28": "distribution_test-c;yo_s-A;liberty-experimental-DEFAULT;liberty-experimental-2-DEFAULT;Lib_E;",
"dimension50": "(NULL)",
"dimension53": "",
"dimension90": "",
"dimension91": "",
"dimension94": "",
"dimension95": "",
"dimension96": "",
"dimension97": "",
"dimension121": "registered",
"dimension125": "distribution_test-c",
"dimension128": "yo_s-A",
"dimension130": "liberty-experimental-DEFAULT",
"dimension131": "liberty-experimental-2-DEFAULT",
"dimension135": "Lib_E",
"dimension136": "PRIVATE"
},
"extraDimensions": {
"dimension73": "1"
},
"sendPageView": true
},
"tnsPhoneVerificationBundleUrl": "https://www.kleinanzeigen.de/bffstatic/tns-phone-verification-web/tns-phone-verification-web-bundle.js",
"labs": {
"activeExperiments": {
"BLN-25381-ka-offboarding": "B",
"BLN-23248_BuyNow_SB": "B",
"BLN-22726_buyer_banner": "B",
"BLN-25958-greensunday": "A",
"EKTP-2111-page-extraction": "B",
"KARE-1015-Cont-Highlights": "B",
"FLPRO-130-churn-reason": "B",
"EKMO-100_reorder_postad": "B",
"BLN-27366_mortgage_sim": "A",
"KLUE-274-financing": "B",
"lws-aws-traffic": "B",
"SPEX-1052-ads-feedback": "B",
"BLN-24652_category_alert": "B",
"FLPRO-753-motors-fee": "B",
"BLN-21783_testingtime": "B",
"EBAYKAD-2252_group-assign": "A",
"liberty-experiment-style": "A",
"PRO-leads-feedback": "A",
"SPEX-1077-adfree-sub": "D",
"BLN-26740_enable_drafts": "B",
"ka-follower-network": "B",
"EKPAY-3287-counter-offer": "B",
"PLC-189_plc-migration": "A",
"EKMO-271_mweb": "A",
"audex-libertyjs-update": "A",
"performance-test-desktop": "B",
"BLN-26541-radius_feature": "A",
"EKPAY-3409-hermes-heavy": "A",
"SPEX-1077-adfree-sub-tech": "B",
"EKMO-243_MyAdsC2b_ABC": "C",
"Pro-Business-Hub": "A",
"fp_pla_desktop": "A",
"SPEX-1250_prebid_gpid": "B",
"prebid-update": "A",
"EKPAY-4088-negotiation": "B",
"desktop_payment_badge_SRP": "R",
"BLN-23401_buyNow_in_chat": "B",
"BLN-18532_highlight": "B",
"cmp-equal-choice": "B",
"BLN-27207_checkout_page": "B",
"I2I-homepage-trendsetter": "A",
"ignite_web_better_session": "C",
"EBAYKAD-3536_floor_ai": "B",
"ignite_improve_session": "C",
"EKPAY-3214-NudgeBanner": "A",
"BLN-24684-enc-brndg-data": "A",
"BLN-25794-watchlist-feed": "B",
"PRPL-252_ces_postad": "A",
"BLN-25659-car-financing": "B",
"EKPAY-3370_klarna_hide": "A",
"AUDEX-519_pb_ortb_cfg": "B",
"BLN-26398_stepstone_link": "B",
"BLN-25450_Initial_message": "A",
"cmp-leg-int": "B",
"audex-awr-update": "A",
"BLN-25216-new-user-badges": "B",
"KAD-333_dominant_category": "B",
"EKPAY-4460-kyc-entrypoint": "A",
"BLN-27350_plc_rollback": "B",
"BLN-25556_INIT_MSG_V2": "B",
"KARE-1294_private_label": "B",
"SPEX-1529_adnami-script": "A",
"DESKTOP-promo-switch": "A",
"EKPAY-3478-buyer-dispute": "A",
"FLPRO-693-ad-duplication": "B",
"BLN-27554_lds_kaos_test": "B",
"BLN-26961": "C",
"BIPHONE-9700_buy_now": "B",
"EKPAY-3336-interstial_grp": "A",
"BLN-27261_smava_provider": "A",
"10149_desktop_offboarding": "B",
"SPEX-1504-confiant": "A",
"PLC-104_plc-login": "B"
}
}
}


@@ -0,0 +1,39 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import platform
from typing import cast
import nodriver
import pytest
from kleinanzeigen_bot.utils.misc import ensure
from kleinanzeigen_bot.utils.web_scraping_mixin import WebScrapingMixin
pytestmark = pytest.mark.slow
# Configure logging for integration tests
# The main bot already handles nodriver logging via silence_nodriver_logs fixture
# and pytest handles verbosity with -v flag automatically
async def atest_init() -> None:
web_scraping_mixin = WebScrapingMixin()
if platform.system() == "Linux":
# required for Ubuntu 24.04 or newer
cast(list[str], web_scraping_mixin.browser_config.arguments).append("--no-sandbox")
browser_path = web_scraping_mixin.get_compatible_browser()
ensure(browser_path is not None, "Browser not auto-detected")
web_scraping_mixin.close_browser_session()
try:
await web_scraping_mixin.create_browser_session()
finally:
web_scraping_mixin.close_browser_session()
@pytest.mark.flaky(reruns = 5, reruns_delay = 10)
@pytest.mark.itest
def test_init() -> None:
nodriver.loop().run_until_complete(atest_init()) # type: ignore[attr-defined]


@@ -0,0 +1,276 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""
Minimal smoke tests: post-deployment health checks for kleinanzeigen-bot.
These tests verify that the most essential components are operational.
"""
import contextlib
import io
import json
import logging
import os
import re
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Callable, Mapping
from unittest.mock import patch
import pytest
from ruyaml import YAML
import kleinanzeigen_bot
from kleinanzeigen_bot.model.config_model import Config
from kleinanzeigen_bot.utils.i18n import get_current_locale, set_current_locale
from tests.conftest import SmokeKleinanzeigenBot
pytestmark = pytest.mark.slow
@dataclass(slots = True)
class CLIResult:
returncode:int
stdout:str
stderr:str
def invoke_cli(
args:list[str],
cwd:Path | None = None,
env_overrides:Mapping[str, str] | None = None,
) -> CLIResult:
"""
Run the kleinanzeigen-bot CLI in-process and capture stdout/stderr.
Args:
args: CLI arguments passed to ``kleinanzeigen_bot.main``.
cwd: Optional working directory for this in-process CLI run.
env_overrides: Optional environment variable overrides merged into the
current environment for the run (useful to isolate HOME/XDG paths).
"""
stdout = io.StringIO()
stderr = io.StringIO()
previous_cwd:Path | None = None
previous_locale = get_current_locale()
def capture_register(func:Callable[..., object], *_cb_args:Any, **_cb_kwargs:Any) -> Callable[..., object]:
return func
log_capture = io.StringIO()
log_handler = logging.StreamHandler(log_capture)
log_handler.setLevel(logging.DEBUG)
def build_result(exit_code:object) -> CLIResult:
if exit_code is None:
normalized = 0
elif isinstance(exit_code, int):
normalized = exit_code
else:
normalized = 1
combined_stderr = stderr.getvalue() + log_capture.getvalue()
return CLIResult(normalized, stdout.getvalue(), combined_stderr)
try:
if cwd is not None:
previous_cwd = Path.cwd()
os.chdir(os.fspath(cwd))
logging.getLogger().addHandler(log_handler)
with contextlib.ExitStack() as stack:
stack.enter_context(patch("kleinanzeigen_bot.atexit.register", capture_register))
stack.enter_context(contextlib.redirect_stdout(stdout))
stack.enter_context(contextlib.redirect_stderr(stderr))
effective_env_overrides = env_overrides if env_overrides is not None else _default_smoke_env(cwd)
if effective_env_overrides is not None:
stack.enter_context(patch.dict(os.environ, effective_env_overrides))
try:
kleinanzeigen_bot.main(["kleinanzeigen-bot", *args])
except SystemExit as exc:
return build_result(exc.code)
return build_result(0)
finally:
logging.getLogger().removeHandler(log_handler)
log_handler.close()
if previous_cwd is not None:
os.chdir(previous_cwd)
set_current_locale(previous_locale)
def _xdg_env_overrides(base_path:Path) -> dict[str, str]:
"""Create temporary HOME/XDG environment overrides rooted at the provided base path."""
home = base_path / "home"
xdg_config = base_path / "xdg" / "config"
xdg_state = base_path / "xdg" / "state"
xdg_cache = base_path / "xdg" / "cache"
for path in (home, xdg_config, xdg_state, xdg_cache):
path.mkdir(parents = True, exist_ok = True)
return {
"HOME": os.fspath(home),
"XDG_CONFIG_HOME": os.fspath(xdg_config),
"XDG_STATE_HOME": os.fspath(xdg_state),
"XDG_CACHE_HOME": os.fspath(xdg_cache),
}
def _default_smoke_env(cwd:Path | None) -> dict[str, str] | None:
"""Isolate HOME/XDG paths to temporary directories during smoke CLI calls."""
if cwd is None:
return None
return _xdg_env_overrides(cwd)
@pytest.fixture(autouse = True)
def disable_update_checker(monkeypatch:pytest.MonkeyPatch) -> None:
"""Prevent smoke tests from hitting GitHub for update checks."""
def _no_update(*_args:object, **_kwargs:object) -> None:
return None
monkeypatch.setattr("kleinanzeigen_bot.update_checker.UpdateChecker.check_for_updates", _no_update)
@pytest.mark.smoke
def test_app_starts(smoke_bot:SmokeKleinanzeigenBot) -> None:
"""Smoke: Bot can be instantiated and started without error."""
assert smoke_bot is not None
    # Verify a minimal entry point exists without invoking it
    assert hasattr(smoke_bot, "run") or hasattr(smoke_bot, "login")
@pytest.mark.smoke
@pytest.mark.parametrize("subcommand", [
"--help",
"help",
"version",
"diagnose",
])
def test_cli_subcommands_no_config(subcommand:str, tmp_path:Path) -> None:
"""
Smoke: CLI subcommands that do not require a config file (--help, help, version, diagnose).
"""
args = [subcommand]
result = invoke_cli(args, cwd = tmp_path)
assert result.returncode == 0
out = (result.stdout + "\n" + result.stderr).lower()
if subcommand in {"--help", "help"}:
assert "usage" in out or "help" in out, f"Expected help text in CLI output.\n{out}"
elif subcommand == "version":
assert re.match(r"^\s*\d{4}\+\w+", result.stdout.strip()), f"Output does not look like a version string: {result.stdout}"
elif subcommand == "diagnose":
assert "browser connection diagnostics" in out or "browser-verbindungsdiagnose" in out, f"Expected diagnostic output.\n{out}"
@pytest.mark.smoke
def test_cli_subcommands_create_config_creates_file(tmp_path:Path) -> None:
"""
Smoke: CLI 'create-config' creates a config.yaml file in the current directory.
"""
result = invoke_cli(["create-config"], cwd = tmp_path)
config_file = tmp_path / "config.yaml"
assert result.returncode == 0
assert config_file.exists(), "config.yaml was not created by create-config command"
out = (result.stdout + "\n" + result.stderr).lower()
assert "saving" in out, f"Expected saving message in CLI output.\n{out}"
assert "config.yaml" in out, f"Expected config.yaml in CLI output.\n{out}"
@pytest.mark.smoke
def test_cli_subcommands_create_config_fails_if_exists(tmp_path:Path) -> None:
"""
Smoke: CLI 'create-config' does not overwrite config.yaml if it already exists.
"""
config_file = tmp_path / "config.yaml"
config_file.write_text("# dummy config\n", encoding = "utf-8")
result = invoke_cli(["create-config"], cwd = tmp_path)
assert result.returncode == 0
assert config_file.exists(), "config.yaml was deleted or not present after second create-config run"
out = (result.stdout + "\n" + result.stderr).lower()
assert (
"already exists" in out or "not overwritten" in out or "saving" in out
), f"Expected message about existing config in CLI output.\n{out}"
@pytest.mark.smoke
@pytest.mark.parametrize(("subcommand", "output_check"), [
("verify", "verify"),
("update-check", "update"),
("update-content-hash", "update-content-hash"),
("diagnose", "diagnose"),
])
@pytest.mark.parametrize(("config_ext", "serializer"), [
("yaml", None),
("yml", None),
("json", json.dumps),
])
def test_cli_subcommands_with_config_formats(
subcommand:str,
output_check:str,
config_ext:str,
serializer:Callable[[dict[str, object]], str] | None,
tmp_path:Path,
test_bot_config:Config,
) -> None:
"""
Smoke: CLI subcommands that require a config file, tested with all supported formats.
"""
config_path = tmp_path / f"config.{config_ext}"
try:
config_dict = test_bot_config.model_dump()
except AttributeError:
config_dict = test_bot_config.dict()
if config_ext in {"yaml", "yml"}:
yaml = YAML(typ = "unsafe", pure = True)
with open(config_path, "w", encoding = "utf-8") as f:
yaml.dump(config_dict, f)
elif serializer is not None:
config_path.write_text(serializer(config_dict), encoding = "utf-8")
args = [subcommand, "--config", str(config_path), "--workspace-mode", "portable"]
result = invoke_cli(args, cwd = tmp_path)
assert result.returncode == 0
out = (result.stdout + "\n" + result.stderr).lower()
if subcommand == "verify":
assert "no configuration errors found" in out, f"Expected 'no configuration errors found' in output for 'verify'.\n{out}"
elif subcommand == "update-content-hash":
assert "no active ads found" in out, f"Expected 'no active ads found' in output for 'update-content-hash'.\n{out}"
elif subcommand == "update-check":
assert result.returncode == 0
elif subcommand == "diagnose":
assert "browser connection diagnostics" in out or "browser-verbindungsdiagnose" in out, f"Expected diagnostic output for 'diagnose'.\n{out}"
@pytest.mark.smoke
def test_verify_shows_auto_price_reduction_decisions(tmp_path:Path, test_bot_config:Config) -> None:
"""Smoke: verify command previews auto price reduction decisions for all configured ads."""
config_dict = test_bot_config.model_dump()
config_dict["ad_files"] = ["./**/ad_*.yaml"]
config_path = tmp_path / "config.yaml"
yaml = YAML(typ = "unsafe", pure = True)
with open(config_path, "w", encoding = "utf-8") as f:
yaml.dump(config_dict, f)
ad_dir = tmp_path / "ads"
ad_dir.mkdir()
ad_yaml = ad_dir / "ad_test_pricing.yaml"
ad_yaml.write_text(
"title: Test Auto Pricing Ad\n"
"description: A test ad to verify auto price reduction preview\n"
"category: 161/gezielt\n"
"price: 200\n"
"price_type: FIXED\n"
"repost_count: 3\n"
"auto_price_reduction:\n"
" enabled: true\n"
" strategy: PERCENTAGE\n"
" amount: 10\n"
" min_price: 100\n"
" delay_reposts: 0\n"
" delay_days: 0\n",
encoding = "utf-8",
)
args = ["verify", "--config", str(config_path), "--workspace-mode", "portable"]
result = invoke_cli(args, cwd = tmp_path)
assert result.returncode == 0
out = (result.stdout + "\n" + result.stderr).lower()
assert "no configuration errors found" in out, f"Expected 'no configuration errors found' in output.\n{out}"
assert "auto price reduction applied" in out, f"Expected auto price reduction applied log in output.\n{out}"


@@ -1,22 +0,0 @@
"""
SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
SPDX-License-Identifier: AGPL-3.0-or-later
SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""
import pytest
from kleinanzeigen_bot.selenium_mixin import SeleniumMixin
from kleinanzeigen_bot import utils
@pytest.mark.itest
def test_webdriver_auto_init():
selenium_mixin = SeleniumMixin()
selenium_mixin.browser_config.arguments = ["--no-sandbox"]
browser_path = selenium_mixin.get_compatible_browser()
utils.ensure(browser_path is not None, "Browser not auto-detected")
selenium_mixin.webdriver = None
selenium_mixin.create_webdriver_session()
selenium_mixin.webdriver.quit()


@@ -1,41 +0,0 @@
"""
SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
SPDX-License-Identifier: AGPL-3.0-or-later
SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""
import os, sys, time
import pytest
from kleinanzeigen_bot import utils
def test_ensure():
utils.ensure(True, "TRUE")
utils.ensure("Some Value", "TRUE")
utils.ensure(123, "TRUE")
utils.ensure(-123, "TRUE")
utils.ensure(lambda: True, "TRUE")
with pytest.raises(AssertionError):
utils.ensure(False, "FALSE")
with pytest.raises(AssertionError):
utils.ensure(0, "FALSE")
with pytest.raises(AssertionError):
utils.ensure("", "FALSE")
with pytest.raises(AssertionError):
utils.ensure(None, "FALSE")
with pytest.raises(AssertionError):
utils.ensure(lambda: False, "FALSE", timeout = 2)
def test_pause():
start = time.time()
utils.pause(100, 100)
elapsed = 1000 * (time.time() - start)
if sys.platform == "darwin" and os.getenv("GITHUB_ACTIONS", "true") == "true":
assert 99 < elapsed < 300
else:
assert 99 < elapsed < 120
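`utils.pause(100, 100)` is expected to sleep roughly 100 ms with some platform slack. A minimal jittered-pause sketch under that assumption (the removed implementation may have differed):

```python
import random
import time

def pause(min_ms: int, max_ms: int) -> None:
    # sleep a random duration between min_ms and max_ms milliseconds
    time.sleep(random.uniform(min_ms, max_ms) / 1000)

start = time.time()
pause(10, 10)
# allow generous slack for scheduler jitter, as the test above does
assert time.time() - start >= 0.005
```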

tests/unit/test_ad_model.py Normal file

@@ -0,0 +1,434 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
import math
import pytest
from kleinanzeigen_bot.model.ad_model import MAX_DESCRIPTION_LENGTH, Ad, AdPartial, ShippingOption, calculate_auto_price
from kleinanzeigen_bot.model.config_model import AdDefaults, AutoPriceReductionConfig
from kleinanzeigen_bot.utils.pydantics import ContextualModel, ContextualValidationError
@pytest.mark.unit
def test_update_content_hash() -> None:
minimal_ad_cfg = {
"id": "123456789",
"title": "Test Ad Title",
"category": "160",
"description": "Test Description",
}
minimal_ad_cfg_hash = "ae3defaccd6b41f379eb8de17263caa1bd306e35e74b11aa03a4738621e96ece"
assert AdPartial.model_validate(minimal_ad_cfg).update_content_hash().content_hash == minimal_ad_cfg_hash
assert AdPartial.model_validate(minimal_ad_cfg | {
"id": "123456789",
"created_on": "2025-05-08T09:34:03",
"updated_on": "2025-05-14T20:43:16",
"content_hash": "5753ead7cf42b0ace5fe658ecb930b3a8f57ef49bd52b7ea2d64b91b2c75517e"
}).update_content_hash().content_hash == minimal_ad_cfg_hash
assert AdPartial.model_validate(minimal_ad_cfg | {
"active": None,
"images": None,
"shipping_options": None,
"special_attributes": None,
"contact": None,
}).update_content_hash().content_hash == minimal_ad_cfg_hash
assert AdPartial.model_validate(minimal_ad_cfg | {
"active": True,
"images": [],
"shipping_options": [],
"special_attributes": {},
"contact": {},
}).update_content_hash().content_hash != minimal_ad_cfg_hash
@pytest.mark.unit
def test_price_reduction_count_does_not_influence_content_hash() -> None:
base_ad_cfg = {
"id": "123456789",
"title": "Test Ad Title",
"category": "160",
"description": "Test Description",
"price_type": "NEGOTIABLE",
}
hash_without_reposts = AdPartial.model_validate(base_ad_cfg | {"price_reduction_count": 0}).update_content_hash().content_hash
hash_with_reposts = AdPartial.model_validate(base_ad_cfg | {"price_reduction_count": 5}).update_content_hash().content_hash
assert hash_without_reposts == hash_with_reposts
@pytest.mark.unit
def test_repost_count_does_not_influence_content_hash() -> None:
base_ad_cfg = {
"id": "123456789",
"title": "Test Ad Title",
"category": "160",
"description": "Test Description",
"price_type": "NEGOTIABLE",
}
hash_without_reposts = AdPartial.model_validate(base_ad_cfg | {"repost_count": 0}).update_content_hash().content_hash
hash_with_reposts = AdPartial.model_validate(base_ad_cfg | {"repost_count": 5}).update_content_hash().content_hash
assert hash_without_reposts == hash_with_reposts
@pytest.mark.unit
def test_shipping_costs() -> None:
minimal_ad_cfg = {
"id": "123456789",
"title": "Test Ad Title",
"category": "160",
"description": "Test Description",
}
def is_close(a:float | None, b:float) -> bool:
return a is not None and math.isclose(a, b, rel_tol = 1e-09, abs_tol = 1e-09)
assert AdPartial.model_validate(minimal_ad_cfg | {"shipping_costs": 0}).shipping_costs == 0
assert is_close(AdPartial.model_validate(minimal_ad_cfg | {"shipping_costs": 0.00}).shipping_costs, 0)
assert is_close(AdPartial.model_validate(minimal_ad_cfg | {"shipping_costs": 0.10}).shipping_costs, 0.10)
assert is_close(AdPartial.model_validate(minimal_ad_cfg | {"shipping_costs": 1.00}).shipping_costs, 1)
assert AdPartial.model_validate(minimal_ad_cfg | {"shipping_costs": ""}).shipping_costs is None
assert AdPartial.model_validate(minimal_ad_cfg | {"shipping_costs": " "}).shipping_costs is None
assert AdPartial.model_validate(minimal_ad_cfg | {"shipping_costs": None}).shipping_costs is None
assert AdPartial.model_validate(minimal_ad_cfg).shipping_costs is None
class ShippingOptionWrapper(ContextualModel):
option:ShippingOption
@pytest.mark.unit
def test_shipping_option_must_not_be_blank() -> None:
with pytest.raises(ContextualValidationError, match = "must be non-empty and non-blank"):
ShippingOptionWrapper.model_validate({"option": " "})
@pytest.mark.unit
def test_description_length_limit() -> None:
cfg = {
"title": "Description Length",
"category": "160",
"description": "x" * (MAX_DESCRIPTION_LENGTH + 1)
}
with pytest.raises(ContextualValidationError, match = f"description length exceeds {MAX_DESCRIPTION_LENGTH} characters"):
AdPartial.model_validate(cfg)
@pytest.fixture
def base_ad_cfg() -> dict[str, object]:
return {
"title": "Test Ad Title",
"category": "160",
"description": "Test Description",
"price_type": "NEGOTIABLE",
"contact": {"name": "Test User", "zipcode": "12345"},
"shipping_type": "PICKUP",
"sell_directly": False,
"type": "OFFER",
"active": True
}
@pytest.fixture
def complete_ad_cfg(base_ad_cfg:dict[str, object]) -> dict[str, object]:
return base_ad_cfg | {
"republication_interval": 7,
"price": 100,
"auto_price_reduction": {
"enabled": True,
"strategy": "FIXED",
"amount": 5,
"min_price": 50,
"delay_reposts": 0,
"delay_days": 0
}
}
class SparseDumpAdPartial(AdPartial):
def model_dump(self, *args:object, **kwargs:object) -> dict[str, object]:
data = super().model_dump(*args, **kwargs) # type: ignore[arg-type]
data.pop("price_reduction_count", None)
data.pop("repost_count", None)
return data
@pytest.mark.unit
def test_auto_reduce_requires_price(base_ad_cfg:dict[str, object]) -> None:
cfg = base_ad_cfg.copy() | {
"auto_price_reduction": {
"enabled": True,
"strategy": "FIXED",
"amount": 5,
"min_price": 50
}
}
with pytest.raises(ContextualValidationError, match = "price must be specified"):
AdPartial.model_validate(cfg).to_ad(AdDefaults())
@pytest.mark.unit
def test_auto_reduce_requires_strategy(base_ad_cfg:dict[str, object]) -> None:
cfg = base_ad_cfg.copy() | {
"price": 100,
"auto_price_reduction": {
"enabled": True,
"min_price": 50
}
}
with pytest.raises(ContextualValidationError, match = "strategy must be specified"):
AdPartial.model_validate(cfg).to_ad(AdDefaults())
@pytest.mark.unit
def test_prepare_ad_model_fills_missing_counters(base_ad_cfg:dict[str, object]) -> None:
cfg = base_ad_cfg.copy() | {
"price": 120,
"shipping_type": "SHIPPING",
"sell_directly": False
}
ad = AdPartial.model_validate(cfg).to_ad(AdDefaults())
assert ad.auto_price_reduction.delay_reposts == 0
assert ad.auto_price_reduction.delay_days == 0
assert ad.price_reduction_count == 0
assert ad.repost_count == 0
@pytest.mark.unit
def test_min_price_must_not_exceed_price(base_ad_cfg:dict[str, object]) -> None:
cfg = base_ad_cfg.copy() | {
"price": 100,
"auto_price_reduction": {
"enabled": True,
"strategy": "FIXED",
"amount": 5,
"min_price": 120
}
}
with pytest.raises(ContextualValidationError, match = "min_price must not exceed price"):
AdPartial.model_validate(cfg)
@pytest.mark.unit
def test_min_price_validation_defers_to_pydantic_for_invalid_types(base_ad_cfg:dict[str, object]) -> None:
# Test that invalid price/min_price types are handled gracefully
# The safe Decimal comparison should catch conversion errors and defer to Pydantic
cfg = base_ad_cfg.copy() | {
"price": "not_a_number",
"auto_price_reduction": {
"enabled": True,
"strategy": "FIXED",
"amount": 5,
"min_price": 100
}
}
# Should raise Pydantic validation error for invalid price type, not our custom validation error
with pytest.raises(ContextualValidationError):
AdPartial.model_validate(cfg)
# Test with invalid min_price type
cfg2 = base_ad_cfg.copy() | {
"price": 100,
"auto_price_reduction": {
"enabled": True,
"strategy": "FIXED",
"amount": 5,
"min_price": "invalid"
}
}
# Should raise Pydantic validation error for invalid min_price type
with pytest.raises(ContextualValidationError):
AdPartial.model_validate(cfg2)
@pytest.mark.unit
def test_auto_reduce_requires_min_price(base_ad_cfg:dict[str, object]) -> None:
cfg = base_ad_cfg.copy() | {
"price": 100,
"auto_price_reduction": {
"enabled": True,
"strategy": "FIXED",
"amount": 5
}
}
with pytest.raises(ContextualValidationError, match = "min_price must be specified"):
AdPartial.model_validate(cfg).to_ad(AdDefaults())
@pytest.mark.unit
def test_to_ad_stabilizes_counters_when_defaults_omit(base_ad_cfg:dict[str, object]) -> None:
cfg = base_ad_cfg.copy() | {
"republication_interval": 7,
"price": 120
}
ad = AdPartial.model_validate(cfg).to_ad(AdDefaults())
assert ad.auto_price_reduction.delay_reposts == 0
assert ad.auto_price_reduction.delay_days == 0
assert ad.price_reduction_count == 0
assert ad.repost_count == 0
@pytest.mark.unit
def test_to_ad_sets_zero_when_counts_missing_from_dump(base_ad_cfg:dict[str, object]) -> None:
cfg = base_ad_cfg.copy() | {
"republication_interval": 7,
"price": 130
}
ad = SparseDumpAdPartial.model_validate(cfg).to_ad(AdDefaults())
assert ad.price_reduction_count == 0
assert ad.repost_count == 0
@pytest.mark.unit
def test_ad_model_auto_reduce_requires_price(complete_ad_cfg:dict[str, object]) -> None:
cfg = complete_ad_cfg.copy() | {"price": None}
with pytest.raises(ContextualValidationError, match = "price must be specified"):
Ad.model_validate(cfg)
@pytest.mark.unit
def test_ad_model_auto_reduce_requires_strategy(complete_ad_cfg:dict[str, object]) -> None:
cfg_copy = complete_ad_cfg.copy()
cfg_copy["auto_price_reduction"] = {
"enabled": True,
"min_price": 50
}
with pytest.raises(ContextualValidationError, match = "strategy must be specified"):
Ad.model_validate(cfg_copy)
@pytest.mark.unit
def test_price_reduction_delay_inherited_from_defaults(complete_ad_cfg:dict[str, object]) -> None:
# When auto_price_reduction is not specified in ad config, it inherits from defaults
cfg = complete_ad_cfg.copy()
cfg.pop("auto_price_reduction", None) # Remove to inherit from defaults
defaults = AdDefaults(
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "FIXED",
amount = 5,
min_price = 50,
delay_reposts = 4,
delay_days = 0
)
)
ad = AdPartial.model_validate(cfg).to_ad(defaults)
assert ad.auto_price_reduction.delay_reposts == 4
@pytest.mark.unit
def test_price_reduction_delay_override_zero(complete_ad_cfg:dict[str, object]) -> None:
cfg = complete_ad_cfg.copy()
# Type-safe way to modify nested dict
cfg["auto_price_reduction"] = {
"enabled": True,
"strategy": "FIXED",
"amount": 5,
"min_price": 50,
"delay_reposts": 0,
"delay_days": 0
}
defaults = AdDefaults(
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "FIXED",
amount = 5,
min_price = 50,
delay_reposts = 4,
delay_days = 0
)
)
ad = AdPartial.model_validate(cfg).to_ad(defaults)
assert ad.auto_price_reduction.delay_reposts == 0
@pytest.mark.unit
def test_ad_model_auto_reduce_requires_min_price(complete_ad_cfg:dict[str, object]) -> None:
cfg_copy = complete_ad_cfg.copy()
cfg_copy["auto_price_reduction"] = {
"enabled": True,
"strategy": "FIXED",
"amount": 5
}
with pytest.raises(ContextualValidationError, match = "min_price must be specified"):
Ad.model_validate(cfg_copy)
@pytest.mark.unit
def test_ad_model_min_price_must_not_exceed_price(complete_ad_cfg:dict[str, object]) -> None:
cfg_copy = complete_ad_cfg.copy()
cfg_copy["price"] = 100
cfg_copy["auto_price_reduction"] = {
"enabled": True,
"strategy": "FIXED",
"amount": 5,
"min_price": 150,
"delay_reposts": 0,
"delay_days": 0
}
with pytest.raises(ContextualValidationError, match = "min_price must not exceed price"):
Ad.model_validate(cfg_copy)
@pytest.mark.unit
def test_calculate_auto_price_with_missing_strategy() -> None:
"""Test calculate_auto_price when strategy is None but enabled is True (defensive check)"""
# Use model_construct to bypass validation and reach defensive lines 234-235
config = AutoPriceReductionConfig.model_construct(
enabled = True, strategy = None, amount = None, min_price = 50
)
result = calculate_auto_price(
base_price = 100,
auto_price_reduction = config,
target_reduction_cycle = 1
)
assert result == 100 # Should return base price when strategy is None
@pytest.mark.unit
def test_calculate_auto_price_with_missing_amount() -> None:
"""Test calculate_auto_price when amount is None but enabled is True (defensive check)"""
# Use model_construct to bypass validation and reach defensive lines 234-235
config = AutoPriceReductionConfig.model_construct(
enabled = True, strategy = "FIXED", amount = None, min_price = 50
)
result = calculate_auto_price(
base_price = 100,
auto_price_reduction = config,
target_reduction_cycle = 1
)
assert result == 100 # Should return base price when amount is None
@pytest.mark.unit
def test_calculate_auto_price_raises_when_min_price_none_and_enabled() -> None:
"""Test that calculate_auto_price raises ValueError when min_price is None during calculation (defensive check)"""
# Use model_construct to bypass validation and reach defensive line 237-238
config = AutoPriceReductionConfig.model_construct(
enabled = True, strategy = "FIXED", amount = 10, min_price = None
)
with pytest.raises(ValueError, match = "min_price must be specified when auto_price_reduction is enabled"):
calculate_auto_price(
base_price = 100,
auto_price_reduction = config,
target_reduction_cycle = 1
)
@pytest.mark.unit
def test_auto_price_reduction_config_requires_amount_when_enabled() -> None:
"""Test AutoPriceReductionConfig validator requires amount when enabled"""
with pytest.raises(ValueError, match = "amount must be specified when auto_price_reduction is enabled"):
AutoPriceReductionConfig(enabled = True, strategy = "FIXED", amount = None, min_price = 50)
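The content-hash tests earlier in this file assert that volatile fields (`id`, timestamps, `content_hash` itself, `repost_count`, `price_reduction_count`) do not affect the hash. A minimal sketch of that technique — drop volatile fields, then hash a canonical serialization; both the field set and the serialization here are assumptions, not the bot's actual implementation:

```python
import hashlib
import json

# assumed exclusion set, mirroring what the tests above treat as volatile
VOLATILE_FIELDS = {"id", "created_on", "updated_on", "content_hash",
                   "price_reduction_count", "repost_count"}

def content_hash(ad: dict) -> str:
    # hash only the stable fields, using sorted-key JSON so that
    # field ordering cannot change the digest
    stable = {k: v for k, v in ad.items() if k not in VOLATILE_FIELDS}
    return hashlib.sha256(json.dumps(stable, sort_keys=True).encode()).hexdigest()

base = {"title": "T", "category": "160", "description": "D"}
assert content_hash(base) == content_hash(base | {"repost_count": 5})
assert content_hash(base) != content_hash(base | {"title": "Other"})
```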

tests/unit/test_bot.py Normal file

@@ -0,0 +1,81 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import gc, pytest # isort: skip
import pathlib
from kleinanzeigen_bot import KleinanzeigenBot
class TestKleinanzeigenBot:
@pytest.fixture
def bot(self) -> KleinanzeigenBot:
return KleinanzeigenBot()
def test_parse_args_help(self, bot:KleinanzeigenBot) -> None:
"""Test parsing of help command"""
bot.parse_args(["app", "help"])
assert bot.command == "help"
assert bot.ads_selector == "due"
assert not bot.keep_old_ads
def test_parse_args_publish(self, bot:KleinanzeigenBot) -> None:
"""Test parsing of publish command with options"""
bot.parse_args(["app", "publish", "--ads=all", "--keep-old"])
assert bot.command == "publish"
assert bot.ads_selector == "all"
assert bot.keep_old_ads
def test_parse_args_create_config(self, bot:KleinanzeigenBot) -> None:
"""Test parsing of create-config command"""
bot.parse_args(["app", "create-config"])
assert bot.command == "create-config"
def test_create_default_config_logs_error_if_exists(self, tmp_path:pathlib.Path, bot:KleinanzeigenBot, caplog:pytest.LogCaptureFixture) -> None:
"""Test that create_default_config logs an error if the config file already exists."""
config_path = tmp_path / "config.yaml"
config_path.write_text("dummy: value")
bot.config_file_path = str(config_path)
with caplog.at_level("ERROR"):
bot.create_default_config()
assert any("already exists" in m for m in caplog.messages)
def test_create_default_config_creates_file(self, tmp_path:pathlib.Path, bot:KleinanzeigenBot) -> None:
"""Test that create_default_config creates a config file if it does not exist."""
config_path = tmp_path / "config.yaml"
bot.config_file_path = str(config_path)
assert not config_path.exists()
bot.create_default_config()
assert config_path.exists()
content = config_path.read_text()
assert "username: changeme" in content
def test_load_config_handles_missing_file(self, tmp_path:pathlib.Path, bot:KleinanzeigenBot) -> None:
"""Test that load_config creates a default config file if missing. No info log is expected anymore."""
config_path = tmp_path / "config.yaml"
bot.config_file_path = str(config_path)
bot.load_config()
assert config_path.exists()
def test_get_version(self, bot:KleinanzeigenBot) -> None:
"""Test version retrieval"""
version = bot.get_version()
assert isinstance(version, str)
assert len(version) > 0
def test_file_log_closed_after_bot_shutdown(self) -> None:
"""Ensure the file log handler is properly closed after the bot is deleted"""
# Directly instantiate the bot to control its lifecycle within the test
bot = KleinanzeigenBot()
bot.configure_file_logging()
file_log = bot.file_log
assert file_log is not None
assert not file_log.is_closed()
# Delete and garbage collect the bot instance to ensure the destructor (__del__) is called
del bot
gc.collect()
assert file_log.is_closed()
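The last test relies on `__del__` plus `gc.collect()` to run the destructor deterministically before asserting the handler is closed. A minimal sketch of that destructor-based cleanup pattern (class names are hypothetical, not the bot's types):

```python
import gc

class FileLog:
    def __init__(self) -> None:
        self.closed = False
    def close(self) -> None:
        self.closed = True

class Bot:
    # destructor closes the log, mirroring the del-bot / gc.collect()
    # sequence exercised by test_file_log_closed_after_bot_shutdown
    def __init__(self) -> None:
        self.file_log = FileLog()
    def __del__(self) -> None:
        self.file_log.close()

bot = Bot()
log = bot.file_log
del bot
gc.collect()  # force collection so __del__ runs before the assertion
assert log.closed
```

Keeping a separate reference to the log before `del bot` is what lets the test observe the closed state after the owner is gone.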


@@ -0,0 +1,404 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import json
import subprocess # noqa: S404
from unittest.mock import Mock, patch
import pytest
from kleinanzeigen_bot.utils.chrome_version_detector import (
ChromeVersionInfo,
detect_chrome_version_from_binary,
detect_chrome_version_from_remote_debugging,
get_chrome_version_diagnostic_info,
parse_version_string,
validate_chrome_136_configuration,
)
class TestParseVersionString:
"""Test version string parsing functionality."""
def test_parse_version_string_basic(self) -> None:
"""Test parsing basic version string."""
version = parse_version_string("136.0.6778.0")
assert version == 136
def test_parse_version_string_with_build_info(self) -> None:
"""Test parsing version string with build information."""
version = parse_version_string("136.0.6778.0 (Developer Build)")
assert version == 136
def test_parse_version_string_with_architecture(self) -> None:
"""Test parsing version string with architecture information."""
version = parse_version_string("136.0.6778.0 (Official Build) (x86_64)")
assert version == 136
def test_parse_version_string_older_version(self) -> None:
"""Test parsing older Chrome version."""
version = parse_version_string("120.0.6099.109")
assert version == 120
def test_parse_version_string_invalid_format(self) -> None:
"""Test parsing invalid version string raises ValueError."""
with pytest.raises(ValueError, match = "Could not parse version string"):
parse_version_string("invalid-version")
def test_parse_version_string_empty(self) -> None:
"""Test parsing empty version string raises ValueError."""
with pytest.raises(ValueError, match = "Could not parse version string"):
parse_version_string("")
class TestChromeVersionInfo:
"""Test ChromeVersionInfo class."""
def test_chrome_version_info_creation(self) -> None:
"""Test creating ChromeVersionInfo instance."""
version_info = ChromeVersionInfo("136.0.6778.0", 136, "Chrome")
assert version_info.version_string == "136.0.6778.0"
assert version_info.major_version == 136
assert version_info.browser_name == "Chrome"
def test_chrome_version_info_is_chrome_136_plus_true(self) -> None:
"""Test is_chrome_136_plus returns True for Chrome 136+."""
version_info = ChromeVersionInfo("136.0.6778.0", 136, "Chrome")
assert version_info.is_chrome_136_plus is True
def test_chrome_version_info_is_chrome_136_plus_false(self) -> None:
"""Test is_chrome_136_plus returns False for Chrome < 136."""
version_info = ChromeVersionInfo("120.0.6099.109", 120, "Chrome")
assert version_info.is_chrome_136_plus is False
def test_chrome_version_info_is_chrome_136_plus_edge_case(self) -> None:
"""Test is_chrome_136_plus edge case for version 136."""
version_info = ChromeVersionInfo("136.0.0.0", 136, "Chrome")
assert version_info.is_chrome_136_plus is True
def test_chrome_version_info_str_representation(self) -> None:
"""Test string representation of ChromeVersionInfo."""
version_info = ChromeVersionInfo("136.0.6778.0", 136, "Chrome")
expected = "Chrome 136.0.6778.0 (major: 136)"
assert str(version_info) == expected
def test_chrome_version_info_edge_browser(self) -> None:
"""Test ChromeVersionInfo with Edge browser."""
version_info = ChromeVersionInfo("136.0.6778.0", 136, "Edge")
assert version_info.browser_name == "Edge"
assert str(version_info) == "Edge 136.0.6778.0 (major: 136)"
class TestDetectChromeVersionFromBinary:
"""Test Chrome version detection from binary."""
@patch("subprocess.run")
def test_detect_chrome_version_from_binary_success(self, mock_run:Mock) -> None:
"""Test successful Chrome version detection from binary."""
mock_result = Mock()
mock_result.returncode = 0
mock_result.stdout = "Google Chrome 136.0.6778.0\n"
mock_run.return_value = mock_result
version_info = detect_chrome_version_from_binary("/path/to/chrome")
assert version_info is not None
assert version_info.version_string == "136.0.6778.0"
assert version_info.major_version == 136
assert version_info.browser_name == "Chrome"
mock_run.assert_called_once_with(
["/path/to/chrome", "--version"],
check = False,
capture_output = True,
text = True,
timeout = 10
)
@patch("subprocess.run")
def test_detect_chrome_version_from_binary_edge(self, mock_run:Mock) -> None:
"""Test Chrome version detection for Edge browser."""
mock_result = Mock()
mock_result.returncode = 0
mock_result.stdout = "Microsoft Edge 136.0.6778.0\n"
mock_run.return_value = mock_result
version_info = detect_chrome_version_from_binary("/path/to/edge")
assert version_info is not None
assert version_info.browser_name == "Edge"
assert version_info.major_version == 136
@patch("subprocess.run")
def test_detect_chrome_version_from_binary_chromium(self, mock_run:Mock) -> None:
"""Test Chrome version detection for Chromium browser."""
mock_result = Mock()
mock_result.returncode = 0
mock_result.stdout = "Chromium 136.0.6778.0\n"
mock_run.return_value = mock_result
version_info = detect_chrome_version_from_binary("/path/to/chromium")
assert version_info is not None
assert version_info.browser_name == "Chromium"
assert version_info.major_version == 136
@patch("subprocess.run")
def test_detect_chrome_version_from_binary_failure(self, mock_run:Mock) -> None:
"""Test Chrome version detection failure."""
mock_result = Mock()
mock_result.returncode = 1
mock_result.stderr = "Command not found"
mock_run.return_value = mock_result
version_info = detect_chrome_version_from_binary("/path/to/chrome")
assert version_info is None
@patch("subprocess.run")
def test_detect_chrome_version_from_binary_timeout(self, mock_run:Mock) -> None:
"""Test Chrome version detection timeout."""
mock_run.side_effect = subprocess.TimeoutExpired("chrome", 10)
version_info = detect_chrome_version_from_binary("/path/to/chrome")
assert version_info is None
@patch("subprocess.run")
def test_detect_chrome_version_from_binary_invalid_output(self, mock_run:Mock) -> None:
"""Test Chrome version detection with invalid output."""
mock_result = Mock()
mock_result.returncode = 0
mock_result.stdout = "Invalid version string"
mock_run.return_value = mock_result
version_info = detect_chrome_version_from_binary("/path/to/chrome")
assert version_info is None
class TestDetectChromeVersionFromRemoteDebugging:
"""Test Chrome version detection from remote debugging API."""
@patch("urllib.request.urlopen")
def test_detect_chrome_version_from_remote_debugging_success(self, mock_urlopen:Mock) -> None:
"""Test successful Chrome version detection from remote debugging."""
mock_response = Mock()
mock_response.read.return_value = json.dumps({
"Browser": "Chrome/136.0.6778.0",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.6778.0 Safari/537.36"
}).encode()
mock_urlopen.return_value = mock_response
version_info = detect_chrome_version_from_remote_debugging("127.0.0.1", 9222)
assert version_info is not None
assert version_info.version_string == "136.0.6778.0"
assert version_info.major_version == 136
assert version_info.browser_name == "Chrome"
mock_urlopen.assert_called_once_with("http://127.0.0.1:9222/json/version", timeout = 5)
@patch("urllib.request.urlopen")
def test_detect_chrome_version_from_remote_debugging_edge(self, mock_urlopen:Mock) -> None:
"""Test Chrome version detection for Edge from remote debugging."""
mock_response = Mock()
mock_response.read.return_value = json.dumps({
"Browser": "Edg/136.0.6778.0",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.6778.0 Safari/537.36 Edg/136.0.6778.0"
}).encode()
mock_urlopen.return_value = mock_response
version_info = detect_chrome_version_from_remote_debugging("127.0.0.1", 9222)
assert version_info is not None
assert version_info.major_version == 136
assert version_info.browser_name == "Edge"
@patch("urllib.request.urlopen")
def test_detect_chrome_version_from_remote_debugging_no_chrome_in_user_agent(self, mock_urlopen:Mock) -> None:
"""Test Chrome version detection with no Chrome in User-Agent."""
mock_response = Mock()
mock_response.read.return_value = json.dumps({
"Browser": "Unknown",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36"
}).encode()
mock_urlopen.return_value = mock_response
version_info = detect_chrome_version_from_remote_debugging("127.0.0.1", 9222)
assert version_info is None
@patch("urllib.request.urlopen")
def test_detect_chrome_version_from_remote_debugging_connection_error(self, mock_urlopen:Mock) -> None:
"""Test Chrome version detection with connection error."""
mock_urlopen.side_effect = Exception("Connection refused")
version_info = detect_chrome_version_from_remote_debugging("127.0.0.1", 9222)
assert version_info is None
@patch("urllib.request.urlopen")
def test_detect_chrome_version_from_remote_debugging_invalid_json(self, mock_urlopen:Mock) -> None:
"""Test Chrome version detection with invalid JSON response."""
mock_response = Mock()
mock_response.read.return_value = b"Invalid JSON"
mock_urlopen.return_value = mock_response
version_info = detect_chrome_version_from_remote_debugging("127.0.0.1", 9222)
assert version_info is None
class TestValidateChrome136Configuration:
"""Test Chrome 136+ configuration validation."""
def test_validate_chrome_136_configuration_no_remote_debugging(self) -> None:
"""Test validation when no remote debugging is configured."""
# Chrome 136+ requires --user-data-dir regardless of remote debugging
is_valid, error_message = validate_chrome_136_configuration([], None)
assert is_valid is False
assert "Chrome/Edge 136+ requires --user-data-dir" in error_message
def test_validate_chrome_136_configuration_with_user_data_dir_arg(self) -> None:
"""Test validation with --user-data-dir in arguments."""
args = ["--remote-debugging-port=9222", "--user-data-dir=/tmp/chrome-debug"]
is_valid, error_message = validate_chrome_136_configuration(args, None)
assert is_valid is True
assert not error_message
def test_validate_chrome_136_configuration_with_user_data_dir_config(self) -> None:
"""Test validation with user_data_dir in configuration."""
args = ["--remote-debugging-port=9222"]
is_valid, error_message = validate_chrome_136_configuration(args, "/tmp/chrome-debug") # noqa: S108
assert is_valid is True
assert not error_message
def test_validate_chrome_136_configuration_with_both(self) -> None:
"""Test validation with both user_data_dir argument and config."""
args = ["--remote-debugging-port=9222", "--user-data-dir=/tmp/chrome-debug"]
is_valid, error_message = validate_chrome_136_configuration(args, "/tmp/chrome-debug") # noqa: S108
assert is_valid is True
assert not error_message
def test_validate_chrome_136_configuration_missing_user_data_dir(self) -> None:
"""Test validation failure when user_data_dir is missing."""
args = ["--remote-debugging-port=9222"]
is_valid, error_message = validate_chrome_136_configuration(args, None)
assert is_valid is False
assert "Chrome/Edge 136+ requires --user-data-dir" in error_message
assert "Add --user-data-dir=/path/to/directory to your browser arguments" in error_message
def test_validate_chrome_136_configuration_empty_user_data_dir_config(self) -> None:
"""Test validation failure when user_data_dir config is empty."""
args = ["--remote-debugging-port=9222"]
is_valid, error_message = validate_chrome_136_configuration(args, "")
assert is_valid is False
assert "Chrome/Edge 136+ requires --user-data-dir" in error_message
def test_validate_chrome_136_configuration_whitespace_user_data_dir_config(self) -> None:
"""Test validation failure when user_data_dir config is whitespace."""
args = ["--remote-debugging-port=9222"]
is_valid, error_message = validate_chrome_136_configuration(args, " ")
assert is_valid is False
assert "Chrome/Edge 136+ requires --user-data-dir" in error_message
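The validation rule exercised above is small enough to sketch in full; this is a hypothetical reconstruction from the test expectations, not the project's actual code:

```python
from __future__ import annotations


# Hypothetical sketch of validate_chrome_136_configuration, inferred from the tests above.
def validate_chrome_136_configuration(browser_args: list[str], user_data_dir: str | None) -> tuple[bool, str]:
    """Chrome/Edge 136+ refuse DevTools remote debugging on the default profile,
    so a dedicated --user-data-dir must come from the browser arguments or the config."""
    has_arg = any(arg.startswith("--user-data-dir") for arg in browser_args)
    has_config = bool(user_data_dir and user_data_dir.strip())  # empty/whitespace config does not count
    if has_arg or has_config:
        return True, ""
    return False, (
        "Chrome/Edge 136+ requires --user-data-dir for remote debugging. "
        "Add --user-data-dir=/path/to/directory to your browser arguments."
    )
```

The empty-string and whitespace-only cases fail because the config value is stripped before being counted as present.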
class TestGetChromeVersionDiagnosticInfo:
"""Test Chrome version diagnostic information gathering."""
@patch("kleinanzeigen_bot.utils.chrome_version_detector.detect_chrome_version_from_binary")
@patch("kleinanzeigen_bot.utils.chrome_version_detector.detect_chrome_version_from_remote_debugging")
def test_get_chrome_version_diagnostic_info_binary_only(
self, mock_remote_detect:Mock, mock_binary_detect:Mock
) -> None:
"""Test diagnostic info with binary detection only."""
mock_binary_detect.return_value = ChromeVersionInfo("136.0.6778.0", 136, "Chrome")
mock_remote_detect.return_value = None
diagnostic_info = get_chrome_version_diagnostic_info(
binary_path = "/path/to/chrome",
remote_port = None
)
assert diagnostic_info["binary_detection"] is not None
assert diagnostic_info["binary_detection"]["major_version"] == 136
assert diagnostic_info["binary_detection"]["is_chrome_136_plus"] is True
assert diagnostic_info["remote_detection"] is None
assert diagnostic_info["chrome_136_plus_detected"] is True
assert len(diagnostic_info["recommendations"]) == 1
@patch("kleinanzeigen_bot.utils.chrome_version_detector.detect_chrome_version_from_binary")
@patch("kleinanzeigen_bot.utils.chrome_version_detector.detect_chrome_version_from_remote_debugging")
def test_get_chrome_version_diagnostic_info_remote_only(
self, mock_remote_detect:Mock, mock_binary_detect:Mock
) -> None:
"""Test diagnostic info with remote detection only."""
mock_binary_detect.return_value = None
mock_remote_detect.return_value = ChromeVersionInfo("120.0.6099.109", 120, "Chrome")
diagnostic_info = get_chrome_version_diagnostic_info(
binary_path = None,
remote_port = 9222
)
assert diagnostic_info["binary_detection"] is None
assert diagnostic_info["remote_detection"] is not None
assert diagnostic_info["remote_detection"]["major_version"] == 120
assert diagnostic_info["remote_detection"]["is_chrome_136_plus"] is False
assert diagnostic_info["chrome_136_plus_detected"] is False
assert len(diagnostic_info["recommendations"]) == 0
@patch("kleinanzeigen_bot.utils.chrome_version_detector.detect_chrome_version_from_binary")
@patch("kleinanzeigen_bot.utils.chrome_version_detector.detect_chrome_version_from_remote_debugging")
def test_get_chrome_version_diagnostic_info_both_detections(
self, mock_remote_detect:Mock, mock_binary_detect:Mock
) -> None:
"""Test diagnostic info with both binary and remote detection."""
mock_binary_detect.return_value = ChromeVersionInfo("136.0.6778.0", 136, "Chrome")
mock_remote_detect.return_value = ChromeVersionInfo("136.0.6778.0", 136, "Chrome")
diagnostic_info = get_chrome_version_diagnostic_info(
binary_path = "/path/to/chrome",
remote_port = 9222
)
assert diagnostic_info["binary_detection"] is not None
assert diagnostic_info["remote_detection"] is not None
assert diagnostic_info["chrome_136_plus_detected"] is True
assert len(diagnostic_info["recommendations"]) == 1
@patch("kleinanzeigen_bot.utils.chrome_version_detector.detect_chrome_version_from_binary")
@patch("kleinanzeigen_bot.utils.chrome_version_detector.detect_chrome_version_from_remote_debugging")
def test_get_chrome_version_diagnostic_info_no_detection(
self, mock_remote_detect:Mock, mock_binary_detect:Mock
) -> None:
"""Test diagnostic info with no detection."""
mock_binary_detect.return_value = None
mock_remote_detect.return_value = None
diagnostic_info = get_chrome_version_diagnostic_info(
binary_path = None,
remote_port = None
)
assert diagnostic_info["binary_detection"] is None
assert diagnostic_info["remote_detection"] is None
assert diagnostic_info["chrome_136_plus_detected"] is False
assert len(diagnostic_info["recommendations"]) == 0
def test_get_chrome_version_diagnostic_info_default_values(self) -> None:
"""Test diagnostic info with default values."""
diagnostic_info = get_chrome_version_diagnostic_info()
assert diagnostic_info["binary_detection"] is None
assert diagnostic_info["remote_detection"] is None
assert diagnostic_info["chrome_136_plus_detected"] is False
assert diagnostic_info["configuration_valid"] is True
assert diagnostic_info["recommendations"] == []
@patch("urllib.request.urlopen")
def test_detect_chrome_version_from_remote_debugging_json_decode_error(
self, mock_urlopen:Mock
) -> None:
"""Test detect_chrome_version_from_remote_debugging handles JSONDecodeError gracefully."""
# Mock urlopen to return invalid JSON
mock_response = Mock()
mock_response.read.return_value = b"invalid json content"
mock_urlopen.return_value = mock_response
# Should return None when JSON decode fails
result = detect_chrome_version_from_remote_debugging("127.0.0.1", 9222)
assert result is None


@@ -0,0 +1,192 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import pytest
from kleinanzeigen_bot.model.config_model import AdDefaults, Config, TimeoutConfig
def test_migrate_legacy_description_prefix() -> None:
assert AdDefaults.model_validate({}).description_prefix == "" # noqa: PLC1901 explicit empty check is clearer
assert AdDefaults.model_validate({"description_prefix": "Prefix"}).description_prefix == "Prefix"
assert AdDefaults.model_validate({"description_prefix": "Prefix", "description": {"prefix": "Legacy Prefix"}}).description_prefix == "Prefix"
assert AdDefaults.model_validate({"description": {"prefix": "Legacy Prefix"}}).description_prefix == "Legacy Prefix"
assert AdDefaults.model_validate({"description_prefix": "", "description": {"prefix": "Legacy Prefix"}}).description_prefix == "Legacy Prefix"
def test_migrate_legacy_description_suffix() -> None:
assert AdDefaults.model_validate({}).description_suffix == "" # noqa: PLC1901 explicit empty check is clearer
assert AdDefaults.model_validate({"description_suffix": "Suffix"}).description_suffix == "Suffix"
assert AdDefaults.model_validate({"description_suffix": "Suffix", "description": {"suffix": "Legacy Suffix"}}).description_suffix == "Suffix"
assert AdDefaults.model_validate({"description": {"suffix": "Legacy Suffix"}}).description_suffix == "Legacy Suffix"
assert AdDefaults.model_validate({"description_suffix": "", "description": {"suffix": "Legacy Suffix"}}).description_suffix == "Legacy Suffix"
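The migration rule these assertions encode can be expressed as a standalone function; this is a hypothetical sketch of the precedence logic (in the real model it runs inside a Pydantic before-validator):

```python
from __future__ import annotations

from typing import Any


def migrate_legacy_description(data: dict[str, Any]) -> dict[str, Any]:
    """Lift legacy description.prefix/suffix into the flat keys.
    The legacy values only win when the flat keys are absent or empty."""
    data = dict(data)
    legacy = data.pop("description", None) or {}
    if not data.get("description_prefix"):
        data["description_prefix"] = legacy.get("prefix", "")
    if not data.get("description_suffix"):
        data["description_suffix"] = legacy.get("suffix", "")
    return data
```

In short: a non-empty new-style key always takes precedence, and an explicitly empty new-style key is treated the same as a missing one.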
def test_minimal_config_validation() -> None:
"""
Unit: Minimal config validation.
"""
minimal_cfg = {
"ad_defaults": {"contact": {"name": "dummy", "zipcode": "12345"}},
"login": {"username": "dummy", "password": "dummy"},
"publishing": {"delete_old_ads": "BEFORE_PUBLISH", "delete_old_ads_by_title": False},
}
config = Config.model_validate(minimal_cfg)
assert config.login.username == "dummy"
assert config.login.password == "dummy" # noqa: S105
def test_timeout_config_defaults_and_effective_values() -> None:
cfg = Config.model_validate(
{
"login": {"username": "dummy", "password": "dummy"}, # noqa: S105
"timeouts": {"multiplier": 2.0, "pagination_initial": 12.0, "retry_max_attempts": 3, "retry_backoff_factor": 2.0},
}
)
timeouts = cfg.timeouts
base = timeouts.resolve("pagination_initial")
multiplier = timeouts.multiplier
backoff = timeouts.retry_backoff_factor
assert base == 12.0
assert timeouts.effective("pagination_initial") == base * multiplier * (backoff**0)
# attempt 1 should apply backoff factor once in addition to multiplier
assert timeouts.effective("pagination_initial", attempt = 1) == base * multiplier * (backoff**1)
def test_validate_glob_pattern_rejects_blank_strings() -> None:
with pytest.raises(ValueError, match = "must be a non-empty, non-blank glob pattern"):
Config.model_validate(
{"ad_files": [" "], "ad_defaults": {"contact": {"name": "dummy", "zipcode": "12345"}}, "login": {"username": "dummy", "password": "dummy"}}
)
cfg = Config.model_validate(
{"ad_files": ["*.yaml"], "ad_defaults": {"contact": {"name": "dummy", "zipcode": "12345"}}, "login": {"username": "dummy", "password": "dummy"}}
)
assert cfg.ad_files == ["*.yaml"]
def test_timeout_config_resolve_returns_specific_value() -> None:
timeouts = TimeoutConfig(default = 4.0, page_load = 12.5)
assert timeouts.resolve("page_load") == 12.5
def test_timeout_config_resolve_falls_back_to_default() -> None:
timeouts = TimeoutConfig(default = 3.0)
assert timeouts.resolve("nonexistent_key") == 3.0
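The arithmetic behind `resolve` and `effective` can be sketched with a plain dataclass; note this is an assumption-laden stand-in (the real TimeoutConfig is a Pydantic model with named timeout fields, whereas the `values` dict here is purely illustrative):

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class TimeoutConfigSketch:
    default: float = 5.0
    multiplier: float = 1.0
    retry_backoff_factor: float = 1.5
    values: dict[str, float] = field(default_factory = dict)  # stand-in for the model's named fields

    def resolve(self, key: str) -> float:
        """Return the specific timeout if configured, else the default."""
        return self.values.get(key, self.default)

    def effective(self, key: str, attempt: int = 0) -> float:
        # base timeout, scaled by the global multiplier and by the
        # backoff factor once per retry attempt
        return self.resolve(key) * self.multiplier * (self.retry_backoff_factor ** attempt)
```

With multiplier 2.0 and backoff 2.0, a 12.0 s base becomes 24.0 s on the first try and 48.0 s on the first retry, matching the assertions above.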
def test_diagnostics_pause_requires_capture_validation() -> None:
"""
Unit: DiagnosticsConfig validator ensures pause_on_login_detection_failure
requires capture_on.login_detection to be enabled.
"""
minimal_cfg = {
"ad_defaults": {"contact": {"name": "dummy", "zipcode": "12345"}},
"login": {"username": "dummy", "password": "dummy"}, # noqa: S105
"publishing": {"delete_old_ads": "BEFORE_PUBLISH", "delete_old_ads_by_title": False},
}
valid_cfg = {**minimal_cfg, "diagnostics": {"capture_on": {"login_detection": True}, "pause_on_login_detection_failure": True}}
config = Config.model_validate(valid_cfg)
assert config.diagnostics is not None
assert config.diagnostics.pause_on_login_detection_failure is True
assert config.diagnostics.capture_on.login_detection is True
invalid_cfg = {**minimal_cfg, "diagnostics": {"capture_on": {"login_detection": False}, "pause_on_login_detection_failure": True}}
with pytest.raises(ValueError, match = "pause_on_login_detection_failure requires capture_on.login_detection to be enabled"):
Config.model_validate(invalid_cfg)
def test_diagnostics_legacy_login_detection_capture_migration_when_capture_on_exists() -> None:
"""
Unit: Test that legacy login_detection_capture is removed but doesn't overwrite explicit capture_on.login_detection.
"""
minimal_cfg = {
"ad_defaults": {"contact": {"name": "dummy", "zipcode": "12345"}},
"login": {"username": "dummy", "password": "dummy"}, # noqa: S105
}
# When capture_on.login_detection is explicitly set to False, legacy True should be ignored
cfg_with_explicit = {
**minimal_cfg,
"diagnostics": {
"login_detection_capture": True, # legacy key
"capture_on": {"login_detection": False}, # explicit new key set to False
},
}
config = Config.model_validate(cfg_with_explicit)
assert config.diagnostics is not None
assert config.diagnostics.capture_on.login_detection is False # explicit value preserved
def test_diagnostics_legacy_publish_error_capture_migration_when_capture_on_exists() -> None:
"""
Unit: Test that legacy publish_error_capture is removed but doesn't overwrite explicit capture_on.publish.
"""
minimal_cfg = {
"ad_defaults": {"contact": {"name": "dummy", "zipcode": "12345"}},
"login": {"username": "dummy", "password": "dummy"}, # noqa: S105
}
# When capture_on.publish is explicitly set to False, legacy True should be ignored
cfg_with_explicit = {
**minimal_cfg,
"diagnostics": {
"publish_error_capture": True, # legacy key
"capture_on": {"publish": False}, # explicit new key set to False
},
}
config = Config.model_validate(cfg_with_explicit)
assert config.diagnostics is not None
assert config.diagnostics.capture_on.publish is False # explicit value preserved
def test_diagnostics_legacy_login_detection_capture_migration_when_capture_on_is_none() -> None:
"""
Unit: Test that legacy login_detection_capture is migrated when capture_on is None.
"""
minimal_cfg = {
"ad_defaults": {"contact": {"name": "dummy", "zipcode": "12345"}},
"login": {"username": "dummy", "password": "dummy"}, # noqa: S105
}
cfg_with_null_capture_on = {
**minimal_cfg,
"diagnostics": {
"login_detection_capture": True, # legacy key
"capture_on": None, # capture_on is explicitly None
},
}
config = Config.model_validate(cfg_with_null_capture_on)
assert config.diagnostics is not None
assert config.diagnostics.capture_on.login_detection is True # legacy value migrated
def test_diagnostics_legacy_publish_error_capture_migration_when_capture_on_is_none() -> None:
"""
Unit: Test that legacy publish_error_capture is migrated when capture_on is None.
"""
minimal_cfg = {
"ad_defaults": {"contact": {"name": "dummy", "zipcode": "12345"}},
"login": {"username": "dummy", "password": "dummy"}, # noqa: S105
}
cfg_with_null_capture_on = {
**minimal_cfg,
"diagnostics": {
"publish_error_capture": True, # legacy key
"capture_on": None, # capture_on is explicitly None
},
}
config = Config.model_validate(cfg_with_null_capture_on)
assert config.diagnostics is not None
assert config.diagnostics.capture_on.publish is True # legacy value migrated
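The four migration tests above pin down one precedence rule; a hypothetical standalone sketch of it (the real logic lives in a DiagnosticsConfig validator) is:

```python
from __future__ import annotations

from typing import Any


def migrate_legacy_diagnostics(diagnostics: dict[str, Any]) -> dict[str, Any]:
    """Remove the legacy flat capture keys; their values only apply when
    capture_on does not set the corresponding flag explicitly."""
    diagnostics = dict(diagnostics)
    legacy_login = diagnostics.pop("login_detection_capture", None)
    legacy_publish = diagnostics.pop("publish_error_capture", None)
    capture_on = dict(diagnostics.get("capture_on") or {})  # treat None like a missing mapping
    if legacy_login is not None:
        capture_on.setdefault("login_detection", legacy_login)
    if legacy_publish is not None:
        capture_on.setdefault("publish", legacy_publish)
    diagnostics["capture_on"] = capture_on
    return diagnostics
```

`setdefault` is what makes an explicit `capture_on` value win over the legacy key while still migrating the legacy value when `capture_on` is absent or None.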


@@ -0,0 +1,224 @@
# SPDX-FileCopyrightText: © 2025 Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock
import pytest
from kleinanzeigen_bot.utils import diagnostics as diagnostics_module
from kleinanzeigen_bot.utils.diagnostics import capture_diagnostics
@pytest.mark.unit
class TestDiagnosticsCapture:
"""Tests for diagnostics capture functionality."""
@pytest.mark.asyncio
async def test_capture_diagnostics_creates_output_dir(self, tmp_path:Path) -> None:
"""Test that capture_diagnostics creates output directory."""
mock_page = AsyncMock()
output_dir = tmp_path / "diagnostics"
_ = await capture_diagnostics(
output_dir = output_dir,
base_prefix = "test",
page = mock_page,
)
# Verify directory was created
assert output_dir.exists()
@pytest.mark.asyncio
async def test_capture_diagnostics_creates_screenshot(self, tmp_path:Path) -> None:
"""Test that capture_diagnostics creates screenshot file."""
mock_page = AsyncMock()
mock_page.save_screenshot = AsyncMock()
output_dir = tmp_path / "diagnostics"
result = await capture_diagnostics(
output_dir = output_dir,
base_prefix = "test",
page = mock_page,
)
# Verify screenshot file was created and page method was called
assert len(result.saved_artifacts) == 1
assert result.saved_artifacts[0].suffix == ".png"
mock_page.save_screenshot.assert_awaited_once()
@pytest.mark.asyncio
async def test_capture_diagnostics_creates_html(self, tmp_path:Path) -> None:
"""Test that capture_diagnostics creates HTML file."""
mock_page = AsyncMock()
mock_page.get_content = AsyncMock(return_value = "<html></html>")
output_dir = tmp_path / "diagnostics"
result = await capture_diagnostics(
output_dir = output_dir,
base_prefix = "test",
page = mock_page,
)
# Verify HTML file was created along with screenshot
assert len(result.saved_artifacts) == 2
assert any(a.suffix == ".html" for a in result.saved_artifacts)
@pytest.mark.asyncio
async def test_capture_diagnostics_creates_json(self, tmp_path:Path) -> None:
"""Test that capture_diagnostics creates JSON file."""
mock_page = AsyncMock()
mock_page.get_content = AsyncMock(return_value = "<html></html>")
output_dir = tmp_path / "diagnostics"
result = await capture_diagnostics(
output_dir = output_dir,
base_prefix = "test",
page = mock_page,
json_payload = {"test": "data"},
)
# Verify JSON file was created along with HTML and screenshot
assert len(result.saved_artifacts) == 3
assert any(a.suffix == ".json" for a in result.saved_artifacts)
@pytest.mark.asyncio
async def test_capture_diagnostics_copies_log_file(self, tmp_path:Path) -> None:
"""Test that capture_diagnostics copies log file when enabled."""
log_file = tmp_path / "test.log"
log_file.write_text("test log content")
output_dir = tmp_path / "diagnostics"
result = await capture_diagnostics(
output_dir = output_dir,
base_prefix = "test",
page = None, # No page to avoid screenshot
log_file_path = str(log_file),
copy_log = True,
)
# Verify log was copied
assert len(result.saved_artifacts) == 1
assert result.saved_artifacts[0].suffix == ".log"
def test_copy_log_sync_returns_false_when_file_not_found(self, tmp_path:Path) -> None:
"""Test _copy_log_sync returns False when log file does not exist."""
non_existent_log = tmp_path / "non_existent.log"
log_path = tmp_path / "output.log"
result = diagnostics_module._copy_log_sync(str(non_existent_log), log_path)
assert result is False
assert not log_path.exists()
@pytest.mark.asyncio
async def test_capture_diagnostics_handles_screenshot_exception(self, tmp_path:Path, caplog:pytest.LogCaptureFixture) -> None:
"""Test that capture_diagnostics handles screenshot capture exceptions gracefully."""
mock_page = AsyncMock()
mock_page.save_screenshot = AsyncMock(side_effect = Exception("Screenshot failed"))
output_dir = tmp_path / "diagnostics"
result = await capture_diagnostics(
output_dir = output_dir,
base_prefix = "test",
page = mock_page,
)
# Verify no artifacts were saved due to exception
assert len(result.saved_artifacts) == 0
assert "Diagnostics screenshot capture failed" in caplog.text
@pytest.mark.asyncio
async def test_capture_diagnostics_handles_json_exception(self, tmp_path:Path, caplog:pytest.LogCaptureFixture, monkeypatch:pytest.MonkeyPatch) -> None:
"""Test that capture_diagnostics handles JSON write exceptions gracefully."""
mock_page = AsyncMock()
mock_page.get_content = AsyncMock(return_value = "<html></html>")
output_dir = tmp_path / "diagnostics"
# Mock _write_json_sync to raise an exception
monkeypatch.setattr(diagnostics_module, "_write_json_sync", MagicMock(side_effect = Exception("JSON write failed")))
result = await capture_diagnostics(
output_dir = output_dir,
base_prefix = "test",
page = mock_page,
json_payload = {"test": "data"},
)
# Verify screenshot and HTML were saved, but JSON failed
assert len(result.saved_artifacts) == 2
assert any(a.suffix == ".png" for a in result.saved_artifacts)
assert any(a.suffix == ".html" for a in result.saved_artifacts)
assert not any(a.suffix == ".json" for a in result.saved_artifacts)
assert "Diagnostics JSON capture failed" in caplog.text
@pytest.mark.asyncio
async def test_capture_diagnostics_handles_log_copy_exception(
self, tmp_path:Path, caplog:pytest.LogCaptureFixture, monkeypatch:pytest.MonkeyPatch
) -> None:
"""Test that capture_diagnostics handles log copy exceptions gracefully."""
# Create a log file
log_file = tmp_path / "test.log"
log_file.write_text("test log content")
output_dir = tmp_path / "diagnostics"
# Mock _copy_log_sync to raise an exception
original_copy_log = diagnostics_module._copy_log_sync
monkeypatch.setattr(diagnostics_module, "_copy_log_sync", MagicMock(side_effect = Exception("Copy failed")))
try:
result = await capture_diagnostics(
output_dir = output_dir,
base_prefix = "test",
page = None,
log_file_path = str(log_file),
copy_log = True,
)
# Verify no artifacts were saved due to exception
assert len(result.saved_artifacts) == 0
assert "Diagnostics log copy failed" in caplog.text
finally:
monkeypatch.setattr(diagnostics_module, "_copy_log_sync", original_copy_log)
@pytest.mark.asyncio
async def test_capture_diagnostics_logs_warning_when_all_captures_fail(
self, tmp_path:Path, caplog:pytest.LogCaptureFixture, monkeypatch:pytest.MonkeyPatch
) -> None:
"""Test warning is logged when capture is requested but all fail."""
mock_page = AsyncMock()
mock_page.save_screenshot = AsyncMock(side_effect = Exception("Screenshot failed"))
mock_page.get_content = AsyncMock(side_effect = Exception("HTML failed"))
# Mock JSON write to also fail
monkeypatch.setattr(diagnostics_module, "_write_json_sync", MagicMock(side_effect = Exception("JSON write failed")))
output_dir = tmp_path / "diagnostics"
result = await capture_diagnostics(
output_dir = output_dir,
base_prefix = "test",
page = mock_page,
json_payload = {"test": "data"},
)
# Verify no artifacts were saved
assert len(result.saved_artifacts) == 0
assert "Diagnostics capture attempted but no artifacts were saved" in caplog.text
@pytest.mark.asyncio
async def test_capture_diagnostics_logs_debug_when_no_capture_requested(self, tmp_path:Path, caplog:pytest.LogCaptureFixture) -> None:
"""Test debug is logged when no diagnostics capture is requested."""
output_dir = tmp_path / "diagnostics"
with caplog.at_level("DEBUG"):
_ = await capture_diagnostics(
output_dir = output_dir,
base_prefix = "test",
page = None,
json_payload = None,
copy_log = False,
)
assert "No diagnostics capture requested" in caplog.text
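Taken together, these tests describe a best-effort capture pipeline; a hypothetical self-contained sketch of that shape (inferred from the tests, not copied from kleinanzeigen_bot.utils.diagnostics) is:

```python
from __future__ import annotations

import json
import logging
import shutil
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any

LOG = logging.getLogger("diagnostics_sketch")


@dataclass
class CaptureResult:
    saved_artifacts: list[Path] = field(default_factory = list)


async def capture_diagnostics(output_dir: Path, base_prefix: str, page: Any = None,
        json_payload: dict[str, Any] | None = None,
        log_file_path: str | None = None, copy_log: bool = False) -> CaptureResult:
    """Best effort: every artifact is attempted independently, so one
    failure never blocks the others."""
    output_dir.mkdir(parents = True, exist_ok = True)
    result = CaptureResult()
    if page is not None:
        try:
            target = output_dir / f"{base_prefix}.png"
            await page.save_screenshot(target)
            result.saved_artifacts.append(target)
        except Exception:
            LOG.warning("Diagnostics screenshot capture failed")
        try:
            target = output_dir / f"{base_prefix}.html"
            target.write_text(await page.get_content(), encoding = "utf-8")
            result.saved_artifacts.append(target)
        except Exception:
            LOG.warning("Diagnostics HTML capture failed")
    if json_payload is not None:
        try:
            target = output_dir / f"{base_prefix}.json"
            target.write_text(json.dumps(json_payload), encoding = "utf-8")
            result.saved_artifacts.append(target)
        except Exception:
            LOG.warning("Diagnostics JSON capture failed")
    if copy_log and log_file_path:
        try:
            target = output_dir / f"{base_prefix}.log"
            shutil.copyfile(log_file_path, target)
            result.saved_artifacts.append(target)
        except Exception:
            LOG.warning("Diagnostics log copy failed")
    requested = page is not None or json_payload is not None or bool(copy_log and log_file_path)
    if not requested:
        LOG.debug("No diagnostics capture requested")
    elif not result.saved_artifacts:
        LOG.warning("Diagnostics capture attempted but no artifacts were saved")
    return result
```

Each artifact type sits in its own try/except, which is what lets the exception-handling tests assert that a failing screenshot or JSON write still leaves the other artifacts in `saved_artifacts`.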

tests/unit/test_dicts.py

@@ -0,0 +1,205 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""Tests for the dicts utility module."""
import unicodedata
from pathlib import Path
from pydantic import BaseModel, Field
def test_save_dict_normalizes_unicode_paths(tmp_path:Path) -> None:
"""Test that save_dict normalizes paths to NFC for cross-platform consistency (issue #728).
Directories are created with NFC normalization (via sanitize_folder_name).
This test verifies save_dict's defensive normalization handles edge cases where
an NFD path is passed (e.g., "ä" as "a" + combining diacritic vs single character).
It should normalize to NFC and use the existing NFC directory.
"""
from kleinanzeigen_bot.utils import dicts # noqa: PLC0415
# Create directory with NFC normalization (as sanitize_folder_name does)
title_nfc = unicodedata.normalize("NFC", "KitchenAid Zuhälter - nie benutzt")
nfc_dir = tmp_path / f"ad_12345_{title_nfc}"
nfc_dir.mkdir(parents = True)
# Call save_dict with NFD path (different normalization)
title_nfd = unicodedata.normalize("NFD", title_nfc)
assert title_nfc != title_nfd, "NFC and NFD should be different strings"
nfd_path = tmp_path / f"ad_12345_{title_nfd}" / "ad_12345.yaml"
dicts.save_dict(str(nfd_path), {"test": "data", "title": title_nfc})
# Verify file was saved successfully
nfc_files = list(nfc_dir.glob("*.yaml"))
assert len(nfc_files) == 1, "Should have exactly one file in NFC directory"
assert nfc_files[0].name == "ad_12345.yaml"
# On macOS/APFS, the filesystem normalizes both NFC and NFD to the same directory
# On Linux ext4, NFC normalization in save_dict ensures it uses the existing directory
# Either way, we should have exactly one YAML file total (no duplicates)
all_yaml_files = list(tmp_path.rglob("*.yaml"))
assert len(all_yaml_files) == 1, f"Expected exactly 1 YAML file total, found {len(all_yaml_files)}: {all_yaml_files}"
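The defensive normalization this test relies on boils down to one call; a minimal hypothetical sketch (the helper name is illustrative, not the project's API):

```python
import unicodedata
from pathlib import Path


def normalize_path_nfc(file_path: str) -> Path:
    """Normalize the whole path string to NFC so an NFD spelling of e.g. "ä"
    ("a" + combining diaeresis) resolves to the same directory as the NFC form."""
    return Path(unicodedata.normalize("NFC", file_path))
```

This matters on Linux ext4, where NFC and NFD spellings are distinct byte sequences and would otherwise create two directories; APFS on macOS normalizes at the filesystem level either way.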
def test_safe_get_with_type_error() -> None:
"""Test safe_get returns None when accessing a non-dict value (TypeError)."""
from kleinanzeigen_bot.utils import dicts # noqa: PLC0415
# Accessing a key on a string causes TypeError
result = dicts.safe_get({"foo": "bar"}, "foo", "baz")
assert result is None
def test_safe_get_with_empty_dict() -> None:
"""Test safe_get returns empty dict when given empty dict."""
from kleinanzeigen_bot.utils import dicts # noqa: PLC0415
# Empty dict should return the dict itself (falsy but valid)
result = dicts.safe_get({})
assert result == {}
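Both behaviours follow from a small key-walking helper; this hypothetical sketch matches the two tests above (the real kleinanzeigen_bot.utils.dicts implementation may differ):

```python
from typing import Any


def safe_get(dictionary: dict[str, Any], *keys: str) -> Any:
    """Walk nested dicts key by key; return None on a missing key (KeyError)
    or when an intermediate value is not subscriptable by string (TypeError)."""
    current: Any = dictionary
    for key in keys:
        try:
            current = current[key]
        except (KeyError, TypeError):
            return None
    return current
```

With no keys the loop body never runs, so the input dict itself is returned, which is why the empty-dict test expects `{}` rather than None.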
def test_model_to_commented_yaml_with_dict_exclude() -> None:
"""Test model_to_commented_yaml with dict exclude where field is not in exclude dict."""
from kleinanzeigen_bot.utils.dicts import model_to_commented_yaml # noqa: PLC0415
class TestModel(BaseModel):
included_field:str = Field(default = "value", description = "This field")
excluded_field:str = Field(default = "excluded", description = "Excluded field")
model = TestModel()
# Exclude only excluded_field, included_field should remain
result = model_to_commented_yaml(model, exclude = {"excluded_field": None})
assert "included_field" in result
assert "excluded_field" not in result
def test_model_to_commented_yaml_with_list() -> None:
"""Test model_to_commented_yaml handles list fields correctly."""
    from kleinanzeigen_bot.utils.dicts import model_to_commented_yaml  # noqa: PLC0415

    class TestModel(BaseModel):
        items:list[str] = Field(default_factory = lambda: ["item1", "item2"], description = "List of items")

    model = TestModel()
    result = model_to_commented_yaml(model)
    assert "items" in result
    assert isinstance(result["items"], list)
    assert result["items"] == ["item1", "item2"]


def test_model_to_commented_yaml_with_multiple_scalar_examples() -> None:
    """Test model_to_commented_yaml formats multiple scalar examples with bullets."""
    from kleinanzeigen_bot.utils.dicts import model_to_commented_yaml  # noqa: PLC0415

    class TestModel(BaseModel):
        choice:str = Field(default = "A", description = "Choose one", examples = ["A", "B", "C"])

    model = TestModel()
    result = model_to_commented_yaml(model)
    # Verify the field exists
    assert "choice" in result
    # Verify a comment was added (via the yaml_set_comment_before_after_key mechanism)
    assert result.ca is not None


def test_model_to_commented_yaml_with_set_exclude() -> None:
    """Test model_to_commented_yaml with set exclude (covers line 170 branch)."""
    from kleinanzeigen_bot.utils.dicts import model_to_commented_yaml  # noqa: PLC0415

    class TestModel(BaseModel):
        field1:str = Field(default = "value1", description = "First field")
        field2:str = Field(default = "value2", description = "Second field")

    model = TestModel()
    # Use a set for exclude (not a dict)
    result = model_to_commented_yaml(model, exclude = {"field2"})
    assert "field1" in result
    assert "field2" not in result


def test_model_to_commented_yaml_with_nested_dict_exclude() -> None:
    """Test model_to_commented_yaml with nested dict exclude (covers lines 186-187)."""
    from kleinanzeigen_bot.utils.dicts import model_to_commented_yaml  # noqa: PLC0415

    class NestedModel(BaseModel):
        nested_field:str = Field(default = "nested", description = "Nested")

    class TestModel(BaseModel):
        parent:NestedModel = Field(default_factory = NestedModel, description = "Parent")

    model = TestModel()
    # Nested exclude with None value
    result = model_to_commented_yaml(model, exclude = {"parent": None})
    assert "parent" not in result


def test_model_to_commented_yaml_with_plain_dict() -> None:
    """Test model_to_commented_yaml with a plain dict (covers lines 238-241)."""
    from kleinanzeigen_bot.utils.dicts import model_to_commented_yaml  # noqa: PLC0415

    # Plain dict (not a Pydantic model)
    plain_dict = {"key1": "value1", "key2": "value2"}
    result = model_to_commented_yaml(plain_dict)
    assert "key1" in result
    assert "key2" in result
    assert result["key1"] == "value1"


def test_model_to_commented_yaml_fallback() -> None:
    """Test model_to_commented_yaml fallback for unsupported types (covers line 318)."""
    from kleinanzeigen_bot.utils.dicts import model_to_commented_yaml  # noqa: PLC0415

    # Custom object that is not a BaseModel, dict, list, or primitive
    class CustomObject:
        pass

    obj = CustomObject()
    result = model_to_commented_yaml(obj)
    # Should be returned as-is
    assert result is obj


def test_save_commented_model_without_header(tmp_path:Path) -> None:
    """Test save_commented_model without header (covers line 358)."""
    from kleinanzeigen_bot.utils.dicts import save_commented_model  # noqa: PLC0415

    class TestModel(BaseModel):
        field:str = Field(default = "value", description = "A field")

    model = TestModel()
    filepath = tmp_path / "test.yaml"
    # Save without header (header=None)
    save_commented_model(filepath, model, header = None)
    assert filepath.exists()
    content = filepath.read_text()
    # Should not start with a blank line
    assert not content.startswith("\n")


def test_model_to_commented_yaml_with_empty_list() -> None:
    """Test model_to_commented_yaml correctly detects empty list fields via type annotation."""
    from kleinanzeigen_bot.utils.dicts import model_to_commented_yaml  # noqa: PLC0415

    class TestModel(BaseModel):
        items:list[str] = Field(default_factory = list, description = "List of items", examples = ["item1", "item2"])

    model = TestModel()
    # The model has an empty list, but the field should still be detected as a list field via its annotation
    result = model_to_commented_yaml(model)
    assert "items" in result
    assert isinstance(result["items"], list)
    assert len(result["items"]) == 0
    # Verify the comment uses "Example usage:" (list field format), not "Examples:" (scalar format)
    assert result.ca is not None

@@ -0,0 +1,169 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""Tests for the error handlers module.

This module contains tests for the error handling functionality of the kleinanzeigen-bot application.
It tests both the exception handler and the signal handler functionality.
"""
import sys
from collections.abc import Generator
from unittest.mock import MagicMock, patch

import pytest
from pydantic import BaseModel, ValidationError

from kleinanzeigen_bot.utils.error_handlers import on_exception, on_sigint

# --------------------------------------------------------------------------- #
# Test fixtures
# --------------------------------------------------------------------------- #


@pytest.fixture
def mock_logger() -> Generator[MagicMock, None, None]:
    """Fixture to mock the logger."""
    with patch("kleinanzeigen_bot.utils.error_handlers.LOG") as mock_log:
        yield mock_log


@pytest.fixture
def mock_sys_exit() -> Generator[MagicMock, None, None]:
    """Fixture to mock sys.exit to prevent actual program termination."""
    with patch("sys.exit") as mock_exit:
        yield mock_exit

# --------------------------------------------------------------------------- #
# Test cases
# --------------------------------------------------------------------------- #


class TestExceptionHandler:
    """Test cases for the exception handler."""

    def test_keyboard_interrupt(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test that KeyboardInterrupt is handled by the system excepthook."""
        with patch("sys.__excepthook__") as mock_excepthook:
            on_exception(KeyboardInterrupt, KeyboardInterrupt(), None)
            mock_excepthook.assert_called_once()
            mock_sys_exit.assert_called_once_with(1)

    def test_validation_error(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test that ValidationError is formatted and logged."""
        class TestModel(BaseModel):
            field:int

        try:
            TestModel(field = "not an int")  # type: ignore[arg-type]
        except ValidationError as error:
            on_exception(ValidationError, error, None)
            mock_logger.error.assert_called_once()
            mock_sys_exit.assert_called_once_with(1)

    def test_assertion_error(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test that AssertionError is logged directly."""
        error = AssertionError("Test error")
        on_exception(AssertionError, error, None)
        # Accept both with and without a trailing newline
        logged = mock_logger.error.call_args[0][0]
        assert logged.strip() == str(error) or logged.strip() == f"{error.__class__.__name__}: {error}"
        mock_sys_exit.assert_called_once_with(1)

    def test_unknown_exception(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test that unknown exceptions are logged with type and message."""
        error = RuntimeError("Test error")
        on_exception(RuntimeError, error, None)
        logged = mock_logger.error.call_args[0][0]
        assert logged.strip() == f"{error.__class__.__name__}: {error}"
        mock_sys_exit.assert_called_once_with(1)

    def test_missing_exception_info(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test handling of missing exception information."""
        on_exception(None, None, None)
        mock_logger.error.assert_called_once()
        # sys.exit is not called for missing exception info
        mock_sys_exit.assert_not_called()

    def test_debug_mode_error(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test error handling in debug mode."""
        with patch("kleinanzeigen_bot.utils.error_handlers.loggers.is_debug", return_value = True):
            try:
                raise ValueError("Test error")
            except ValueError as error:
                _, _, tb = sys.exc_info()
                on_exception(ValueError, error, tb)
                mock_logger.error.assert_called_once()
                # Verify that the traceback was included
                logged = mock_logger.error.call_args[0][0]
                assert "Traceback" in logged
                assert "ValueError: Test error" in logged
                mock_sys_exit.assert_called_once_with(1)

    def test_attribute_error(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test handling of AttributeError."""
        try:
            raise AttributeError("Test error")
        except AttributeError as error:
            _, _, tb = sys.exc_info()
            on_exception(AttributeError, error, tb)
            mock_logger.error.assert_called_once()
            # Verify that the traceback was included
            logged = mock_logger.error.call_args[0][0]
            assert "Traceback" in logged
            assert "AttributeError: Test error" in logged
            mock_sys_exit.assert_called_once_with(1)

    def test_import_error(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test handling of ImportError."""
        try:
            raise ImportError("Test error")
        except ImportError as error:
            _, _, tb = sys.exc_info()
            on_exception(ImportError, error, tb)
            mock_logger.error.assert_called_once()
            # Verify that the traceback was included
            logged = mock_logger.error.call_args[0][0]
            assert "Traceback" in logged
            assert "ImportError: Test error" in logged
            mock_sys_exit.assert_called_once_with(1)

    def test_name_error(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test handling of NameError."""
        try:
            raise NameError("Test error")
        except NameError as error:
            _, _, tb = sys.exc_info()
            on_exception(NameError, error, tb)
            mock_logger.error.assert_called_once()
            # Verify that the traceback was included
            logged = mock_logger.error.call_args[0][0]
            assert "Traceback" in logged
            assert "NameError: Test error" in logged
            mock_sys_exit.assert_called_once_with(1)

    def test_type_error(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test handling of TypeError."""
        try:
            raise TypeError("Test error")
        except TypeError as error:
            _, _, tb = sys.exc_info()
            on_exception(TypeError, error, tb)
            mock_logger.error.assert_called_once()
            # Verify that the traceback was included
            logged = mock_logger.error.call_args[0][0]
            assert "Traceback" in logged
            assert "TypeError: Test error" in logged
            mock_sys_exit.assert_called_once_with(1)


class TestSignalHandler:
    """Test cases for the signal handler."""

    def test_sigint_handler(self, mock_logger:MagicMock, mock_sys_exit:MagicMock) -> None:
        """Test that SIGINT is handled with a warning message."""
        on_sigint(2, None)  # 2 is SIGINT
        mock_logger.warning.assert_called_once_with("Aborted on user request.")
        mock_sys_exit.assert_called_once_with(0)
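The handler behaviour these tests assert (KeyboardInterrupt delegated to `sys.__excepthook__`, a compact `Type: message` line otherwise, and a full traceback in debug mode) can be sketched as a plain `sys.excepthook` replacement. This is an assumed reconstruction for illustration, not the project's actual `on_exception`:

```python
import sys
import traceback


def format_error(ex_type, ex_value, tb, *, debug: bool = False) -> str:
    # Full traceback in debug mode, otherwise a compact "Type: message" line.
    if debug and tb is not None:
        return "".join(traceback.format_exception(ex_type, ex_value, tb))
    return f"{ex_type.__name__}: {ex_value}"


def excepthook_sketch(ex_type, ex_value, tb) -> None:
    if ex_type is KeyboardInterrupt:
        # Let Python print its usual KeyboardInterrupt message
        sys.__excepthook__(ex_type, ex_value, tb)
    else:
        print(format_error(ex_type, ex_value, tb), file = sys.stderr)
    sys.exit(1)


# install with: sys.excepthook = excepthook_sketch
```

Splitting formatting from the hook (as sketched here) is what makes the log-message assertions above cheap to write: the tests only need to inspect the string handed to the logger.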

@@ -0,0 +1,527 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import json  # isort: skip
from datetime import datetime, timedelta
from pathlib import Path
from typing import Any
from unittest.mock import AsyncMock, MagicMock, patch

import pytest

from kleinanzeigen_bot import KleinanzeigenBot, misc
from kleinanzeigen_bot.model.ad_model import Ad
from kleinanzeigen_bot.utils import dicts
from kleinanzeigen_bot.utils.web_scraping_mixin import By, Element


@pytest.fixture
def base_ad_config_with_id() -> dict[str, Any]:
    """Provide a base ad configuration with an ID for extend tests."""
    return {
        "id": 12345,
        "title": "Test Ad Title",
        "description": "Test Description",
        "type": "OFFER",
        "price_type": "FIXED",
        "price": 100,
        "shipping_type": "SHIPPING",
        "shipping_options": [],
        "category": "160",
        "special_attributes": {},
        "sell_directly": False,
        "images": [],
        "active": True,
        "republication_interval": 7,
        "created_on": "2024-12-07T10:00:00",
        "updated_on": "2024-12-10T15:20:00",
        "contact": {"name": "Test User", "zipcode": "12345", "location": "Test City", "street": "", "phone": ""},
    }


class TestExtendCommand:
    """Tests for the extend command functionality."""

    @pytest.mark.asyncio
    async def test_run_extend_command_no_ads(self, test_bot:KleinanzeigenBot) -> None:
        """Test running the extend command with no ads."""
        with patch.object(test_bot, "load_config"), patch.object(test_bot, "load_ads", return_value = []), patch("kleinanzeigen_bot.UpdateChecker"):
            await test_bot.run(["script.py", "extend"])
            assert test_bot.command == "extend"
            assert test_bot.ads_selector == "all"

    @pytest.mark.asyncio
    async def test_run_extend_command_with_specific_ids(self, test_bot:KleinanzeigenBot) -> None:
        """Test running the extend command with specific ad IDs."""
        with (
            patch.object(test_bot, "load_config"),
            patch.object(test_bot, "load_ads", return_value = []),
            patch.object(test_bot, "create_browser_session", new_callable = AsyncMock),
            patch.object(test_bot, "login", new_callable = AsyncMock),
            patch("kleinanzeigen_bot.UpdateChecker"),
        ):
            await test_bot.run(["script.py", "extend", "--ads=12345,67890"])
            assert test_bot.command == "extend"
            assert test_bot.ads_selector == "12345,67890"


class TestExtendAdsMethod:
    """Tests for the extend_ads() method."""

    @pytest.mark.asyncio
    async def test_extend_ads_skips_unpublished_ad(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test that extend_ads skips ads without an ID (unpublished)."""
        # Create an ad without an ID
        ad_config = base_ad_config_with_id.copy()
        ad_config["id"] = None
        ad_cfg = Ad.model_validate(ad_config)
        with patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request, patch.object(test_bot, "web_sleep", new_callable = AsyncMock):
            mock_request.return_value = {"content": '{"ads": []}'}
            await test_bot.extend_ads([("test.yaml", ad_cfg, ad_config)])
            # Verify no extension was attempted
            mock_request.assert_called_once()  # Only the API call to get published ads

    @pytest.mark.asyncio
    async def test_extend_ads_skips_ad_not_in_published_list(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test that extend_ads skips ads not found in the published ads API response."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        with patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request, patch.object(test_bot, "web_sleep", new_callable = AsyncMock):
            # Return an empty published ads list
            mock_request.return_value = {"content": '{"ads": []}'}
            await test_bot.extend_ads([("test.yaml", ad_cfg, base_ad_config_with_id)])
            # Verify no extension was attempted
            mock_request.assert_called_once()

    @pytest.mark.asyncio
    async def test_extend_ads_skips_inactive_ad(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test that extend_ads skips ads with state != 'active'."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        published_ads_json = {
            "ads": [
                {
                    "id": 12345,
                    "title": "Test Ad Title",
                    "state": "paused",  # Not active
                    "endDate": "05.02.2026",
                }
            ]
        }
        with (
            patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request,
            patch.object(test_bot, "web_sleep", new_callable = AsyncMock),
            patch.object(test_bot, "extend_ad", new_callable = AsyncMock) as mock_extend_ad,
        ):
            mock_request.return_value = {"content": json.dumps(published_ads_json)}
            await test_bot.extend_ads([("test.yaml", ad_cfg, base_ad_config_with_id)])
            # Verify extend_ad was not called
            mock_extend_ad.assert_not_called()

    @pytest.mark.asyncio
    async def test_extend_ads_skips_ad_without_enddate(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test that extend_ads skips ads without endDate in the API response."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        published_ads_json = {
            "ads": [
                {
                    "id": 12345,
                    "title": "Test Ad Title",
                    "state": "active",
                    # No endDate field
                }
            ]
        }
        with (
            patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request,
            patch.object(test_bot, "web_sleep", new_callable = AsyncMock),
            patch.object(test_bot, "extend_ad", new_callable = AsyncMock) as mock_extend_ad,
        ):
            mock_request.return_value = {"content": json.dumps(published_ads_json)}
            await test_bot.extend_ads([("test.yaml", ad_cfg, base_ad_config_with_id)])
            # Verify extend_ad was not called
            mock_extend_ad.assert_not_called()

    @pytest.mark.asyncio
    async def test_extend_ads_skips_ad_outside_window(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test that extend_ads skips ads expiring more than 8 days in the future."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Set the end date to 30 days from now (outside the 8-day window)
        future_date = misc.now() + timedelta(days = 30)
        end_date_str = future_date.strftime("%d.%m.%Y")
        published_ads_json = {"ads": [{"id": 12345, "title": "Test Ad Title", "state": "active", "endDate": end_date_str}]}
        with (
            patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request,
            patch.object(test_bot, "web_sleep", new_callable = AsyncMock),
            patch.object(test_bot, "extend_ad", new_callable = AsyncMock) as mock_extend_ad,
        ):
            mock_request.return_value = {"content": json.dumps(published_ads_json)}
            await test_bot.extend_ads([("test.yaml", ad_cfg, base_ad_config_with_id)])
            # Verify extend_ad was not called
            mock_extend_ad.assert_not_called()

    @pytest.mark.asyncio
    async def test_extend_ads_extends_ad_within_window(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test that extend_ads extends ads within the 8-day window."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Set the end date to 5 days from now (within the 8-day window)
        future_date = misc.now() + timedelta(days = 5)
        end_date_str = future_date.strftime("%d.%m.%Y")
        published_ads_json = {"ads": [{"id": 12345, "title": "Test Ad Title", "state": "active", "endDate": end_date_str}]}
        with (
            patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request,
            patch.object(test_bot, "web_sleep", new_callable = AsyncMock),
            patch.object(test_bot, "extend_ad", new_callable = AsyncMock) as mock_extend_ad,
        ):
            mock_request.return_value = {"content": json.dumps(published_ads_json)}
            mock_extend_ad.return_value = True
            await test_bot.extend_ads([("test.yaml", ad_cfg, base_ad_config_with_id)])
            # Verify extend_ad was called
            mock_extend_ad.assert_called_once()

    @pytest.mark.asyncio
    async def test_extend_ads_no_eligible_ads(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test extend_ads when no ads are eligible for extension."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Set the end date to 30 days from now (outside the window)
        future_date = misc.now() + timedelta(days = 30)
        end_date_str = future_date.strftime("%d.%m.%Y")
        published_ads_json = {"ads": [{"id": 12345, "title": "Test Ad Title", "state": "active", "endDate": end_date_str}]}
        with (
            patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request,
            patch.object(test_bot, "web_sleep", new_callable = AsyncMock),
            patch.object(test_bot, "extend_ad", new_callable = AsyncMock) as mock_extend_ad,
        ):
            mock_request.return_value = {"content": json.dumps(published_ads_json)}
            await test_bot.extend_ads([("test.yaml", ad_cfg, base_ad_config_with_id)])
            # Verify extend_ad was not called
            mock_extend_ad.assert_not_called()

    @pytest.mark.asyncio
    async def test_extend_ads_handles_multiple_ads(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test that extend_ads processes multiple ads correctly."""
        ad_cfg1 = Ad.model_validate(base_ad_config_with_id)
        # Create a second ad
        ad_config2 = base_ad_config_with_id.copy()
        ad_config2["id"] = 67890
        ad_config2["title"] = "Second Test Ad"
        ad_cfg2 = Ad.model_validate(ad_config2)
        # Set end dates: one within the window, one outside
        within_window = misc.now() + timedelta(days = 5)
        outside_window = misc.now() + timedelta(days = 30)
        published_ads_json = {
            "ads": [
                {"id": 12345, "title": "Test Ad Title", "state": "active", "endDate": within_window.strftime("%d.%m.%Y")},
                {"id": 67890, "title": "Second Test Ad", "state": "active", "endDate": outside_window.strftime("%d.%m.%Y")},
            ]
        }
        with (
            patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request,
            patch.object(test_bot, "web_sleep", new_callable = AsyncMock),
            patch.object(test_bot, "extend_ad", new_callable = AsyncMock) as mock_extend_ad,
        ):
            mock_request.return_value = {"content": json.dumps(published_ads_json)}
            mock_extend_ad.return_value = True
            await test_bot.extend_ads([("test1.yaml", ad_cfg1, base_ad_config_with_id), ("test2.yaml", ad_cfg2, ad_config2)])
            # Verify extend_ad was called only once (for the ad within the window)
            assert mock_extend_ad.call_count == 1


class TestExtendAdMethod:
    """Tests for the extend_ad() method.

    Note: These tests mock `_navigate_paginated_ad_overview` rather than individual browser methods
    (web_find, web_click, etc.) because the pagination helper involves complex multi-step browser
    interactions that would require extensive, brittle mock choreography. Mocking at this level
    keeps the tests focused on extend_ad's own logic (dialog handling, YAML persistence, error paths).
    """

    @pytest.mark.asyncio
    async def test_extend_ad_success(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any], tmp_path:Path) -> None:
        """Test successful ad extension."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Create a temporary YAML file
        ad_file = tmp_path / "test_ad.yaml"
        dicts.save_dict(str(ad_file), base_ad_config_with_id)
        with (
            patch.object(test_bot, "_navigate_paginated_ad_overview", new_callable = AsyncMock) as mock_paginate,
            patch.object(test_bot, "web_click", new_callable = AsyncMock),
            patch("kleinanzeigen_bot.misc.now") as mock_now,
        ):
            # Test mock datetime - timezone not relevant for timestamp formatting test
            mock_now.return_value = datetime(2025, 1, 28, 14, 30, 0)  # noqa: DTZ001
            mock_paginate.return_value = True
            result = await test_bot.extend_ad(str(ad_file), ad_cfg, base_ad_config_with_id)
            assert result is True
            assert mock_paginate.call_count == 1
            # Verify updated_on was updated in the YAML file
            updated_config = dicts.load_dict(str(ad_file))
            assert updated_config["updated_on"] == "2025-01-28T14:30:00"

    @pytest.mark.asyncio
    async def test_extend_ad_button_not_found(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any], tmp_path:Path) -> None:
        """Test extend_ad when the Verlängern button is not found."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Create a temporary YAML file
        ad_file = tmp_path / "test_ad.yaml"
        dicts.save_dict(str(ad_file), base_ad_config_with_id)
        with patch.object(test_bot, "_navigate_paginated_ad_overview", new_callable = AsyncMock) as mock_paginate:
            # Simulate the button not being found by having pagination return False (not found on any page)
            mock_paginate.return_value = False
            result = await test_bot.extend_ad(str(ad_file), ad_cfg, base_ad_config_with_id)
            assert result is False
            assert mock_paginate.call_count == 1

    @pytest.mark.asyncio
    async def test_extend_ad_dialog_timeout(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any], tmp_path:Path) -> None:
        """Test extend_ad when the confirmation dialog times out (no dialog appears)."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Create a temporary YAML file
        ad_file = tmp_path / "test_ad.yaml"
        dicts.save_dict(str(ad_file), base_ad_config_with_id)
        with (
            patch.object(test_bot, "_navigate_paginated_ad_overview", new_callable = AsyncMock) as mock_paginate,
            patch.object(test_bot, "web_click", new_callable = AsyncMock) as mock_click,
            patch("kleinanzeigen_bot.misc.now") as mock_now,
        ):
            # Test mock datetime - timezone not relevant for timestamp formatting test
            mock_now.return_value = datetime(2025, 1, 28, 14, 30, 0)  # noqa: DTZ001
            # Pagination succeeds (button found and clicked)
            mock_paginate.return_value = True
            # The dialog close button times out
            mock_click.side_effect = TimeoutError("Dialog not found")
            result = await test_bot.extend_ad(str(ad_file), ad_cfg, base_ad_config_with_id)
            # Should still succeed (the dialog might not appear)
            assert result is True

    @pytest.mark.asyncio
    async def test_extend_ad_exception_handling(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any], tmp_path:Path) -> None:
        """Test that extend_ad propagates unexpected exceptions."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Create a temporary YAML file
        ad_file = tmp_path / "test_ad.yaml"
        dicts.save_dict(str(ad_file), base_ad_config_with_id)
        with patch.object(test_bot, "_navigate_paginated_ad_overview", new_callable = AsyncMock) as mock_paginate:
            # Simulate an unexpected exception during pagination
            mock_paginate.side_effect = Exception("Unexpected error")
            with pytest.raises(Exception, match = "Unexpected error"):
                await test_bot.extend_ad(str(ad_file), ad_cfg, base_ad_config_with_id)

    @pytest.mark.asyncio
    async def test_extend_ad_updates_yaml_file(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any], tmp_path:Path) -> None:
        """Test that extend_ad correctly updates the YAML file with the new timestamp."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Create a temporary YAML file
        ad_file = tmp_path / "test_ad.yaml"
        original_updated_on = base_ad_config_with_id["updated_on"]
        dicts.save_dict(str(ad_file), base_ad_config_with_id)
        with (
            patch.object(test_bot, "_navigate_paginated_ad_overview", new_callable = AsyncMock) as mock_paginate,
            patch.object(test_bot, "web_click", new_callable = AsyncMock),
            patch("kleinanzeigen_bot.misc.now") as mock_now,
        ):
            # Test mock datetime - timezone not relevant for timestamp formatting test
            mock_now.return_value = datetime(2025, 1, 28, 14, 30, 0)  # noqa: DTZ001
            # Pagination succeeds (button found and clicked)
            mock_paginate.return_value = True
            await test_bot.extend_ad(str(ad_file), ad_cfg, base_ad_config_with_id)
            # Load the updated file and verify the timestamp changed
            updated_config = dicts.load_dict(str(ad_file))
            assert updated_config["updated_on"] != original_updated_on
            assert updated_config["updated_on"] == "2025-01-28T14:30:00"

    @pytest.mark.asyncio
    async def test_extend_ad_with_web_mocks(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any], tmp_path:Path) -> None:
        """Test extend_ad with web-level mocks to exercise the find_and_click_extend_button callback."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Create a temporary YAML file
        ad_file = tmp_path / "test_ad.yaml"
        dicts.save_dict(str(ad_file), base_ad_config_with_id)

        extend_button_mock = AsyncMock()
        extend_button_mock.click = AsyncMock()
        pagination_section = MagicMock()
        find_call_count = {"count": 0}

        async def mock_web_find(selector_type:By, selector_value:str, **kwargs:Any) -> Element:
            find_call_count["count"] += 1
            # Ad list container (called by the pagination helper)
            if selector_type == By.ID and selector_value == "my-manageitems-adlist":
                return MagicMock()
            # Pagination section (called by the pagination helper)
            if selector_type == By.CSS_SELECTOR and selector_value == ".Pagination":
                # Raise TimeoutError on the second find call (pagination detection) to indicate a single page
                if find_call_count["count"] == 2:
                    raise TimeoutError("No pagination")
                return pagination_section
            # Extend button (called by the find_and_click_extend_button callback)
            if selector_type == By.XPATH and "Verlängern" in selector_value:
                return extend_button_mock
            raise TimeoutError(f"Unexpected find: {selector_type} {selector_value}")

        with (
            patch.object(test_bot, "web_open", new_callable = AsyncMock),
            patch.object(test_bot, "web_sleep", new_callable = AsyncMock),
            patch.object(test_bot, "web_find", new_callable = AsyncMock, side_effect = mock_web_find),
            patch.object(test_bot, "web_find_all", new_callable = AsyncMock, return_value = []),
            patch.object(test_bot, "web_scroll_page_down", new_callable = AsyncMock),
            patch.object(test_bot, "web_click", new_callable = AsyncMock),
            patch.object(test_bot, "_timeout", return_value = 10),
            patch("kleinanzeigen_bot.misc.now") as mock_now,
        ):
            # Test mock datetime - timezone not relevant for timestamp formatting test
            mock_now.return_value = datetime(2025, 1, 28, 15, 0, 0)  # noqa: DTZ001
            result = await test_bot.extend_ad(str(ad_file), ad_cfg, base_ad_config_with_id)
            assert result is True
            # Verify the extend button was found and clicked
            extend_button_mock.click.assert_awaited_once()
            # Verify updated_on was updated
            updated_config = dicts.load_dict(str(ad_file))
            assert updated_config["updated_on"] == "2025-01-28T15:00:00"


class TestExtendEdgeCases:
    """Tests for edge cases and boundary conditions."""

    @pytest.mark.asyncio
    async def test_extend_ads_exactly_8_days(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test that ads expiring in exactly 8 days are eligible for extension."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Set the end date to exactly 8 days from now (boundary case)
        future_date = misc.now() + timedelta(days = 8)
        end_date_str = future_date.strftime("%d.%m.%Y")
        published_ads_json = {"ads": [{"id": 12345, "title": "Test Ad Title", "state": "active", "endDate": end_date_str}]}
        with (
            patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request,
            patch.object(test_bot, "web_sleep", new_callable = AsyncMock),
            patch.object(test_bot, "extend_ad", new_callable = AsyncMock) as mock_extend_ad,
        ):
            mock_request.return_value = {"content": json.dumps(published_ads_json)}
            mock_extend_ad.return_value = True
            await test_bot.extend_ads([("test.yaml", ad_cfg, base_ad_config_with_id)])
            # Verify extend_ad was called (8 days is within the window)
            mock_extend_ad.assert_called_once()

    @pytest.mark.asyncio
    async def test_extend_ads_exactly_9_days(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test that ads expiring in exactly 9 days are not eligible for extension."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Set the end date to exactly 9 days from now (just outside the window)
        future_date = misc.now() + timedelta(days = 9)
        end_date_str = future_date.strftime("%d.%m.%Y")
        published_ads_json = {"ads": [{"id": 12345, "title": "Test Ad Title", "state": "active", "endDate": end_date_str}]}
        with (
            patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request,
            patch.object(test_bot, "web_sleep", new_callable = AsyncMock),
            patch.object(test_bot, "extend_ad", new_callable = AsyncMock) as mock_extend_ad,
        ):
            mock_request.return_value = {"content": json.dumps(published_ads_json)}
            await test_bot.extend_ads([("test.yaml", ad_cfg, base_ad_config_with_id)])
            # Verify extend_ad was not called (9 days is outside the window)
            mock_extend_ad.assert_not_called()

    @pytest.mark.asyncio
    async def test_extend_ads_date_parsing_german_format(self, test_bot:KleinanzeigenBot, base_ad_config_with_id:dict[str, Any]) -> None:
        """Test that extend_ads correctly parses the German date format (DD.MM.YYYY)."""
        ad_cfg = Ad.model_validate(base_ad_config_with_id)
        # Use a specific German date format
        published_ads_json = {
            "ads": [
                {
                    "id": 12345,
                    "title": "Test Ad Title",
                    "state": "active",
                    "endDate": "05.02.2026",  # German format: DD.MM.YYYY
                }
            ]
        }
        with (
            patch.object(test_bot, "web_request", new_callable = AsyncMock) as mock_request,
            patch.object(test_bot, "web_sleep", new_callable = AsyncMock),
            patch.object(test_bot, "extend_ad", new_callable = AsyncMock) as mock_extend_ad,
            patch("kleinanzeigen_bot.misc.now") as mock_now,
        ):
            # Mock now() to return a date where 05.02.2026 would be within 8 days
            # Test mock datetime - timezone not relevant for date comparison test
            mock_now.return_value = datetime(2026, 1, 28)  # noqa: DTZ001
            mock_request.return_value = {"content": json.dumps(published_ads_json)}
            mock_extend_ad.return_value = True
            await test_bot.extend_ads([("test.yaml", ad_cfg, base_ad_config_with_id)])
            # Verify extend_ad was called (the date was parsed correctly)
            mock_extend_ad.assert_called_once()
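Taken together, the window tests above encode a simple eligibility rule: an ad is extended only if its state is `active`, it has an `endDate` in German `DD.MM.YYYY` format, and it expires within 8 days (inclusive). A sketch of that check, where the function name and dict shape are assumptions inferred from the mocked API responses, not the bot's actual code:

```python
from datetime import datetime, timedelta


def is_eligible_for_extension(ad: dict, now: datetime, window_days: int = 8) -> bool:
    # Mirrors the rules the tests pin down: active state, endDate present,
    # and expiry no more than `window_days` days away (8 days is eligible,
    # 9 days is not).
    if ad.get("state") != "active":
        return False
    end_date_str = ad.get("endDate")
    if not end_date_str:
        return False
    end_date = datetime.strptime(end_date_str, "%d.%m.%Y")  # German DD.MM.YYYY
    return end_date - now <= timedelta(days = window_days)
```

Note that `strptime` yields midnight of the end date, so a mocked `now()` with a time component shrinks the delta slightly; the inclusive `<=` comparison keeps the 8-day boundary case eligible either way.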

tests/unit/test_extract.py: new file, 1600 lines (diff suppressed because it is too large)

tests/unit/test_files.py: new file, 87 lines
@@ -0,0 +1,87 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""Tests for the files utility module."""
import os
import tempfile

from kleinanzeigen_bot.utils.files import abspath


class TestFiles:
    """Test suite for files utility functions."""

    def test_abspath_without_relative_to(self) -> None:
        """Test the abspath function without the relative_to parameter."""
        # Test with a simple path
        result = abspath("test/path")
        assert os.path.isabs(result)
        # Use os.path.normpath to handle path separators correctly on all platforms
        assert os.path.normpath(result).endswith(os.path.normpath("test/path"))

        # Test with an absolute path
        abs_path = os.path.abspath("test/path")
        result = abspath(abs_path)
        assert result == abs_path

    def test_abspath_with_file_reference(self) -> None:
        """Test the abspath function with a file as relative_to."""
        with tempfile.NamedTemporaryFile() as temp_file:
            # Test with a relative path
            result = abspath("test/path", temp_file.name)
            expected = os.path.normpath(os.path.join(os.path.dirname(temp_file.name), "test/path"))
            assert result == expected

            # Test with an absolute path
            abs_path = os.path.abspath("test/path")
            result = abspath(abs_path, temp_file.name)
            assert result == abs_path

    def test_abspath_with_directory_reference(self) -> None:
        """Test the abspath function with a directory as relative_to."""
        with tempfile.TemporaryDirectory() as temp_dir:
            # Test with a relative path
            result = abspath("test/path", temp_dir)
            expected = os.path.normpath(os.path.join(temp_dir, "test/path"))
            assert result == expected

            # Test with an absolute path
            abs_path = os.path.abspath("test/path")
            result = abspath(abs_path, temp_dir)
            assert result == abs_path

    def test_abspath_with_nonexistent_reference(self) -> None:
        """Test the abspath function with a nonexistent file/directory as relative_to."""
        nonexistent_path = "nonexistent/path"
        # Test with a relative path; should still yield an absolute path
        result = abspath("test/path", nonexistent_path)
        expected = os.path.normpath(os.path.join(os.path.abspath(nonexistent_path), "test/path"))
        assert result == expected

        # Test with an absolute path
        abs_path = os.path.abspath("test/path")
        result = abspath(abs_path, nonexistent_path)
        assert result == abs_path

    def test_abspath_with_special_paths(self) -> None:
        """Test the abspath function with special path cases."""
        # Test with an empty path
        result = abspath("")
        assert os.path.isabs(result)
        assert result == os.path.abspath("")

        # Test with the current directory
        result = abspath(".")
        assert os.path.isabs(result)
        assert result == os.path.abspath(".")

        # Test with the parent directory
        result = abspath("..")
        assert os.path.isabs(result)
        assert result == os.path.abspath("..")

        # Test with a path containing ../
        result = abspath("../test/path")
        assert os.path.isabs(result)
        assert result == os.path.abspath("../test/path")

tests/unit/test_i18n.py
@@ -0,0 +1,57 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import pytest
from _pytest.monkeypatch import MonkeyPatch # pylint: disable=import-private-name
from kleinanzeigen_bot.utils import i18n
@pytest.mark.parametrize(("lang", "expected"), [
(None, ("en", "US", "UTF-8")), # Test with no LANG variable (should default to ("en", "US", "UTF-8"))
("fr", ("fr", None, "UTF-8")), # Test with just a language code
("fr_CA", ("fr", "CA", "UTF-8")), # Test with language + region, no encoding
("pt_BR.iso8859-1", ("pt", "BR", "ISO8859-1")), # Test with language + region + encoding
])
def test_detect_locale(monkeypatch:MonkeyPatch, lang:str | None, expected:i18n.Locale) -> None:
"""
Pytest test case to verify detect_system_language() behavior under various LANG values.
"""
# Clear or set the LANG environment variable as needed.
if lang is None:
monkeypatch.delenv("LANG", raising = False)
else:
monkeypatch.setenv("LANG", lang)
# Call the function and compare the result to the expected output.
result = i18n._detect_locale() # pylint: disable=protected-access
assert result == expected, f"For LANG={lang}, expected {expected} but got {result}"
@pytest.mark.parametrize(("lang", "noun", "count", "prefix_with_count", "expected"), [
("en", "field", 1, True, "1 field"),
("en", "field", 2, True, "2 fields"),
("en", "field", 2, False, "fields"),
("en", "attribute", 2, False, "attributes"),
("en", "bus", 2, False, "buses"),
("en", "city", 2, False, "cities"),
("de", "Feld", 1, True, "1 Feld"),
("de", "Feld", 2, True, "2 Felder"),
("de", "Feld", 2, False, "Felder"),
("de", "Anzeige", 2, False, "Anzeigen"),
("de", "Attribute", 2, False, "Attribute"),
("de", "Bild", 2, False, "Bilder"),
("de", "Datei", 2, False, "Dateien"),
("de", "Kategorie", 2, False, "Kategorien")
])
def test_pluralize(
lang:str,
noun:str,
count:int,
prefix_with_count:bool,
expected:str
) -> None:
i18n.set_current_locale(i18n.Locale(lang, "US", "UTF_8"))
result = i18n.pluralize(noun, count, prefix_with_count = prefix_with_count)
assert result == expected, f"For LANG={lang}, expected {expected} but got {result}"

tests/unit/test_init.py
File diff suppressed because it is too large

@@ -0,0 +1,231 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""Tests for JSON API pagination helper methods."""
import json
from unittest.mock import AsyncMock, patch
import pytest
from kleinanzeigen_bot import KleinanzeigenBot
from kleinanzeigen_bot.utils import misc
@pytest.mark.unit
class TestJSONPagination:
"""Tests for _coerce_page_number and _fetch_published_ads methods."""
@pytest.fixture
def bot(self) -> KleinanzeigenBot:
return KleinanzeigenBot()
def test_coerce_page_number_with_valid_int(self) -> None:
"""Test that valid integers are returned as-is."""
result = misc.coerce_page_number(1)
if result != 1:
pytest.fail(f"_coerce_page_number(1) expected 1, got {result}")
result = misc.coerce_page_number(0)
if result != 0:
pytest.fail(f"_coerce_page_number(0) expected 0, got {result}")
result = misc.coerce_page_number(42)
if result != 42:
pytest.fail(f"_coerce_page_number(42) expected 42, got {result}")
def test_coerce_page_number_with_string_int(self) -> None:
"""Test that string integers are converted to int."""
result = misc.coerce_page_number("1")
if result != 1:
pytest.fail(f"_coerce_page_number('1') expected 1, got {result}")
result = misc.coerce_page_number("0")
if result != 0:
pytest.fail(f"_coerce_page_number('0') expected 0, got {result}")
result = misc.coerce_page_number("42")
if result != 42:
pytest.fail(f"_coerce_page_number('42') expected 42, got {result}")
def test_coerce_page_number_with_none(self) -> None:
"""Test that None returns None."""
result = misc.coerce_page_number(None)
if result is not None:
pytest.fail(f"_coerce_page_number(None) expected None, got {result}")
def test_coerce_page_number_with_invalid_types(self) -> None:
"""Test that invalid types return None."""
result = misc.coerce_page_number("invalid")
if result is not None:
pytest.fail(f'_coerce_page_number("invalid") expected None, got {result}')
result = misc.coerce_page_number("")
if result is not None:
pytest.fail(f'_coerce_page_number("") expected None, got {result}')
result = misc.coerce_page_number([])
if result is not None:
pytest.fail(f"_coerce_page_number([]) expected None, got {result}")
result = misc.coerce_page_number({})
if result is not None:
pytest.fail(f"_coerce_page_number({{}}) expected None, got {result}")
result = misc.coerce_page_number(3.14)
if result is not None:
pytest.fail(f"_coerce_page_number(3.14) expected None, got {result}")
def test_coerce_page_number_with_whole_number_float(self) -> None:
"""Test that whole-number floats are accepted and converted to int."""
result = misc.coerce_page_number(2.0)
if result != 2:
pytest.fail(f"_coerce_page_number(2.0) expected 2, got {result}")
result = misc.coerce_page_number(0.0)
if result != 0:
pytest.fail(f"_coerce_page_number(0.0) expected 0, got {result}")
result = misc.coerce_page_number(42.0)
if result != 42:
pytest.fail(f"_coerce_page_number(42.0) expected 42, got {result}")
@pytest.mark.asyncio
async def test_fetch_published_ads_single_page_no_paging(self, bot:KleinanzeigenBot) -> None:
"""Test fetching ads from single page with no paging info."""
with patch.object(bot, "web_request", new_callable = AsyncMock) as mock_request:
mock_request.return_value = {"content": '{"ads": [{"id": 1, "title": "Ad 1"}, {"id": 2, "title": "Ad 2"}]}'}
result = await bot._fetch_published_ads()
if len(result) != 2:
pytest.fail(f"Expected 2 results, got {len(result)}")
if result[0]["id"] != 1:
pytest.fail(f"Expected result[0]['id'] == 1, got {result[0]['id']}")
if result[1]["id"] != 2:
pytest.fail(f"Expected result[1]['id'] == 2, got {result[1]['id']}")
mock_request.assert_awaited_once_with(f"{bot.root_url}/m-meine-anzeigen-verwalten.json?sort=DEFAULT&pageNum=1")
@pytest.mark.asyncio
async def test_fetch_published_ads_single_page_with_paging(self, bot:KleinanzeigenBot) -> None:
"""Test fetching ads from single page with paging info showing 1/1."""
response_data = {"ads": [{"id": 1, "title": "Ad 1"}], "paging": {"pageNum": 1, "last": 1}}
with patch.object(bot, "web_request", new_callable = AsyncMock) as mock_request:
mock_request.return_value = {"content": json.dumps(response_data)}
result = await bot._fetch_published_ads()
if len(result) != 1:
pytest.fail(f"Expected 1 ad, got {len(result)}")
if result[0].get("id") != 1:
pytest.fail(f"Expected ad id 1, got {result[0].get('id')}")
mock_request.assert_awaited_once_with(f"{bot.root_url}/m-meine-anzeigen-verwalten.json?sort=DEFAULT&pageNum=1")
@pytest.mark.asyncio
async def test_fetch_published_ads_multi_page(self, bot:KleinanzeigenBot) -> None:
"""Test fetching ads from multiple pages (3 pages, 2 ads each)."""
page1_data = {"ads": [{"id": 1}, {"id": 2}], "paging": {"pageNum": 1, "last": 3, "next": 2}}
page2_data = {"ads": [{"id": 3}, {"id": 4}], "paging": {"pageNum": 2, "last": 3, "next": 3}}
page3_data = {"ads": [{"id": 5}, {"id": 6}], "paging": {"pageNum": 3, "last": 3}}
with patch.object(bot, "web_request", new_callable = AsyncMock) as mock_request:
mock_request.side_effect = [
{"content": json.dumps(page1_data)},
{"content": json.dumps(page2_data)},
{"content": json.dumps(page3_data)},
]
result = await bot._fetch_published_ads()
if len(result) != 6:
pytest.fail(f"Expected 6 ads but got {len(result)}")
if [ad["id"] for ad in result] != [1, 2, 3, 4, 5, 6]:
pytest.fail(f"Expected ids [1, 2, 3, 4, 5, 6] but got {[ad['id'] for ad in result]}")
if mock_request.call_count != 3:
pytest.fail(f"Expected 3 web_request calls but got {mock_request.call_count}")
mock_request.assert_any_await(f"{bot.root_url}/m-meine-anzeigen-verwalten.json?sort=DEFAULT&pageNum=1")
mock_request.assert_any_await(f"{bot.root_url}/m-meine-anzeigen-verwalten.json?sort=DEFAULT&pageNum=2")
mock_request.assert_any_await(f"{bot.root_url}/m-meine-anzeigen-verwalten.json?sort=DEFAULT&pageNum=3")
@pytest.mark.asyncio
async def test_fetch_published_ads_empty_list(self, bot:KleinanzeigenBot) -> None:
"""Test handling of empty ads list."""
response_data = {"ads": [], "paging": {"pageNum": 1, "last": 1}}
with patch.object(bot, "web_request", new_callable = AsyncMock) as mock_request:
mock_request.return_value = {"content": json.dumps(response_data)}
result = await bot._fetch_published_ads()
if not isinstance(result, list):
pytest.fail(f"expected result to be list, got {type(result).__name__}")
if len(result) != 0:
pytest.fail(f"expected empty list from _fetch_published_ads, got {len(result)} items")
@pytest.mark.asyncio
async def test_fetch_published_ads_invalid_json(self, bot:KleinanzeigenBot) -> None:
"""Test handling of invalid JSON response."""
with patch.object(bot, "web_request", new_callable = AsyncMock) as mock_request:
mock_request.return_value = {"content": "invalid json"}
result = await bot._fetch_published_ads()
if result != []:
pytest.fail(f"Expected empty list on invalid JSON, got {result}")
@pytest.mark.asyncio
async def test_fetch_published_ads_missing_paging_dict(self, bot:KleinanzeigenBot) -> None:
"""Test handling of missing paging dict."""
response_data = {"ads": [{"id": 1}, {"id": 2}]}
with patch.object(bot, "web_request", new_callable = AsyncMock) as mock_request:
mock_request.return_value = {"content": json.dumps(response_data)}
result = await bot._fetch_published_ads()
if len(result) != 2:
pytest.fail(f"expected 2 ads, got {len(result)}")
mock_request.assert_awaited_once()
@pytest.mark.asyncio
async def test_fetch_published_ads_non_integer_paging_values(self, bot:KleinanzeigenBot) -> None:
"""Test handling of non-integer paging values."""
response_data = {"ads": [{"id": 1}], "paging": {"pageNum": "invalid", "last": "also-invalid"}}
with patch.object(bot, "web_request", new_callable = AsyncMock) as mock_request:
mock_request.return_value = {"content": json.dumps(response_data)}
result = await bot._fetch_published_ads()
# Should return ads from first page and stop due to invalid paging
if len(result) != 1:
pytest.fail(f"Expected 1 ad, got {len(result)}")
if result[0].get("id") != 1:
pytest.fail(f"Expected ad id 1, got {result[0].get('id')}")
@pytest.mark.asyncio
async def test_fetch_published_ads_non_list_ads(self, bot:KleinanzeigenBot) -> None:
"""Test handling of non-list ads field."""
response_data = {"ads": "not a list", "paging": {"pageNum": 1, "last": 1}}
with patch.object(bot, "web_request", new_callable = AsyncMock) as mock_request:
mock_request.return_value = {"content": json.dumps(response_data)}
result = await bot._fetch_published_ads()
# Should return empty list when ads is not a list
if not isinstance(result, list):
pytest.fail(f"expected empty list when 'ads' is not a list, got: {result}")
if len(result) != 0:
pytest.fail(f"expected empty list when 'ads' is not a list, got: {result}")
@pytest.mark.asyncio
async def test_fetch_published_ads_timeout(self, bot:KleinanzeigenBot) -> None:
"""Test handling of timeout during pagination."""
with patch.object(bot, "web_request", new_callable = AsyncMock) as mock_request:
mock_request.side_effect = TimeoutError("timeout")
result = await bot._fetch_published_ads()
if result != []:
pytest.fail(f"Expected empty list on timeout, got {result}")

@@ -0,0 +1,107 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import builtins, importlib, sys # isort: skip
from unittest import mock
import pytest
from kleinanzeigen_bot.utils.i18n import Locale
# --- Platform-specific test for Windows double-click guard ---
@pytest.mark.parametrize(
("compiled_exe", "windows_double_click_launch", "expected_error_msg_lang"),
[
(True, True, "en"), # Windows Explorer double-click - English locale
(True, True, "de"), # Windows Explorer double-click - German locale
(True, False, None), # Windows Terminal launch - compiled exe
(False, False, None), # Windows Terminal launch - from source code
],
)
@pytest.mark.skipif(sys.platform != "win32", reason = "ctypes.windll only exists on Windows")
def test_guard_triggers_on_double_click_windows(
monkeypatch:pytest.MonkeyPatch,
capsys:pytest.CaptureFixture[str],
compiled_exe:bool,
windows_double_click_launch:bool | None,
expected_error_msg_lang:str | None
) -> None:
# Prevent blocking in tests
monkeypatch.setattr(builtins, "input", lambda: None)
# Simulate target platform
monkeypatch.setattr(sys, "platform", "win32")
# Simulate compiled executable
monkeypatch.setattr(
"kleinanzeigen_bot.utils.misc.is_frozen",
lambda: compiled_exe,
)
# Force specific locale
if expected_error_msg_lang:
monkeypatch.setattr(
"kleinanzeigen_bot.utils.i18n.get_current_locale",
lambda: Locale(expected_error_msg_lang),
)
# Spy on sys.exit
exit_mock = mock.Mock(wraps = sys.exit)
monkeypatch.setattr(sys, "exit", exit_mock)
# Simulate double-click launch on Windows
if windows_double_click_launch is not None:
pid_count = 2 if windows_double_click_launch else 3 # 2 -> Explorer, 3 -> Terminal
k32 = mock.Mock()
k32.GetConsoleProcessList.return_value = pid_count
monkeypatch.setattr("ctypes.windll.kernel32", k32)
# Reload module to pick up system monkeypatches
guard = importlib.reload(
importlib.import_module("kleinanzeigen_bot.utils.launch_mode_guard")
)
if expected_error_msg_lang:
with pytest.raises(SystemExit) as exc:
guard.ensure_not_launched_from_windows_explorer()
assert exc.value.code == 1
exit_mock.assert_called_once_with(1)
captured = capsys.readouterr()
if expected_error_msg_lang == "de":
assert "Du hast das Programm scheinbar per Doppelklick gestartet." in captured.err
else:
assert "It looks like you launched it by double-clicking the EXE." in captured.err
assert not captured.out # nothing to stdout
else:
guard.ensure_not_launched_from_windows_explorer()
exit_mock.assert_not_called()
captured = capsys.readouterr()
assert not captured.err # nothing to stderr
# --- Platform-agnostic tests for non-Windows and non-frozen code paths ---
@pytest.mark.parametrize(
("platform", "compiled_exe"),
[
("linux", True),
("linux", False),
("darwin", True),
("darwin", False),
],
)
def test_guard_non_windows_and_non_frozen(
monkeypatch:pytest.MonkeyPatch,
platform:str,
compiled_exe:bool
) -> None:
monkeypatch.setattr(sys, "platform", platform)
monkeypatch.setattr("kleinanzeigen_bot.utils.misc.is_frozen", lambda: compiled_exe)
# Reload module to pick up system monkeypatches
guard = importlib.reload(
importlib.import_module("kleinanzeigen_bot.utils.launch_mode_guard")
)
# Should not raise or print anything
guard.ensure_not_launched_from_windows_explorer()
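The guard tests hinge on `GetConsoleProcessList`: a console shared only by the process itself and its conhost (2 attached PIDs) indicates an Explorer double-click, while a shell launch adds the parent shell (3+). A sketch of that detection heuristic (hypothetical helper name; the real guard additionally checks `misc.is_frozen()` and prints a localized message before exiting):

```python
import ctypes
import sys

def launched_from_windows_explorer() -> bool:
    """Heuristic: a console owned only by this process (and its conhost)
    means the EXE was double-clicked rather than started from a shell."""
    if sys.platform != "win32":
        return False  # only meaningful on Windows
    kernel32 = ctypes.windll.kernel32  # type: ignore[attr-defined]
    process_list = (ctypes.c_uint * 16)()
    count = kernel32.GetConsoleProcessList(process_list, 16)
    return count <= 2  # 2 -> Explorer double-click, 3+ -> started from a terminal
```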

tests/unit/test_net.py
@@ -0,0 +1,64 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""Tests for the network utilities module.
Covers port availability checking functionality.
"""
import socket
from typing import Generator
from unittest.mock import MagicMock, patch
import pytest
from kleinanzeigen_bot.utils.net import is_port_open
# --------------------------------------------------------------------------- #
# Test fixtures
# --------------------------------------------------------------------------- #
@pytest.fixture
def mock_socket() -> Generator[MagicMock, None, None]:
"""Create a mock socket for testing."""
with patch("socket.socket") as mock:
yield mock
# --------------------------------------------------------------------------- #
# Test cases
# --------------------------------------------------------------------------- #
class TestIsPortOpen:
"""Test port availability checking functionality."""
def test_port_open(self, mock_socket:MagicMock) -> None:
"""Test when port is open."""
mock_socket.return_value.connect.return_value = None
assert is_port_open("localhost", 8080) is True
mock_socket.return_value.connect.assert_called_once_with(("localhost", 8080))
mock_socket.return_value.close.assert_called_once()
def test_port_closed(self, mock_socket:MagicMock) -> None:
"""Test when port is closed."""
mock_socket.return_value.connect.side_effect = socket.error
assert is_port_open("localhost", 8080) is False
mock_socket.return_value.connect.assert_called_once_with(("localhost", 8080))
mock_socket.return_value.close.assert_called_once()
def test_connection_timeout(self, mock_socket:MagicMock) -> None:
"""Test when connection times out."""
mock_socket.return_value.connect.side_effect = socket.timeout
assert is_port_open("localhost", 8080) is False
mock_socket.return_value.connect.assert_called_once_with(("localhost", 8080))
mock_socket.return_value.close.assert_called_once()
def test_socket_creation_failure(self, mock_socket:MagicMock) -> None:
"""Test when socket creation fails."""
mock_socket.side_effect = socket.error
assert is_port_open("localhost", 8080) is False
mock_socket.assert_called_once()
# Ensure no close is called since socket creation failed
mock_socket.return_value.close.assert_not_called()
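These four cases fully describe `is_port_open`: connect success means open, any socket error or timeout means closed, and the socket is closed only if it was created. A sketch consistent with those assertions (the `timeout` parameter is an assumption not exercised by the tests):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    except OSError:
        return False  # socket creation itself failed; nothing to close
    try:
        sock.settimeout(timeout)
        sock.connect((host, port))
        return True
    except OSError:  # covers socket.error and socket.timeout (both OSError aliases)
        return False
    finally:
        sock.close()  # always close a socket that was successfully created
```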

@@ -0,0 +1,560 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import logging
from datetime import datetime, timedelta, timezone
from gettext import gettext as _
from types import SimpleNamespace
from typing import Any, Protocol, runtime_checkable
import pytest
import kleinanzeigen_bot
from kleinanzeigen_bot.model.ad_model import calculate_auto_price
from kleinanzeigen_bot.model.config_model import AutoPriceReductionConfig
from kleinanzeigen_bot.utils.pydantics import ContextualValidationError
@runtime_checkable
class _ApplyAutoPriceReduction(Protocol):
def __call__(self, ad_cfg:SimpleNamespace, ad_cfg_orig:dict[str, Any], ad_file_relative:str) -> None:
...
@pytest.fixture
def apply_auto_price_reduction() -> _ApplyAutoPriceReduction:
# Return the module-level function directly (no more name-mangling!)
return kleinanzeigen_bot.apply_auto_price_reduction # type: ignore[return-value]
@pytest.mark.unit
def test_initial_posting_uses_base_price() -> None:
config = AutoPriceReductionConfig(enabled = True, strategy = "PERCENTAGE", amount = 10, min_price = 50)
assert calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = 0) == 100
@pytest.mark.unit
def test_auto_price_returns_none_without_base_price() -> None:
config = AutoPriceReductionConfig(enabled = True, strategy = "PERCENTAGE", amount = 10, min_price = 10)
assert calculate_auto_price(base_price = None, auto_price_reduction = config, target_reduction_cycle = 3) is None
@pytest.mark.unit
def test_negative_price_reduction_count_is_treated_like_zero() -> None:
config = AutoPriceReductionConfig(enabled = True, strategy = "PERCENTAGE", amount = 25, min_price = 50)
assert calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = -3) == 100
@pytest.mark.unit
def test_missing_price_reduction_returns_base_price() -> None:
assert calculate_auto_price(base_price = 150, auto_price_reduction = None, target_reduction_cycle = 4) == 150
@pytest.mark.unit
def test_percentage_reduction_on_float_rounds_half_up() -> None:
config = AutoPriceReductionConfig(enabled = True, strategy = "PERCENTAGE", amount = 12.5, min_price = 50)
assert calculate_auto_price(base_price = 99.99, auto_price_reduction = config, target_reduction_cycle = 1) == 87
@pytest.mark.unit
def test_fixed_reduction_on_float_rounds_half_up() -> None:
config = AutoPriceReductionConfig(enabled = True, strategy = "FIXED", amount = 12.4, min_price = 50)
assert calculate_auto_price(base_price = 80.51, auto_price_reduction = config, target_reduction_cycle = 1) == 68
@pytest.mark.unit
def test_percentage_price_reduction_over_time() -> None:
config = AutoPriceReductionConfig(enabled = True, strategy = "PERCENTAGE", amount = 10, min_price = 50)
assert calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = 1) == 90
assert calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = 2) == 81
assert calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = 3) == 73
@pytest.mark.unit
def test_fixed_price_reduction_over_time() -> None:
config = AutoPriceReductionConfig(enabled = True, strategy = "FIXED", amount = 15, min_price = 50)
assert calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = 1) == 85
assert calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = 2) == 70
assert calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = 3) == 55
@pytest.mark.unit
def test_min_price_boundary_is_respected() -> None:
config = AutoPriceReductionConfig(enabled = True, strategy = "FIXED", amount = 20, min_price = 50)
assert calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = 5) == 50
@pytest.mark.unit
def test_min_price_zero_is_allowed() -> None:
config = AutoPriceReductionConfig(enabled = True, strategy = "FIXED", amount = 5, min_price = 0)
assert calculate_auto_price(base_price = 20, auto_price_reduction = config, target_reduction_cycle = 5) == 0
@pytest.mark.unit
def test_missing_min_price_raises_error() -> None:
# min_price validation happens at config initialization when enabled=True
with pytest.raises(ContextualValidationError, match = "min_price must be specified"):
AutoPriceReductionConfig.model_validate({"enabled": True, "strategy": "PERCENTAGE", "amount": 50, "min_price": None})
@pytest.mark.unit
def test_percentage_above_100_raises_error() -> None:
with pytest.raises(ContextualValidationError, match = "Percentage reduction amount must not exceed 100"):
AutoPriceReductionConfig.model_validate({"enabled": True, "strategy": "PERCENTAGE", "amount": 150, "min_price": 50})
@pytest.mark.unit
def test_feature_disabled_path_leaves_price_unchanged() -> None:
config = AutoPriceReductionConfig(enabled = False, strategy = "PERCENTAGE", amount = 25, min_price = 50)
price = calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = 4)
assert price == 100
@pytest.mark.unit
def test_apply_auto_price_reduction_disabled_emits_no_decision_logs(
caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction
) -> None:
ad_cfg = SimpleNamespace(
price = 100,
auto_price_reduction = AutoPriceReductionConfig(
enabled = False,
strategy = "PERCENTAGE",
amount = 10,
min_price = 50,
delay_reposts = 0,
delay_days = 0,
),
price_reduction_count = 0,
repost_count = 0,
updated_on = None,
created_on = None,
)
with caplog.at_level(logging.INFO):
apply_auto_price_reduction(ad_cfg, {}, "ad_disabled.yaml")
assert not any("Auto price reduction decision for" in message for message in caplog.messages)
@pytest.mark.unit
def test_apply_auto_price_reduction_logs_drop(caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction) -> None:
ad_cfg = SimpleNamespace(
price = 200,
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "PERCENTAGE",
amount = 25,
min_price = 50,
delay_reposts = 0,
delay_days = 0,
),
price_reduction_count = 0,
repost_count = 1,
updated_on = None,
created_on = None,
)
ad_orig:dict[str, Any] = {}
with caplog.at_level(logging.INFO):
apply_auto_price_reduction(ad_cfg, ad_orig, "ad_test.yaml")
expected = _("Auto price reduction applied: %s -> %s after %s reduction cycles") % (200, 150, 1)
assert any(expected in message for message in caplog.messages)
assert ad_cfg.price == 150
assert ad_cfg.price_reduction_count == 1
# Note: price_reduction_count is NOT persisted to ad_orig until after successful publish
assert "price_reduction_count" not in ad_orig
@pytest.mark.unit
def test_apply_auto_price_reduction_logs_unchanged_price_at_floor(
caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction
) -> None:
# Test scenario: price has been reduced to just above min_price,
# and the next reduction would drop it below, so it gets clamped
ad_cfg = SimpleNamespace(
price = 95,
auto_price_reduction = AutoPriceReductionConfig(enabled = True, strategy = "FIXED", amount = 10, min_price = 90, delay_reposts = 0, delay_days = 0),
price_reduction_count = 0,
repost_count = 1,
updated_on = None,
created_on = None,
)
ad_orig:dict[str, Any] = {}
with caplog.at_level(logging.INFO):
apply_auto_price_reduction(ad_cfg, ad_orig, "ad_test.yaml")
# Price: 95 - 10 = 85, clamped to 90 (floor)
# So the effective price is 90, not 95, meaning reduction was applied
expected = _("Auto price reduction applied: %s -> %s after %s reduction cycles") % (95, 90, 1)
assert any(expected in message for message in caplog.messages)
assert ad_cfg.price == 90
assert ad_cfg.price_reduction_count == 1
assert "price_reduction_count" not in ad_orig
@pytest.mark.unit
def test_apply_auto_price_reduction_warns_when_price_missing(caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction) -> None:
ad_cfg = SimpleNamespace(
price = None,
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "PERCENTAGE",
amount = 25,
min_price = 10,
delay_reposts = 0,
delay_days = 0,
),
price_reduction_count = 2,
repost_count = 2,
updated_on = None,
created_on = None,
)
ad_orig:dict[str, Any] = {}
with caplog.at_level(logging.WARNING):
apply_auto_price_reduction(ad_cfg, ad_orig, "ad_warning.yaml")
expected = _("Auto price reduction is enabled for [%s] but no price is configured.") % ("ad_warning.yaml",)
assert any(expected in message for message in caplog.messages)
assert ad_cfg.price is None
@pytest.mark.unit
def test_apply_auto_price_reduction_warns_when_min_price_equals_price(
caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction
) -> None:
ad_cfg = SimpleNamespace(
price = 100,
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "PERCENTAGE",
amount = 25,
min_price = 100,
delay_reposts = 0,
delay_days = 0,
),
price_reduction_count = 0,
repost_count = 1,
updated_on = None,
created_on = None,
)
ad_orig:dict[str, Any] = {}
with caplog.at_level(logging.WARNING):
apply_auto_price_reduction(ad_cfg, ad_orig, "ad_equal_prices.yaml")
expected = _("Auto price reduction is enabled for [%s] but min_price equals price (%s) - no reductions will occur.") % ("ad_equal_prices.yaml", 100)
assert any(expected in message for message in caplog.messages)
assert ad_cfg.price == 100
assert ad_cfg.price_reduction_count == 0
@pytest.mark.unit
def test_apply_auto_price_reduction_respects_repost_delay(caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction) -> None:
ad_cfg = SimpleNamespace(
price = 200,
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "PERCENTAGE",
amount = 25,
min_price = 50,
delay_reposts = 3,
delay_days = 0,
),
price_reduction_count = 0,
repost_count = 2,
updated_on = None,
created_on = None,
)
ad_orig:dict[str, Any] = {}
with caplog.at_level(logging.DEBUG):
apply_auto_price_reduction(ad_cfg, ad_orig, "ad_delay.yaml")
assert ad_cfg.price == 200
delayed_message = _("Auto price reduction delayed for [%s]: waiting %s more reposts (completed %s, applied %s reductions)") % ("ad_delay.yaml", 2, 2, 0)
assert any(delayed_message in message for message in caplog.messages)
decision_message = (
"Auto price reduction decision for [ad_delay.yaml]: skipped (repost delay). "
"next reduction earliest at repost >= 4 and day delay 0/0 days. repost_count=2 eligible_cycles=0 applied_cycles=0"
)
assert any(message.startswith(decision_message) for message in caplog.messages)
@pytest.mark.unit
def test_apply_auto_price_reduction_after_repost_delay_reduces_once(apply_auto_price_reduction:_ApplyAutoPriceReduction) -> None:
ad_cfg = SimpleNamespace(
price = 100,
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "PERCENTAGE",
amount = 10,
min_price = 50,
delay_reposts = 2,
delay_days = 0,
),
price_reduction_count = 0,
repost_count = 3,
updated_on = None,
created_on = None,
)
ad_cfg_orig:dict[str, Any] = {}
apply_auto_price_reduction(ad_cfg, ad_cfg_orig, "ad_after_delay.yaml")
assert ad_cfg.price == 90
assert ad_cfg.price_reduction_count == 1
# Note: price_reduction_count is NOT persisted to ad_orig until after successful publish
assert "price_reduction_count" not in ad_cfg_orig
@pytest.mark.unit
def test_apply_auto_price_reduction_waits_when_reduction_already_applied(
caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction
) -> None:
ad_cfg = SimpleNamespace(
price = 100,
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "PERCENTAGE",
amount = 10,
min_price = 50,
delay_reposts = 0,
delay_days = 0,
),
price_reduction_count = 3,
repost_count = 3,
updated_on = None,
created_on = None,
)
ad_orig:dict[str, Any] = {}
with caplog.at_level(logging.DEBUG, logger = "kleinanzeigen_bot"):
apply_auto_price_reduction(ad_cfg, ad_orig, "ad_already.yaml")
expected = _("Auto price reduction already applied for [%s]: %s reductions match %s eligible reposts") % ("ad_already.yaml", 3, 3)
assert any(expected in message for message in caplog.messages)
decision_message = (
"Auto price reduction decision for [ad_already.yaml]: skipped (repost delay). "
"next reduction earliest at repost >= 4 and day delay 0/0 days. repost_count=3 eligible_cycles=3 applied_cycles=3"
)
assert any(message.startswith(decision_message) for message in caplog.messages)
assert ad_cfg.price == 100
assert ad_cfg.price_reduction_count == 3
assert "price_reduction_count" not in ad_orig
@pytest.mark.unit
def test_apply_auto_price_reduction_respects_day_delay(
monkeypatch:pytest.MonkeyPatch, caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction
) -> None:
reference = datetime(2025, 1, 1, tzinfo = timezone.utc)
ad_cfg = SimpleNamespace(
price = 150,
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "PERCENTAGE",
amount = 25,
min_price = 50,
delay_reposts = 0,
delay_days = 3,
),
price_reduction_count = 0,
repost_count = 1,
updated_on = reference,
created_on = reference,
)
monkeypatch.setattr("kleinanzeigen_bot.misc.now", lambda: reference + timedelta(days = 1))
ad_orig:dict[str, Any] = {}
with caplog.at_level("INFO"):
apply_auto_price_reduction(ad_cfg, ad_orig, "ad_delay_days.yaml")
assert ad_cfg.price == 150
delayed_message = _("Auto price reduction delayed for [%s]: waiting %s days (elapsed %s)") % ("ad_delay_days.yaml", 3, 1)
assert any(delayed_message in message for message in caplog.messages)
@pytest.mark.unit
def test_apply_auto_price_reduction_runs_after_delays(monkeypatch:pytest.MonkeyPatch, apply_auto_price_reduction:_ApplyAutoPriceReduction) -> None:
reference = datetime(2025, 1, 1, tzinfo = timezone.utc)
ad_cfg = SimpleNamespace(
price = 120,
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "PERCENTAGE",
amount = 25,
min_price = 60,
delay_reposts = 2,
delay_days = 3,
),
price_reduction_count = 0,
repost_count = 3,
updated_on = reference - timedelta(days = 5),
created_on = reference - timedelta(days = 10),
)
monkeypatch.setattr("kleinanzeigen_bot.misc.now", lambda: reference)
ad_orig:dict[str, Any] = {}
apply_auto_price_reduction(ad_cfg, ad_orig, "ad_ready.yaml")
assert ad_cfg.price == 90
@pytest.mark.unit
def test_apply_auto_price_reduction_delayed_when_timestamp_missing(
caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction
) -> None:
ad_cfg = SimpleNamespace(
price = 200,
auto_price_reduction = AutoPriceReductionConfig(enabled = True, strategy = "FIXED", amount = 20, min_price = 50, delay_reposts = 0, delay_days = 2),
price_reduction_count = 0,
repost_count = 1,
updated_on = None,
created_on = None,
)
ad_orig:dict[str, Any] = {}
with caplog.at_level("INFO"):
apply_auto_price_reduction(ad_cfg, ad_orig, "ad_missing_time.yaml")
expected = _("Auto price reduction delayed for [%s]: waiting %s days but publish timestamp missing") % ("ad_missing_time.yaml", 2)
assert any(expected in message for message in caplog.messages)
@pytest.mark.unit
def test_fractional_reduction_increments_counter_even_when_price_unchanged(
caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction
) -> None:
# Test that small fractional reductions increment the counter even when rounded price doesn't change
# This allows cumulative reductions to eventually show visible effect
ad_cfg = SimpleNamespace(
price = 100,
auto_price_reduction = AutoPriceReductionConfig(enabled = True, strategy = "FIXED", amount = 0.3, min_price = 50, delay_reposts = 0, delay_days = 0),
price_reduction_count = 0,
repost_count = 1,
updated_on = None,
created_on = None,
)
ad_orig:dict[str, Any] = {}
with caplog.at_level(logging.INFO):
apply_auto_price_reduction(ad_cfg, ad_orig, "ad_fractional.yaml")
# Price: 100 - 0.3 = 99.7, rounds to 100 (no visible change)
# But counter should still increment for future cumulative reductions
expected = _("Auto price reduction kept price %s after attempting %s reduction cycles") % (100, 1)
assert any(expected in message for message in caplog.messages)
assert ad_cfg.price == 100
assert ad_cfg.price_reduction_count == 1 # Counter incremented despite no visible price change
assert "price_reduction_count" not in ad_orig
@pytest.mark.unit
def test_apply_auto_price_reduction_verbose_logs_trace(caplog:pytest.LogCaptureFixture, apply_auto_price_reduction:_ApplyAutoPriceReduction) -> None:
ad_cfg = SimpleNamespace(
price = 200,
auto_price_reduction = AutoPriceReductionConfig(
enabled = True,
strategy = "PERCENTAGE",
amount = 25,
min_price = 50,
delay_reposts = 0,
delay_days = 0,
),
price_reduction_count = 0,
repost_count = 1,
updated_on = None,
created_on = None,
)
with caplog.at_level(logging.DEBUG, logger = "kleinanzeigen_bot"):
apply_auto_price_reduction(ad_cfg, {}, "ad_trace.yaml")
assert any("Auto price reduction trace for [ad_trace.yaml]" in message for message in caplog.messages)
assert any(" -> cycle=1 before=200 reduction=50.0 after_rounding=150 floor_applied=False" in message for message in caplog.messages)
@pytest.mark.unit
def test_reduction_value_zero_raises_error() -> None:
with pytest.raises(ContextualValidationError, match = "Input should be greater than 0"):
AutoPriceReductionConfig.model_validate({"enabled": True, "strategy": "PERCENTAGE", "amount": 0, "min_price": 50})
@pytest.mark.unit
def test_reduction_value_negative_raises_error() -> None:
with pytest.raises(ContextualValidationError, match = "Input should be greater than 0"):
AutoPriceReductionConfig.model_validate({"enabled": True, "strategy": "FIXED", "amount": -5, "min_price": 50})
@pytest.mark.unit
def test_percentage_reduction_100_percent() -> None:
config = AutoPriceReductionConfig(enabled = True, strategy = "PERCENTAGE", amount = 100, min_price = 0)
assert calculate_auto_price(base_price = 150, auto_price_reduction = config, target_reduction_cycle = 1) == 0
@pytest.mark.unit
def test_extreme_reduction_cycles() -> None:
# Test that extreme cycle counts don't cause performance issues or errors
config = AutoPriceReductionConfig(enabled = True, strategy = "PERCENTAGE", amount = 10, min_price = 0)
result = calculate_auto_price(base_price = 1000, auto_price_reduction = config, target_reduction_cycle = 100)
# With commercial rounding (round after each step), price stabilizes at 5
# because 5 * 0.9 = 4.5 rounds back to 5 with ROUND_HALF_UP
assert result == 5
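# Illustrative standalone sketch (not part of this suite): the fixed point noted
# above can be reproduced with plain Decimal arithmetic. Rounding half-up after
# each 10% step means 5 * 0.9 = 4.5 rounds straight back to 5, so the price can
# never fall further no matter how many cycles run.
from decimal import ROUND_HALF_UP, Decimal
def reduce_with_commercial_rounding(price:int, percent:int, cycles:int) -> int:
    factor = Decimal(100 - percent) / Decimal(100)
    value = Decimal(price)
    for _ in range(cycles):
        # round commercially after every step, as the test above describes
        value = (value * factor).to_integral_value(rounding = ROUND_HALF_UP)
    return int(value)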
@pytest.mark.unit
def test_commercial_rounding_each_step() -> None:
"""Test that commercial rounding is applied after each reduction step, not just at the end."""
config = AutoPriceReductionConfig(enabled = True, strategy = "PERCENTAGE", amount = 10, min_price = 0)
# With 135 EUR and 2x 10% reduction:
# Step 1: 135 * 0.9 = 121.5 → rounds to 122 EUR
# Step 2: 122 * 0.9 = 109.8 → rounds to 110 EUR
# (Without intermediate rounding, it would be: 135 * 0.9^2 = 109.35 → 109 EUR)
result = calculate_auto_price(base_price = 135, auto_price_reduction = config, target_reduction_cycle = 2)
assert result == 110 # Commercial rounding result
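# Illustrative standalone sketch (not part of this suite): contrasting per-step
# commercial rounding with rounding only once at the end, for the 135 EUR
# example above.
from decimal import ROUND_HALF_UP, Decimal
def _half_up(value:Decimal) -> int:
    return int(value.to_integral_value(rounding = ROUND_HALF_UP))
def compare_rounding_orders() -> tuple[int, int]:
    stepwise = _half_up(Decimal(_half_up(Decimal("135") * Decimal("0.9"))) * Decimal("0.9"))  # 135 -> 122 -> 110
    once_at_end = _half_up(Decimal("135") * Decimal("0.9") * Decimal("0.9"))  # 109.35 -> 109
    return stepwise, once_at_end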
@pytest.mark.unit
def test_extreme_reduction_cycles_with_floor() -> None:
# Test that extreme cycles stop at min_price and don't cause issues
config = AutoPriceReductionConfig(enabled = True, strategy = "PERCENTAGE", amount = 10, min_price = 50)
result = calculate_auto_price(base_price = 1000, auto_price_reduction = config, target_reduction_cycle = 1000)
# Should stop at min_price, not go to 0, regardless of cycle count
assert result == 50
@pytest.mark.unit
def test_fractional_min_price_is_rounded_up_with_ceiling() -> None:
# Test that fractional min_price is rounded UP using ROUND_CEILING
# This prevents the price from going below min_price due to int() conversion
# Example: min_price=90.5 should become floor of 91, not 90
config = AutoPriceReductionConfig(enabled = True, strategy = "FIXED", amount = 10, min_price = 90.5)
# Start at 100, reduce by 10 = 90
# But min_price=90.5 rounds UP to 91 with ROUND_CEILING
# So the result should be 91, not 90
result = calculate_auto_price(base_price = 100, auto_price_reduction = config, target_reduction_cycle = 1)
assert result == 91 # Rounded up from 90.5 floor
# Verify with another fractional value
config2 = AutoPriceReductionConfig(enabled = True, strategy = "FIXED", amount = 5, min_price = 49.1)
result2 = calculate_auto_price(
base_price = 60,
auto_price_reduction = config2,
target_reduction_cycle = 3, # 60 - 5 - 5 - 5 = 45, clamped to ceil(49.1) = 50
)
assert result2 == 50 # Rounded up from 49.1 floor
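# Illustrative standalone sketch (not part of this suite): rounding a fractional
# min_price up with ROUND_CEILING before clamping, as the assertions above
# describe. `clamp_to_floor` is a hypothetical helper, not the project's API.
from decimal import ROUND_CEILING, Decimal
def clamp_to_floor(price:int, min_price:str) -> int:
    floor = int(Decimal(min_price).to_integral_value(rounding = ROUND_CEILING))
    return max(price, floor)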


@@ -0,0 +1,300 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""Tests for the pydantics utilities module.
Covers ContextualValidationError, ContextualModel, and format_validation_error.
"""
from typing import Any, TypedDict, cast
import pytest
from pydantic import BaseModel, ValidationError
from pydantic_core import ErrorDetails as PydanticErrorDetails
from typing_extensions import NotRequired
from kleinanzeigen_bot.utils.pydantics import (
ContextualModel,
ContextualValidationError,
format_validation_error,
)
class ErrorDetails(TypedDict):
loc:tuple[str, ...]
msg:str
type:str
input:NotRequired[Any]
ctx:NotRequired[dict[str, Any]]
# --------------------------------------------------------------------------- #
# Test fixtures
# --------------------------------------------------------------------------- #
@pytest.fixture
def context() -> dict[str, Any]:
"""Fixture for a sample context."""
return {"user": "test", "reason": "unit-test"}
# --------------------------------------------------------------------------- #
# Test cases
# --------------------------------------------------------------------------- #
class TestContextualValidationError:
"""Test ContextualValidationError behavior."""
def test_context_attached(self, context:dict[str, Any]) -> None:
"""Context is attached to the exception."""
ex = ContextualValidationError("test", [])
ex.context = context
assert ex.context == context
def test_context_missing(self) -> None:
"""Context is missing (default)."""
ex = ContextualValidationError("test", [])
assert not hasattr(ex, "context") or ex.context is None
class TestContextualModel:
"""Test ContextualModel validation logic."""
class SimpleModel(ContextualModel): # type: ignore[unused-ignore,misc]
x:int
def test_model_validate_success(self) -> None:
"""Valid input returns a model instance."""
result = self.SimpleModel.model_validate({"x": 42})
assert isinstance(result, self.SimpleModel)
assert result.x == 42
def test_model_validate_failure_with_context(self, context:dict[str, Any]) -> None:
"""Invalid input raises ContextualValidationError with context."""
with pytest.raises(ContextualValidationError) as exc_info:
self.SimpleModel.model_validate({"x": "not-an-int"}, context = context)
assert exc_info.value.context == context
class TestFormatValidationError:
"""Test format_validation_error output."""
class SimpleModel(BaseModel):
y:int
@pytest.mark.parametrize(
("error_details", "expected"),
[
# Standard error with known code and context
(
[{"loc": ("foo",), "msg": "dummy", "type": "int_parsing", "ctx": {}}],
"Input should be a valid integer, unable to parse string as an integer",
),
# Error with context variable
(
[{"loc": ("bar",), "msg": "dummy", "type": "greater_than", "ctx": {"gt": 5}}],
"greater than 5",
),
# Error with unknown code
(
[{"loc": ("baz",), "msg": "dummy", "type": "unknown_code"}],
"[type=unknown_code]",
),
# Error with message template containing ' or '
(
[{"loc": ("qux",), "msg": "dummy", "type": "enum", "ctx": {"expected": "'a' or 'b'"}}],
"' or '",
),
# Error with no context
(
[{"loc": ("nocontext",), "msg": "dummy", "type": "string_type"}],
"Input should be a valid string",
),
# Date/time related errors
(
[{"loc": ("date",), "msg": "dummy", "type": "date_parsing", "ctx": {"error": "invalid format"}}],
"Input should be a valid date in the format YYYY-MM-DD",
),
(
[{"loc": ("datetime",), "msg": "dummy", "type": "datetime_parsing", "ctx": {"error": "invalid format"}}],
"Input should be a valid datetime",
),
(
[{"loc": ("time",), "msg": "dummy", "type": "time_parsing", "ctx": {"error": "invalid format"}}],
"Input should be in a valid time format",
),
# URL related errors
(
[{"loc": ("url",), "msg": "dummy", "type": "url_parsing", "ctx": {"error": "invalid format"}}],
"Input should be a valid URL",
),
(
[{"loc": ("url_scheme",), "msg": "dummy", "type": "url_scheme", "ctx": {"expected_schemes": "http,https"}}],
"URL scheme should be http,https",
),
# UUID related errors
(
[{"loc": ("uuid",), "msg": "dummy", "type": "uuid_parsing", "ctx": {"error": "invalid format"}}],
"Input should be a valid UUID",
),
(
[{"loc": ("uuid_version",), "msg": "dummy", "type": "uuid_version", "ctx": {"expected_version": 4}}],
"UUID version 4 expected",
),
# Decimal related errors
(
[{"loc": ("decimal",), "msg": "dummy", "type": "decimal_parsing"}],
"Input should be a valid decimal",
),
(
[{"loc": ("decimal_max_digits",), "msg": "dummy", "type": "decimal_max_digits", "ctx": {"max_digits": 10, "expected_plural": "s"}}],
"Decimal input should have no more than 10 digits in total",
),
(
[{"loc": ("decimal_max_places",), "msg": "dummy", "type": "decimal_max_places", "ctx": {"decimal_places": 2, "expected_plural": "s"}}],
"Decimal input should have no more than 2 decimal places",
),
(
[{"loc": ("decimal_whole_digits",), "msg": "dummy", "type": "decimal_whole_digits", "ctx": {"whole_digits": 3, "expected_plural": ""}}],
"Decimal input should have no more than 3 digits before the decimal point",
),
# Complex number related errors
(
[{"loc": ("complex",), "msg": "dummy", "type": "complex_type"}],
"Input should be a valid python complex object",
),
(
[{"loc": ("complex_str",), "msg": "dummy", "type": "complex_str_parsing"}],
"Input should be a valid complex string",
),
# List/sequence related errors
(
[{"loc": ("list",), "msg": "dummy", "type": "list_type"}],
"Input should be a valid list",
),
(
[{"loc": ("tuple",), "msg": "dummy", "type": "tuple_type"}],
"Input should be a valid tuple",
),
(
[{"loc": ("set",), "msg": "dummy", "type": "set_type"}],
"Input should be a valid set",
),
# String related errors
(
[{"loc": ("string_pattern",), "msg": "dummy", "type": "string_pattern_mismatch", "ctx": {"pattern": r"\d+"}}],
"String should match pattern '\\d+'",
),
(
[{"loc": ("string_length",), "msg": "dummy", "type": "string_too_short", "ctx": {"min_length": 5, "expected_plural": "s"}}],
"String should have at least 5 characters",
),
# Number related errors
(
[{"loc": ("float",), "msg": "dummy", "type": "float_type"}],
"Input should be a valid number",
),
(
[{"loc": ("int",), "msg": "dummy", "type": "int_type"}],
"Input should be a valid integer",
),
# Boolean related errors
(
[{"loc": ("bool",), "msg": "dummy", "type": "bool_type"}],
"Input should be a valid boolean",
),
(
[{"loc": ("bool_parsing",), "msg": "dummy", "type": "bool_parsing"}],
"Input should be a valid boolean, unable to interpret input",
),
],
)
def test_various_error_codes(self, error_details:list[dict[str, Any]], expected:str) -> None:
"""Test various error codes and message formatting."""
class DummyValidationError(ValidationError):
def errors(self, *, include_url:bool = True, include_context:bool = True, include_input:bool = True) -> list[PydanticErrorDetails]:
return cast(list[PydanticErrorDetails], error_details)
def error_count(self) -> int:
return len(error_details)
@property
def title(self) -> str:
return "Dummy"
ex = DummyValidationError("dummy", [])
out = format_validation_error(ex)
assert expected in out, f"Expected '{expected}' in output: {out}"
def test_format_standard_validation_error(self) -> None:
"""Standard ValidationError produces expected string."""
try:
self.SimpleModel(y = "not an int") # type: ignore[arg-type]
except ValidationError as ex:
out = format_validation_error(ex)
assert "validation error" in out
assert "y" in out
assert "integer" in out
def test_format_contextual_validation_error(self, context:dict[str, Any]) -> None:
"""ContextualValidationError includes context in output."""
class Model(ContextualModel): # type: ignore[unused-ignore,misc]
z:int
with pytest.raises(ContextualValidationError) as exc_info:
Model.model_validate({"z": "not an int"}, context = context)
assert exc_info.value.context == context
def test_format_unknown_error_code(self) -> None:
"""Unknown error code falls back to default formatting."""
class DummyValidationError(ValidationError):
def errors(self, *, include_url:bool = True, include_context:bool = True, include_input:bool = True) -> list[PydanticErrorDetails]:
return cast(list[PydanticErrorDetails], [{"loc": ("foo",), "msg": "dummy", "type": "unknown_code", "input": None}])
def error_count(self) -> int:
return 1
@property
def title(self) -> str:
return "Dummy"
ex = DummyValidationError("dummy", [])
out = format_validation_error(ex)
assert "foo" in out
assert "dummy" in out
assert "[type=unknown_code]" in out
def test_pluralization_and_empty_errors(self) -> None:
"""Test pluralization in header and empty error list edge case."""
class DummyValidationError(ValidationError):
def errors(self, *, include_url:bool = True, include_context:bool = True, include_input:bool = True) -> list[PydanticErrorDetails]:
return cast(list[PydanticErrorDetails], [
{"loc": ("a",), "msg": "dummy", "type": "int_type"},
{"loc": ("b",), "msg": "dummy", "type": "int_type"},
])
def error_count(self) -> int:
return 2
@property
def title(self) -> str:
return "Dummy"
ex1 = DummyValidationError("dummy", [])
out = format_validation_error(ex1)
assert "2 validation errors" in out
assert "a" in out
assert "b" in out
# Empty error list
class EmptyValidationError(ValidationError):
def errors(self, *, include_url:bool = True, include_context:bool = True, include_input:bool = True) -> list[PydanticErrorDetails]:
return cast(list[PydanticErrorDetails], [])
def error_count(self) -> int:
return 0
@property
def title(self) -> str:
return "Empty"
ex2 = EmptyValidationError("empty", [])
out = format_validation_error(ex2)
assert "0 validation errors" in out
assert out.count("-") == 0
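# Illustrative standalone sketch (not part of this suite): a hypothetical helper
# producing the pluralized header the test above checks for; the project's real
# formatting lives in format_validation_error.
def error_header(error_count:int) -> str:
    suffix = "" if error_count == 1 else "s"
    return f"{error_count} validation error{suffix}"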


@@ -0,0 +1,204 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import json
from datetime import timedelta
from pathlib import Path
from unittest.mock import patch
import pytest
from kleinanzeigen_bot.utils import misc
from kleinanzeigen_bot.utils.timing_collector import RETENTION_DAYS, TimingCollector
pytestmark = pytest.mark.unit
class TestTimingCollector:
def test_output_dir_resolves_to_given_path(self, tmp_path:Path) -> None:
collector = TimingCollector(tmp_path / "xdg-cache" / "timing", "publish")
assert collector.output_dir == (tmp_path / "xdg-cache" / "timing").resolve()
def test_flush_writes_session_data(self, tmp_path:Path, monkeypatch:pytest.MonkeyPatch) -> None:
monkeypatch.chdir(tmp_path)
collector = TimingCollector(tmp_path / ".temp" / "timing", "publish")
collector.record(
key = "default",
operation_type = "web_find",
description = "web_find(ID, submit)",
configured_timeout = 5.0,
effective_timeout = 5.0,
actual_duration = 0.4,
attempt_index = 0,
success = True,
)
file_path = collector.flush()
assert file_path is not None
assert file_path.exists()
data = json.loads(file_path.read_text(encoding = "utf-8"))
assert isinstance(data, list)
assert len(data) == 1
assert data[0]["command"] == "publish"
assert len(data[0]["records"]) == 1
assert data[0]["records"][0]["operation_key"] == "default"
def test_flush_prunes_old_and_malformed_sessions(self, tmp_path:Path, monkeypatch:pytest.MonkeyPatch) -> None:
monkeypatch.chdir(tmp_path)
output_dir = tmp_path / ".temp" / "timing"
output_dir.mkdir(parents = True, exist_ok = True)
data_path = output_dir / "timing_data.json"
old_started = (misc.now() - timedelta(days = RETENTION_DAYS + 1)).isoformat()
recent_started = (misc.now() - timedelta(days = 2)).isoformat()
existing_payload = [
{
"session_id": "old-session",
"command": "publish",
"started_at": old_started,
"ended_at": old_started,
"records": [],
},
{
"session_id": "recent-session",
"command": "publish",
"started_at": recent_started,
"ended_at": recent_started,
"records": [],
},
{
"session_id": "malformed-session",
"command": "publish",
"started_at": "not-a-datetime",
"ended_at": "not-a-datetime",
"records": [],
},
]
data_path.write_text(json.dumps(existing_payload), encoding = "utf-8")
collector = TimingCollector(tmp_path / ".temp" / "timing", "verify")
collector.record(
key = "default",
operation_type = "web_find",
description = "web_find(ID, submit)",
configured_timeout = 5.0,
effective_timeout = 5.0,
actual_duration = 0.2,
attempt_index = 0,
success = True,
)
file_path = collector.flush()
assert file_path is not None
data = json.loads(file_path.read_text(encoding = "utf-8"))
session_ids = [session["session_id"] for session in data]
assert "old-session" not in session_ids
assert "malformed-session" not in session_ids
assert "recent-session" in session_ids
assert collector.session_id in session_ids
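# Illustrative standalone sketch (not part of this suite): the pruning rule the
# test above exercises. Sessions older than the retention window or carrying an
# unparsable started_at timestamp are dropped; everything else is kept.
# `prune_sessions` is a hypothetical helper, not TimingCollector's actual API.
from datetime import datetime, timedelta
from typing import Any
def prune_sessions(sessions:list[dict[str, Any]], retention_days:int, now:datetime) -> list[dict[str, Any]]:
    cutoff = now - timedelta(days = retention_days)
    kept = []
    for session in sessions:
        try:
            started_at = datetime.fromisoformat(str(session.get("started_at", "")))
        except ValueError:
            continue  # malformed timestamp -> drop the session
        if started_at >= cutoff:
            kept.append(session)
    return kept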
def test_flush_returns_none_when_already_flushed(self, tmp_path:Path, monkeypatch:pytest.MonkeyPatch) -> None:
monkeypatch.chdir(tmp_path)
collector = TimingCollector(tmp_path / ".temp" / "timing", "publish")
collector.record(
key = "default",
operation_type = "web_find",
description = "web_find(ID, submit)",
configured_timeout = 5.0,
effective_timeout = 5.0,
actual_duration = 0.1,
attempt_index = 0,
success = True,
)
first = collector.flush()
second = collector.flush()
assert first is not None
assert second is None
def test_flush_returns_none_when_no_records(self, tmp_path:Path, monkeypatch:pytest.MonkeyPatch) -> None:
monkeypatch.chdir(tmp_path)
collector = TimingCollector(tmp_path / ".temp" / "timing", "publish")
assert collector.flush() is None
def test_flush_recovers_from_corrupted_json(self, tmp_path:Path, monkeypatch:pytest.MonkeyPatch) -> None:
monkeypatch.chdir(tmp_path)
output_dir = tmp_path / ".temp" / "timing"
output_dir.mkdir(parents = True, exist_ok = True)
data_path = output_dir / "timing_data.json"
data_path.write_text("{ this is invalid json", encoding = "utf-8")
collector = TimingCollector(tmp_path / ".temp" / "timing", "verify")
collector.record(
key = "default",
operation_type = "web_find",
description = "web_find(ID, submit)",
configured_timeout = 5.0,
effective_timeout = 5.0,
actual_duration = 0.1,
attempt_index = 0,
success = True,
)
file_path = collector.flush()
assert file_path is not None
payload = json.loads(file_path.read_text(encoding = "utf-8"))
assert isinstance(payload, list)
assert len(payload) == 1
assert payload[0]["session_id"] == collector.session_id
def test_flush_ignores_non_list_payload(self, tmp_path:Path, monkeypatch:pytest.MonkeyPatch) -> None:
monkeypatch.chdir(tmp_path)
output_dir = tmp_path / ".temp" / "timing"
output_dir.mkdir(parents = True, exist_ok = True)
data_path = output_dir / "timing_data.json"
data_path.write_text(json.dumps({"unexpected": "shape"}), encoding = "utf-8")
collector = TimingCollector(tmp_path / ".temp" / "timing", "verify")
collector.record(
key = "default",
operation_type = "web_find",
description = "web_find(ID, submit)",
configured_timeout = 5.0,
effective_timeout = 5.0,
actual_duration = 0.1,
attempt_index = 0,
success = True,
)
file_path = collector.flush()
assert file_path is not None
payload = json.loads(file_path.read_text(encoding = "utf-8"))
assert isinstance(payload, list)
assert len(payload) == 1
assert payload[0]["session_id"] == collector.session_id
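# Illustrative standalone sketch (not part of this suite): the defensive load
# pattern the two recovery tests above rely on. Corrupted JSON and payloads of
# an unexpected shape both degrade to an empty session list instead of raising.
# `load_sessions` is a hypothetical helper, not TimingCollector's actual API.
import json
from typing import Any
def load_sessions(raw_text:str) -> list[Any]:
    try:
        payload = json.loads(raw_text)
    except json.JSONDecodeError:
        return []  # corrupted file -> start fresh
    return payload if isinstance(payload, list) else []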
def test_flush_returns_none_when_write_raises(self, tmp_path:Path, monkeypatch:pytest.MonkeyPatch) -> None:
monkeypatch.chdir(tmp_path)
collector = TimingCollector(tmp_path / ".temp" / "timing", "verify")
collector.record(
key = "default",
operation_type = "web_find",
description = "web_find(ID, submit)",
configured_timeout = 5.0,
effective_timeout = 5.0,
actual_duration = 0.1,
attempt_index = 0,
success = True,
)
with patch.object(Path, "mkdir", side_effect = OSError("cannot create dir")):
assert collector.flush() is None


@@ -0,0 +1,436 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
"""
This module contains tests for verifying the completeness and correctness of translations in the project.
It ensures that:
1. All log messages in the code have corresponding translations
2. All translations in the YAML files are actually used in the code
3. No obsolete translations exist in the YAML files
The tests work by:
1. Extracting all translatable messages from Python source files
2. Loading translations from YAML files
3. Comparing the extracted messages with translations
4. Verifying no unused translations exist
"""
import ast, os # isort: skip
from collections import defaultdict
from dataclasses import dataclass
from importlib.resources import files
import pytest
from ruamel.yaml import YAML
from kleinanzeigen_bot import resources
# Messages that are intentionally not translated (internal/debug messages)
EXCLUDED_MESSAGES:dict[str, set[str]] = {
"kleinanzeigen_bot/__init__.py": {"############################################"}
}
# Special modules that are known to be needed even if not in messages_by_file
KNOWN_NEEDED_MODULES = {"getopt.py"}
# Type aliases for better readability
ModulePath = str
FunctionName = str
Message = str
TranslationDict = dict[ModulePath, dict[FunctionName, dict[Message, str]]]
MessageDict = dict[FunctionName, dict[Message, set[Message]]]
MissingDict = dict[FunctionName, dict[Message, set[Message]]]
@dataclass
class MessageLocation:
"""Represents the location of a message in the codebase."""
module:str
function:str
message:str
def _get_function_name(node:ast.AST) -> str:
"""
Get the name of the function containing this AST node.
This matches i18n.py's behavior which only uses the function name for translation lookups.
For module-level code, returns "module" to match i18n.py's convention.
Args:
node: The AST node to analyze
Returns:
The function name or "module" for module-level code
"""
def find_parent_context(n:ast.AST) -> tuple[str | None, str | None]:
"""Find the containing class and function names."""
class_name = None
function_name = None
current = n
while hasattr(current, "_parent"):
current = getattr(current, "_parent")
if isinstance(current, ast.ClassDef) and not class_name:
class_name = current.name
elif isinstance(current, (ast.FunctionDef, ast.AsyncFunctionDef)) and not function_name:
function_name = current.name
break # We only need the immediate function name
return class_name, function_name
_, function_name = find_parent_context(node)
if function_name:
return function_name
return "module" # For module-level code
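# Illustrative standalone sketch (not part of this module): resolving the
# enclosing function of a call by climbing the same kind of _parent links that
# _extract_log_messages attaches below.
def enclosing_function_name(source:str) -> str:
    import ast  # re-imported so the sketch stands alone
    tree = ast.parse(source)
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            setattr(child, "_parent", parent)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        current:ast.AST = node
        while hasattr(current, "_parent"):
            current = getattr(current, "_parent")
            if isinstance(current, (ast.FunctionDef, ast.AsyncFunctionDef)):
                return current.name
        return "module"  # call site at module level
    return "module"  # no call found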
def _extract_log_messages(file_path:str, exclude_debug:bool = False) -> MessageDict:
"""
Extract all translatable messages from a Python file with their function context.
Args:
file_path: Path to the Python file to analyze
exclude_debug: If True, skip LOG.debug(...) messages during extraction
Returns:
Dictionary mapping function names to their messages
"""
with open(file_path, "r", encoding = "utf-8") as file:
tree = ast.parse(file.read(), filename = file_path)
# Add parent references for context tracking
for parent in ast.walk(tree):
for child in ast.iter_child_nodes(parent):
setattr(child, "_parent", parent)
messages:MessageDict = defaultdict(lambda: defaultdict(set))
def add_message(function:str, msg:str) -> None:
"""Add a message to the messages dictionary (the nested defaultdicts create missing keys)."""
messages[function][msg].add(msg)
def extract_string_constant(node:ast.AST) -> str | None:
"""Safely extract string value from an AST node."""
if isinstance(node, ast.Constant):
value = getattr(node, "value", None)
return value if isinstance(value, str) else None
return None
for node in ast.walk(tree):
if not isinstance(node, ast.Call):
continue
function_name = _get_function_name(node)
# Extract messages from various call types
# 1) Logging calls: LOG.info(…), logger.warning(…), etc.
if (
isinstance(node.func, ast.Attribute) and
isinstance(node.func.value, ast.Name) and
node.func.value.id in {"LOG", "logger", "logging"} and
node.func.attr in ({"info", "warning", "error", "exception", "critical"} if exclude_debug else {"debug", "info", "warning", "error", "exception", "critical"})
):
if node.args:
msg = extract_string_constant(node.args[0])
if msg:
add_message(function_name, msg)
# 2) gettext: _("…") or obj.gettext("…")
elif (
(isinstance(node.func, ast.Name) and node.func.id == "_") or
(isinstance(node.func, ast.Attribute) and node.func.attr == "gettext")
):
if node.args:
msg = extract_string_constant(node.args[0])
if msg:
add_message(function_name, msg)
# Handle other translatable function calls
elif isinstance(node.func, ast.Name) and node.func.id in {"ainput", "pluralize", "ensure"}:
arg_index = 1 if node.func.id == "ensure" else 0
if len(node.args) > arg_index:
msg = extract_string_constant(node.args[arg_index])
if msg:
add_message(function_name, msg)
print(f"Messages: {len(messages)} in {file_path}")
return messages
def _get_all_log_messages(exclude_debug:bool = False) -> dict[str, MessageDict]:
"""
Get all translatable messages from all Python files in the project.
Returns:
Dictionary mapping module paths to their function messages
"""
src_dir = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), "src", "kleinanzeigen_bot")
print(f"\nScanning for messages in directory: {src_dir}")
messages_by_file:dict[str, MessageDict] = {
# Special case for getopt.py which is imported
"getopt.py": {
"do_longs": {
"option --%s requires argument": {"option --%s requires argument"},
"option --%s must not have an argument": {"option --%s must not have an argument"}
},
"long_has_args": {
"option --%s not recognized": {"option --%s not recognized"},
"option --%s not a unique prefix": {"option --%s not a unique prefix"}
},
"do_shorts": {
"option -%s requires argument": {"option -%s requires argument"}
},
"short_has_arg": {
"option -%s not recognized": {"option -%s not recognized"}
}
}
}
for root, _, filenames in os.walk(src_dir):
for filename in filenames:
if filename.endswith(".py"):
file_path = os.path.join(root, filename)
relative_path = os.path.relpath(file_path, src_dir)
if relative_path.startswith("resources/"):
continue
messages = _extract_log_messages(file_path, exclude_debug)
if messages:
module_path = os.path.join("kleinanzeigen_bot", relative_path)
module_path = module_path.replace(os.sep, "/")
messages_by_file[module_path] = messages
return messages_by_file
def _get_available_languages() -> list[str]:
"""
Get list of available translation languages from translation files.
Returns:
List of language codes (e.g. ['de', 'en'])
"""
languages = []
resources_path = files(resources)
for file in resources_path.iterdir():
if file.name.startswith("translations.") and file.name.endswith(".yaml"):
lang = file.name[13:-5] # Remove "translations." and ".yaml"
languages.append(lang)
return sorted(languages)
def _get_translations_for_language(lang:str) -> TranslationDict:
"""
Get translations for a specific language from its YAML file.
Args:
lang: Language code (e.g. 'de')
Returns:
Dictionary containing all translations for the language
"""
yaml = YAML(typ = "safe")
translation_file = f"translations.{lang}.yaml"
print(f"Loading translations from {translation_file}")
content = files(resources).joinpath(translation_file).read_text()
translations = yaml.load(content) or {}
return translations
def _find_translation(translations:TranslationDict,
module:str,
function:str,
message:str) -> bool:
"""
Check if a translation exists for a given message in the exact location where i18n.py will look.
This matches the lookup logic in i18n.py which uses dicts.safe_get().
Args:
translations: Dictionary of all translations
module: Module path
function: Function name
message: Message to find translation for
Returns:
True if translation exists in the correct location, False otherwise
"""
# Special case for getopt.py
if module == "getopt.py":
return bool(translations.get(module, {}).get(function, {}).get(message))
# Add kleinanzeigen_bot/ prefix if not present
module_path = f"kleinanzeigen_bot/{module}" if not module.startswith("kleinanzeigen_bot/") else module
# Check if module exists in translations
module_trans = translations.get(module_path, {})
if not isinstance(module_trans, dict):
print(f"Module {module_path} translations is not a dictionary")
return False
# Check if function exists in module translations
function_trans = module_trans.get(function, {})
if not isinstance(function_trans, dict):
print(f"Function {function} translations in module {module_path} is not a dictionary")
return False
# Check if message exists in function translations
has_translation = message in function_trans
return has_translation
def _message_exists_in_code(code_messages:dict[str, MessageDict],
module:str,
function:str,
message:str) -> bool:
"""
Check if a message exists in the code at the given location.
This is the reverse of _find_translation - it checks if a translation's message
exists in the code messages.
Args:
code_messages: Dictionary of all code messages
module: Module path
function: Function name
message: Message to find in code
Returns:
True if message exists in the code, False otherwise
"""
# Special case for getopt.py
if module == "getopt.py":
return bool(code_messages.get(module, {}).get(function, {}).get(message))
# Normalize to a kleinanzeigen_bot/-prefixed path for code message lookup
module_path = module if module.startswith("kleinanzeigen_bot/") else f"kleinanzeigen_bot/{module}"
# Check if module exists in code messages
module_msgs = code_messages.get(module_path)
if not module_msgs:
return False
# Check if function exists in module messages
function_msgs = module_msgs.get(function)
if not function_msgs:
return False
# Check if message exists in any of the function's message sets
return any(message in msg_dict for msg_dict in function_msgs.values())
@pytest.mark.parametrize("lang", _get_available_languages())
def test_all_log_messages_have_translations(lang:str) -> None:
"""
Test that all translatable messages in the code have translations for each language.
This test ensures that no untranslated messages exist in the codebase.
"""
messages_by_file = _get_all_log_messages(exclude_debug = True)
translations = _get_translations_for_language(lang)
missing_translations = []
for module, functions in messages_by_file.items():
excluded = EXCLUDED_MESSAGES.get(module, set())
for function, messages in functions.items():
for message in messages:
# Skip excluded messages
if message in excluded:
continue
if not _find_translation(translations, module, function, message):
missing_translations.append(MessageLocation(module, function, message))
if missing_translations:
missing_str = f"\nPlease add the following missing translations for language [{lang}]:\n"
def make_inner_dict() -> defaultdict[str, set[str]]:
return defaultdict(set)
by_module:defaultdict[str, defaultdict[str, set[str]]] = defaultdict(make_inner_dict)
for loc in missing_translations:
assert isinstance(loc.module, str), "Module must be a string"
assert isinstance(loc.function, str), "Function must be a string"
assert isinstance(loc.message, str), "Message must be a string"
by_module[loc.module][loc.function].add(loc.message)
# mypy reports a false-positive assignment error here because `module` and
# `functions` were bound earlier in this test with different value types
for module, functions in sorted(by_module.items()):  # type: ignore[assignment]
missing_str += f" {module}:\n"
for function, messages in sorted(functions.items()):
missing_str += f" {function}:\n"
for message in sorted(messages):
missing_str += f' "{message}"\n'
raise AssertionError(missing_str)
@pytest.mark.parametrize("lang", _get_available_languages())
def test_no_obsolete_translations(lang:str) -> None:
"""
Test that all translations in each language YAML file are actually used in the code.
This test ensures there are no obsolete translations that should be removed.
The translations file has the structure:
module:
function:
"original message": "translated message"
"""
messages_by_file = _get_all_log_messages(exclude_debug = False)
translations = _get_translations_for_language(lang)
# ignore values that are not in code
del translations["kleinanzeigen_bot/utils/loggers.py"]["format"]["CRITICAL"]
del translations["kleinanzeigen_bot/utils/loggers.py"]["format"]["ERROR"]
del translations["kleinanzeigen_bot/utils/loggers.py"]["format"]["WARNING"]
obsolete_items:list[tuple[str, str, str]] = []
for module, module_trans in translations.items():
if not isinstance(module_trans, dict):
continue
# Skip known needed modules
if module in KNOWN_NEEDED_MODULES:
continue
for function, function_trans in module_trans.items():
if not isinstance(function_trans, dict):
continue
for original_message in function_trans:
# Check if this message exists in the code
message_exists = _message_exists_in_code(messages_by_file, module, function, original_message)
if not message_exists:
obsolete_items.append((module, function, original_message))
# Fail the test if obsolete translations are found
if obsolete_items:
obsolete_str = f"\nObsolete translations found for language [{lang}]:\n"
# Group by module and function for better readability
by_module:defaultdict[str, defaultdict[str, list[str]]] = defaultdict(lambda: defaultdict(list))
for module, function, message in obsolete_items:
by_module[module][function].append(message)
for module, functions in sorted(by_module.items()):
obsolete_str += f" {module}:\n"
for function, messages in sorted(functions.items()):
obsolete_str += f" {function}:\n"
for message in sorted(messages):
obsolete_str += f' "{message}": "{translations[module][function][message]}"\n'
raise AssertionError(obsolete_str)
def test_translation_files_exist() -> None:
"""Test that at least one translation file exists."""
languages = _get_available_languages()
if not languages:
raise AssertionError("No translation files found! Expected at least one translations.*.yaml file.")

# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
import json
import logging
from datetime import datetime, timedelta, timezone, tzinfo
from typing import TYPE_CHECKING, Any, cast
from unittest.mock import MagicMock, patch
import pytest
import requests
if TYPE_CHECKING:
from pathlib import Path
from pytest_mock import MockerFixture
from kleinanzeigen_bot.model import update_check_state as update_check_state_module
from kleinanzeigen_bot.model.config_model import Config
from kleinanzeigen_bot.model.update_check_state import UpdateCheckState
from kleinanzeigen_bot.update_checker import UpdateChecker
def _freeze_update_state_datetime(monkeypatch:pytest.MonkeyPatch, fixed_now:datetime) -> None:
"""Patch UpdateCheckState to return a deterministic datetime.now/utcnow."""
class FixedDateTime(datetime):
@classmethod
def now(cls, tz:tzinfo | None = None) -> "FixedDateTime":
base = fixed_now.replace(tzinfo = None) if tz is None else fixed_now.astimezone(tz)
return cls(
base.year,
base.month,
base.day,
base.hour,
base.minute,
base.second,
base.microsecond,
tzinfo = base.tzinfo
)
@classmethod
def utcnow(cls) -> "FixedDateTime":
base = fixed_now.astimezone(timezone.utc).replace(tzinfo = None)
return cls(
base.year,
base.month,
base.day,
base.hour,
base.minute,
base.second,
base.microsecond
)
datetime_module = getattr(update_check_state_module, "datetime")
monkeypatch.setattr(datetime_module, "datetime", FixedDateTime)
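# Minimal standalone sketch of the freezing technique used above: subclassing
# datetime and overriding now() yields a deterministic clock without touching
# the system time. The fixed value here is arbitrary and for demonstration only.
def _demo_frozen_now() -> datetime:
    fixed = datetime(2025, 1, 1, 12, 0, tzinfo = timezone.utc)

    class _Frozen(datetime):
        @classmethod
        def now(cls, tz:tzinfo | None = None) -> "_Frozen":
            base = fixed if tz is None else fixed.astimezone(tz)
            return cls(base.year, base.month, base.day, base.hour, base.minute,
                       base.second, base.microsecond, tzinfo = base.tzinfo)

    return _Frozen.now(timezone.utc)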
@pytest.fixture
def config() -> Config:
return Config.model_validate({
"update_check": {
"enabled": True,
"channel": "latest",
"interval": "7d"
}
})
@pytest.fixture
def state_file(tmp_path:Path) -> Path:
return tmp_path / "update_check_state.json"
class TestUpdateChecker:
"""Tests for the update checker functionality."""
def test_get_local_version(self, config:Config, state_file:Path) -> None:
"""Test that the local version is correctly retrieved."""
checker = UpdateChecker(config, state_file)
assert checker.get_local_version() is not None
def test_get_commit_hash(self, config:Config, state_file:Path) -> None:
"""Test that the commit hash is correctly extracted from the version string."""
checker = UpdateChecker(config, state_file)
assert checker._get_commit_hash("2025+fb00f11") == "fb00f11"
assert checker._get_commit_hash("2025") is None
def test_resolve_commitish(self, config:Config, state_file:Path) -> None:
"""Test that a commit-ish is resolved to a full hash and date."""
checker = UpdateChecker(config, state_file)
with patch(
"requests.get",
return_value = MagicMock(json = lambda: {"sha": "e7a3d46", "commit": {"author": {"date": "2025-05-18T00:00:00Z"}}})
):
commit_hash, commit_date = checker._resolve_commitish("latest")
assert commit_hash == "e7a3d46"
assert commit_date == datetime(2025, 5, 18, tzinfo = timezone.utc)
def test_request_timeout_uses_config(self, config:Config, state_file:Path, mocker:"MockerFixture") -> None:
"""Ensure HTTP calls honor the timeout configuration."""
config.timeouts.multiplier = 1.5
checker = UpdateChecker(config, state_file)
mock_response = MagicMock(json = lambda: {"sha": "abc", "commit": {"author": {"date": "2025-05-18T00:00:00Z"}}})
mock_get = mocker.patch("requests.get", return_value = mock_response)
checker._resolve_commitish("latest")
expected_timeout = config.timeouts.effective("update_check")
assert mock_get.call_args.kwargs["timeout"] == expected_timeout
def test_resolve_commitish_no_commit(self, config:Config, state_file:Path, mocker:"MockerFixture") -> None:
"""Test resolving a commit-ish when the API returns no commit data."""
checker = UpdateChecker(config, state_file)
mocker.patch("requests.get", return_value = mocker.Mock(json = lambda: {"sha": "abc"}))
commit_hash, commit_date = checker._resolve_commitish("sha")
assert commit_hash == "abc"
assert commit_date is None
def test_resolve_commitish_logs_warning_on_exception(
self,
config:Config,
state_file:Path,
caplog:pytest.LogCaptureFixture
) -> None:
"""Test resolving a commit-ish logs a warning when the request fails."""
caplog.set_level("WARNING", logger = "kleinanzeigen_bot.update_checker")
checker = UpdateChecker(config, state_file)
with patch("requests.get", side_effect = Exception("boom")):
commit_hash, commit_date = checker._resolve_commitish("sha")
assert commit_hash is None
assert commit_date is None
assert any("Could not resolve commit 'sha': boom" in r.getMessage() for r in caplog.records)
def test_commits_match_short_hash(self, config:Config, state_file:Path) -> None:
"""Test that short commit hashes are treated as matching prefixes."""
checker = UpdateChecker(config, state_file)
assert checker._commits_match("abc1234", "abc1234def5678") is True
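# Sketch of the prefix semantics exercised above (a hypothetical helper, not
# the bot's actual implementation): two commit ids are treated as matching if
# either one is a prefix of the other, so short hashes compare against full ones.
def _demo_commits_match(a:str, b:str) -> bool:
    return a.startswith(b) or b.startswith(a)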
def test_check_for_updates_disabled(self, config:Config, state_file:Path) -> None:
"""Test that the update checker does not check for updates if disabled."""
config.update_check.enabled = False
checker = UpdateChecker(config, state_file)
with patch("requests.get") as mock_get:
checker.check_for_updates()
mock_get.assert_not_called()
def test_check_for_updates_no_local_version(self, config:Config, state_file:Path) -> None:
"""Test that the update checker handles the case where the local version cannot be determined."""
checker = UpdateChecker(config, state_file)
with patch.object(UpdateCheckState, "should_check", return_value = True), \
patch.object(UpdateChecker, "get_local_version", return_value = None):
checker.check_for_updates() # Should not raise exception
def test_check_for_updates_logs_missing_local_version(
self,
config:Config,
state_file:Path,
caplog:pytest.LogCaptureFixture
) -> None:
"""Test that the update checker logs a warning when the local version is missing."""
caplog.set_level("WARNING", logger = "kleinanzeigen_bot.update_checker")
checker = UpdateChecker(config, state_file)
with patch.object(UpdateCheckState, "should_check", return_value = True), \
patch.object(UpdateChecker, "get_local_version", return_value = None):
checker.check_for_updates()
assert any("Could not determine local version." in r.getMessage() for r in caplog.records)
def test_check_for_updates_no_commit_hash(self, config:Config, state_file:Path) -> None:
"""Test that the update checker handles the case where the commit hash cannot be extracted."""
checker = UpdateChecker(config, state_file)
with patch.object(UpdateChecker, "get_local_version", return_value = "2025"):
checker.check_for_updates() # Should not raise exception
def test_check_for_updates_no_releases(self, config:Config, state_file:Path) -> None:
"""Test that the update checker handles the case where no releases are found."""
checker = UpdateChecker(config, state_file)
with patch("requests.get", return_value = MagicMock(json = list)):
checker.check_for_updates() # Should not raise exception
def test_check_for_updates_api_error(self, config:Config, state_file:Path) -> None:
"""Test that the update checker handles API errors gracefully."""
checker = UpdateChecker(config, state_file)
with patch("requests.get", side_effect = Exception("API Error")):
checker.check_for_updates() # Should not raise exception
def test_check_for_updates_latest_prerelease_warning(
self,
config:Config,
state_file:Path,
mocker:"MockerFixture",
caplog:pytest.LogCaptureFixture
) -> None:
"""Test that the update checker warns when latest points to a prerelease."""
caplog.set_level("WARNING", logger = "kleinanzeigen_bot.update_checker")
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(json = lambda: {"tag_name": "latest", "prerelease": True})
)
checker = UpdateChecker(config, state_file)
checker.check_for_updates()
expected = "Latest release from GitHub is a prerelease, but 'latest' channel expects a stable release."
assert any(expected in r.getMessage() for r in caplog.records)
def test_check_for_updates_ahead(self, config:Config, state_file:Path, mocker:"MockerFixture", caplog:pytest.LogCaptureFixture) -> None:
"""Test that the update checker correctly identifies when the local version is ahead of the latest release."""
caplog.set_level("INFO", logger = "kleinanzeigen_bot.update_checker")
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(
UpdateChecker,
"_resolve_commitish",
side_effect = [
("fb00f11", datetime(2025, 5, 18, tzinfo = timezone.utc)),
("e7a3d46", datetime(2025, 5, 16, tzinfo = timezone.utc))
]
)
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(
json = lambda: {"tag_name": "latest", "prerelease": False}
)
)
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker = UpdateChecker(config, state_file)
checker.check_for_updates()
print("LOG RECORDS:")
for r in caplog.records:
print(f"{r.levelname}: {r.getMessage()}")
expected = (
"You are on a different commit than the release for channel 'latest' (tag: latest). This may mean you are ahead, behind, or on a different branch. "
"Local commit: fb00f11 (2025-05-18 00:00:00 UTC), Release commit: e7a3d46 (2025-05-16 00:00:00 UTC)"
)
assert any(expected in r.getMessage() for r in caplog.records)
def test_check_for_updates_preview(self, config:Config, state_file:Path, mocker:"MockerFixture", caplog:pytest.LogCaptureFixture) -> None:
"""Test that the update checker correctly handles preview releases."""
caplog.set_level("INFO", logger = "kleinanzeigen_bot.update_checker")
config.update_check.channel = "preview"
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(
UpdateChecker,
"_resolve_commitish",
side_effect = [
("fb00f11", datetime(2025, 5, 18, tzinfo = timezone.utc)),
("e7a3d46", datetime(2025, 5, 16, tzinfo = timezone.utc))
]
)
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(
json = lambda: [{"tag_name": "preview", "prerelease": True, "draft": False}]
)
)
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker = UpdateChecker(config, state_file)
checker.check_for_updates()
print("LOG RECORDS:")
for r in caplog.records:
print(f"{r.levelname}: {r.getMessage()}")
expected = (
"You are on a different commit than the release for channel 'preview' (tag: preview). "
"This may mean you are ahead, behind, or on a different branch. "
"Local commit: fb00f11 (2025-05-18 00:00:00 UTC), Release commit: e7a3d46 (2025-05-16 00:00:00 UTC)"
)
assert any(expected in r.getMessage() for r in caplog.records)
def test_check_for_updates_preview_missing_prerelease(
self,
config:Config,
state_file:Path,
mocker:"MockerFixture",
caplog:pytest.LogCaptureFixture
) -> None:
"""Test that the update checker warns when no preview prerelease is available."""
caplog.set_level("WARNING", logger = "kleinanzeigen_bot.update_checker")
config.update_check.channel = "preview"
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(json = lambda: [{"tag_name": "v1", "prerelease": False, "draft": False}])
)
checker = UpdateChecker(config, state_file)
checker.check_for_updates()
assert any("No prerelease found for 'preview' channel." in r.getMessage() for r in caplog.records)
def test_check_for_updates_behind(self, config:Config, state_file:Path, mocker:"MockerFixture", caplog:pytest.LogCaptureFixture) -> None:
"""Test that the update checker correctly identifies when the local version is behind the latest release."""
caplog.set_level("INFO", logger = "kleinanzeigen_bot.update_checker")
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(
UpdateChecker,
"_resolve_commitish",
side_effect = [
("fb00f11", datetime(2025, 5, 16, tzinfo = timezone.utc)),
("e7a3d46", datetime(2025, 5, 18, tzinfo = timezone.utc))
]
)
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(
json = lambda: {"tag_name": "latest", "prerelease": False}
)
)
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker = UpdateChecker(config, state_file)
checker.check_for_updates()
print("LOG RECORDS:")
for r in caplog.records:
print(f"{r.levelname}: {r.getMessage()}")
expected = "A new version is available: e7a3d46 from 2025-05-18 00:00:00 UTC (current: 2025+fb00f11 from 2025-05-16 00:00:00 UTC, channel: latest)"
assert any(expected in r.getMessage() for r in caplog.records)
def test_check_for_updates_logs_release_notes(
self,
config:Config,
state_file:Path,
mocker:"MockerFixture",
caplog:pytest.LogCaptureFixture
) -> None:
"""Test that release notes are logged when present."""
caplog.set_level("INFO", logger = "kleinanzeigen_bot.update_checker")
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(
UpdateChecker,
"_resolve_commitish",
side_effect = [
("fb00f11", datetime(2025, 5, 16, tzinfo = timezone.utc)),
("e7a3d46", datetime(2025, 5, 18, tzinfo = timezone.utc))
]
)
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(
json = lambda: {"tag_name": "latest", "prerelease": False, "body": "Release notes here"}
)
)
checker = UpdateChecker(config, state_file)
checker.check_for_updates()
assert any("Release notes:\nRelease notes here" in r.getMessage() for r in caplog.records)
def test_check_for_updates_same(self, config:Config, state_file:Path, mocker:"MockerFixture", caplog:pytest.LogCaptureFixture) -> None:
"""Test that the update checker correctly identifies when the local version is the same as the latest release."""
caplog.set_level("INFO", logger = "kleinanzeigen_bot.update_checker")
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(
UpdateChecker,
"_resolve_commitish",
side_effect = [
("fb00f11", datetime(2025, 5, 18, tzinfo = timezone.utc)),
("fb00f11", datetime(2025, 5, 18, tzinfo = timezone.utc))
]
)
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(
json = lambda: {"tag_name": "latest", "prerelease": False}
)
)
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker = UpdateChecker(config, state_file)
checker.check_for_updates()
print("LOG RECORDS:")
for r in caplog.records:
print(f"{r.levelname}: {r.getMessage()}")
expected = "You are on the latest version: 2025+fb00f11 (compared to fb00f11 in channel latest)"
assert any(expected in r.getMessage() for r in caplog.records)
def test_check_for_updates_unknown_channel(
self,
config:Config,
state_file:Path,
mocker:"MockerFixture",
caplog:pytest.LogCaptureFixture
) -> None:
"""Test that the update checker warns on unknown update channels."""
caplog.set_level("WARNING", logger = "kleinanzeigen_bot.update_checker")
cast(Any, config.update_check).channel = "unknown"
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mock_get = mocker.patch("requests.get")
checker = UpdateChecker(config, state_file)
checker.check_for_updates()
mock_get.assert_not_called()
assert any("Unknown update channel: unknown" in r.getMessage() for r in caplog.records)
def test_check_for_updates_respects_interval_gate(
self,
config:Config,
state_file:Path,
caplog:pytest.LogCaptureFixture
) -> None:
"""Ensure the interval guard short-circuits update checks without touching the network."""
caplog.set_level(logging.WARNING)
with patch.object(UpdateCheckState, "should_check", return_value = False) as should_check_mock, \
patch.object(UpdateCheckState, "update_last_check") as update_last_check_mock, \
patch("requests.get") as mock_get:
checker = UpdateChecker(config, state_file)
checker.check_for_updates()
should_check_mock.assert_called_once()
mock_get.assert_not_called()
update_last_check_mock.assert_not_called()
assert all("Could not determine local version" not in message for message in caplog.messages)
def test_update_check_state_empty_file(self, state_file:Path) -> None:
"""Test that loading an empty state file returns a new state."""
state_file.touch() # Create empty file
state = UpdateCheckState.load(state_file)
assert state.last_check is None
def test_update_check_state_invalid_data(self, state_file:Path) -> None:
"""Test that loading invalid state data returns a new state."""
state_file.write_text("invalid json", encoding = "utf-8")
state = UpdateCheckState.load(state_file)
assert state.last_check is None
def test_update_check_state_missing_last_check(self, state_file:Path) -> None:
"""Test that loading state data without last_check returns a new state."""
state_file.write_text("{}", encoding = "utf-8")
state = UpdateCheckState.load(state_file)
assert state.last_check is None
def test_update_check_state_save_error(self, state_file:Path) -> None:
"""Test that saving state handles errors gracefully."""
state = UpdateCheckState()
state.last_check = datetime.now(timezone.utc)
# Make the file read-only to cause a save error
state_file.touch()
state_file.chmod(0o444)
# Should not raise an exception
state.save(state_file)
def test_update_check_state_interval_units(self, monkeypatch:pytest.MonkeyPatch) -> None:
"""Test that different interval units are handled correctly."""
state = UpdateCheckState()
fixed_now = datetime(2025, 1, 15, 8, 0, tzinfo = timezone.utc)
_freeze_update_state_datetime(monkeypatch, fixed_now)
now = fixed_now
# Test seconds (should always be too short, fallback to 7d, only seconds elapsed, so should_check is False)
state.last_check = now - timedelta(seconds = 30)
assert state.should_check("60s") is False
assert state.should_check("20s") is False
# Test minutes (should always be too short)
state.last_check = now - timedelta(minutes = 30)
assert state.should_check("60m") is False
assert state.should_check("20m") is False
# Test hours (should always be too short)
state.last_check = now - timedelta(hours = 2)
assert state.should_check("4h") is False
assert state.should_check("1h") is False
# Test days
state.last_check = now - timedelta(days = 3)
assert state.should_check("7d") is False
assert state.should_check("2d") is True
state.last_check = now - timedelta(days = 3)
assert state.should_check("3d") is False
state.last_check = now - timedelta(days = 3, seconds = 1)
assert state.should_check("3d") is True
# Test multi-day intervals (was weeks)
state.last_check = now - timedelta(days = 14)
assert state.should_check("14d") is False
state.last_check = now - timedelta(days = 14, seconds = 1)
assert state.should_check("14d") is True
# Test invalid unit (should fallback to 7d, 14 days elapsed, so should_check is True)
state.last_check = now - timedelta(days = 14)
assert state.should_check("1x") is True
# If fallback interval has not elapsed, should_check is False
state.last_check = now - timedelta(days = 6)
assert state.should_check("1x") is False
# Test truly unknown unit (case _)
state.last_check = now - timedelta(days = 14)
assert state.should_check("1z") is True
state.last_check = now - timedelta(days = 6)
assert state.should_check("1z") is False
def test_update_check_state_interval_validation(self, monkeypatch:pytest.MonkeyPatch) -> None:
"""Test that interval validation works correctly."""
state = UpdateCheckState()
fixed_now = datetime(2025, 1, 1, 12, 0, tzinfo = timezone.utc)
_freeze_update_state_datetime(monkeypatch, fixed_now)
now = fixed_now
state.last_check = now - timedelta(days = 1)
# Test minimum value (1d)
assert state.should_check("12h") is False # Too short, fallback to 7d, only 1 day elapsed
assert state.should_check("1d") is False # Minimum allowed
assert state.should_check("2d") is False # Valid, but only 1 day elapsed
# Test maximum value (30d)
assert state.should_check("31d") is False # Too long, fallback to 7d, only 1 day elapsed
assert state.should_check("60d") is False # Too long, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 30)
assert state.should_check("30d") is False # Exactly 30 days, should_check is False
state.last_check = now - timedelta(days = 30, seconds = 1)
assert state.should_check("30d") is True # Should check if just over interval
state.last_check = now - timedelta(days = 21)
assert state.should_check("21d") is False # Exactly 21 days, should_check is False
state.last_check = now - timedelta(days = 21, seconds = 1)
assert state.should_check("21d") is True # Should check if just over interval
state.last_check = now - timedelta(days = 7)
assert state.should_check("7d") is False # 7 days, should_check is False
state.last_check = now - timedelta(days = 7, seconds = 1)
assert state.should_check("7d") is True # Should check if just over interval
# Test negative values
state.last_check = now - timedelta(days = 1)
assert state.should_check("-1d") is False # Negative value, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("-1d") is True # Negative value, fallback to 7d, 8 days elapsed
# Test zero value
state.last_check = now - timedelta(days = 1)
assert state.should_check("0d") is False # Zero value, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("0d") is True # Zero value, fallback to 7d, 8 days elapsed
# Test invalid formats
state.last_check = now - timedelta(days = 1)
assert state.should_check("invalid") is False # Invalid format, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("invalid") is True # Invalid format, fallback to 7d, 8 days elapsed
state.last_check = now - timedelta(days = 1)
assert state.should_check("1") is False # Missing unit, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("1") is True # Missing unit, fallback to 7d, 8 days elapsed
state.last_check = now - timedelta(days = 1)
assert state.should_check("d") is False # Missing value, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("d") is True # Missing value, fallback to 7d, 8 days elapsed
# Test unit conversions (all sub-day intervals are too short)
state.last_check = now - timedelta(days = 1)
assert state.should_check("24h") is False # 1 day in hours, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("24h") is True # 1 day in hours, fallback to 7d, 8 days elapsed
state.last_check = now - timedelta(days = 1)
assert state.should_check("1440m") is False # 1 day in minutes, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("1440m") is True # 1 day in minutes, fallback to 7d, 8 days elapsed
state.last_check = now - timedelta(days = 1)
assert state.should_check("86400s") is False # 1 day in seconds, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("86400s") is True # 1 day in seconds, fallback to 7d, 8 days elapsed
def test_update_check_state_invalid_date(self, state_file:Path) -> None:
"""Test that loading a state file with an invalid date string for last_check returns a new state (triggers ValueError)."""
state_file.write_text(json.dumps({"last_check": "not-a-date"}), encoding = "utf-8")
state = UpdateCheckState.load(state_file)
assert state.last_check is None
def test_update_check_state_save_permission_error(self, mocker:"MockerFixture", state_file:Path) -> None:
"""Test that save handles PermissionError from dicts.save_dict."""
state = UpdateCheckState()
state.last_check = datetime.now(timezone.utc)
mocker.patch("kleinanzeigen_bot.utils.dicts.save_dict", side_effect = PermissionError)
# Should not raise
state.save(state_file)
def test_resolve_commitish_no_author(self, config:Config, state_file:Path, mocker:"MockerFixture") -> None:
"""Test resolving a commit-ish when the API returns no author key."""
checker = UpdateChecker(config, state_file)
mocker.patch("requests.get", return_value = mocker.Mock(json = lambda: {"sha": "abc", "commit": {}}))
commit_hash, commit_date = checker._resolve_commitish("sha")
assert commit_hash == "abc"
assert commit_date is None
def test_resolve_commitish_no_date(self, config:Config, state_file:Path, mocker:"MockerFixture") -> None:
"""Test resolving a commit-ish when the API returns no date key."""
checker = UpdateChecker(config, state_file)
mocker.patch("requests.get", return_value = mocker.Mock(json = lambda: {"sha": "abc", "commit": {"author": {}}}))
commit_hash, commit_date = checker._resolve_commitish("sha")
assert commit_hash == "abc"
assert commit_date is None
def test_resolve_commitish_list_instead_of_dict(self, config:Config, state_file:Path, mocker:"MockerFixture") -> None:
"""Test resolving a commit-ish when the API returns a list instead of dict."""
checker = UpdateChecker(config, state_file)
mocker.patch("requests.get", return_value = mocker.Mock(json = list))
commit_hash, commit_date = checker._resolve_commitish("sha")
assert commit_hash is None
assert commit_date is None
def test_check_for_updates_missing_release_commitish(self, config:Config, state_file:Path, mocker:"MockerFixture") -> None:
"""Test check_for_updates handles missing release commit-ish."""
checker = UpdateChecker(config, state_file)
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
mocker.patch(
"requests.get",
return_value = mocker.Mock(json = lambda: {"prerelease": False})
)
checker.check_for_updates() # Should not raise
def test_check_for_updates_no_releases_empty(self, config:Config, state_file:Path, mocker:"MockerFixture") -> None:
"""Test check_for_updates handles no releases found (API returns empty list)."""
checker = UpdateChecker(config, state_file)
mocker.patch("requests.get", return_value = mocker.Mock(json = list))
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker.check_for_updates() # Should not raise
def test_check_for_updates_no_commit_hash_extracted(self, config:Config, state_file:Path, mocker:"MockerFixture") -> None:
"""Test check_for_updates handles no commit hash extracted."""
checker = UpdateChecker(config, state_file)
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025")
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker.check_for_updates() # Should not raise
def test_check_for_updates_no_commit_dates(self, config:Config, state_file:Path, mocker:"MockerFixture", caplog:pytest.LogCaptureFixture) -> None:
"""Test check_for_updates logs warning if commit dates cannot be determined."""
caplog.set_level("WARNING", logger = "kleinanzeigen_bot.update_checker")
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(UpdateChecker, "_resolve_commitish", return_value = (None, None))
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
# Patch requests.get to avoid any real HTTP requests
mocker.patch(
"requests.get",
return_value = mocker.Mock(
json = lambda: {"tag_name": "latest", "prerelease": False}
)
)
checker = UpdateChecker(config, state_file)
checker.check_for_updates()
assert any("Could not determine commit dates for comparison." in r.getMessage() for r in caplog.records)
def test_update_check_state_version_tracking(self, state_file:Path) -> None:
"""Test that version tracking works correctly."""
# Create a state with version 0 (old format)
state_file.write_text(json.dumps({
"last_check": datetime.now(timezone.utc).isoformat()
}), encoding = "utf-8")
# Load the state - should migrate to version 1
state = UpdateCheckState.load(state_file)
assert state.version == 1
# Save the state
state.save(state_file)
# Load again - should keep version 1
state = UpdateCheckState.load(state_file)
assert state.version == 1
def test_update_check_state_migration(self, state_file:Path) -> None:
"""Test that state migration works correctly."""
# Create a state with version 0 (old format)
old_time = datetime.now(timezone.utc)
state_file.write_text(json.dumps({
"last_check": old_time.isoformat()
}), encoding = "utf-8")
# Load the state - should migrate to version 1
state = UpdateCheckState.load(state_file)
assert state.version == 1
assert state.last_check == old_time
# Save the state
state.save(state_file)
# Verify the saved file has the new version
with open(state_file, "r", encoding = "utf-8") as f:
data = json.load(f)
assert data["version"] == 1
assert data["last_check"] == old_time.isoformat()
def test_update_check_state_save_errors(self, state_file:Path, mocker:"MockerFixture") -> None:
"""Test that save errors are handled gracefully."""
state = UpdateCheckState()
state.last_check = datetime.now(timezone.utc)
# Test permission error
mocker.patch("kleinanzeigen_bot.utils.dicts.save_dict", side_effect = PermissionError)
state.save(state_file) # Should not raise
# Test other errors
mocker.patch("kleinanzeigen_bot.utils.dicts.save_dict", side_effect = Exception("Test error"))
state.save(state_file) # Should not raise
def test_update_check_state_load_errors(self, state_file:Path) -> None:
"""Test that load errors are handled gracefully."""
# Test invalid JSON
state_file.write_text("invalid json", encoding = "utf-8")
state = UpdateCheckState.load(state_file)
assert state.version == 1
assert state.last_check is None
# Test invalid date format
state_file.write_text(json.dumps({
"version": 1,
"last_check": "invalid-date"
}), encoding = "utf-8")
state = UpdateCheckState.load(state_file)
assert state.version == 1
assert state.last_check is None
def test_update_check_state_timezone_handling(self, state_file:Path) -> None:
"""Test that timezone handling works correctly."""
# Test loading timestamp without timezone (should assume UTC)
state_file.write_text(json.dumps({
"version": 1,
"last_check": "2024-03-20T12:00:00"
}), encoding = "utf-8")
state = UpdateCheckState.load(state_file)
assert state.last_check is not None
assert state.last_check.tzinfo == timezone.utc
assert state.last_check.hour == 12
# Test loading timestamp with different timezone (should convert to UTC)
state_file.write_text(json.dumps({
"version": 1,
"last_check": "2024-03-20T12:00:00+02:00" # 2 hours ahead of UTC
}), encoding = "utf-8")
state = UpdateCheckState.load(state_file)
assert state.last_check is not None
assert state.last_check.tzinfo == timezone.utc
assert state.last_check.hour == 10 # Converted to UTC
# Test saving timestamp (should always be in UTC)
state = UpdateCheckState()
state.last_check = datetime(2024, 3, 20, 12, 0, tzinfo = timezone(timedelta(hours = 2)))
state.save(state_file)
with open(state_file, "r", encoding = "utf-8") as f:
data = json.load(f)
assert data["last_check"] == "2024-03-20T10:00:00+00:00" # Converted to UTC
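The migration, error-handling, and timezone tests above all constrain the same load path: corrupt input degrades to an empty state, any pre-version format migrates to version 1, naive timestamps are assumed UTC, and aware ones are converted to UTC. A self-contained sketch of that logic (illustrative, not the actual `UpdateCheckState.load`, and operating on a raw JSON string rather than a file):

```python
import json
from datetime import datetime, timezone

def load_state_sketch(raw_text: str) -> dict:
    """Parse persisted update-check state, migrating any old format to version 1."""
    try:
        data = json.loads(raw_text)
    except json.JSONDecodeError:
        data = None
    if not isinstance(data, dict):
        # corrupt or missing state: start fresh rather than raise
        return {"version": 1, "last_check": None}
    last_check = None
    raw = data.get("last_check")
    if isinstance(raw, str):
        try:
            parsed = datetime.fromisoformat(raw)
            if parsed.tzinfo is None:
                parsed = parsed.replace(tzinfo = timezone.utc)  # naive -> assume UTC
            last_check = parsed.astimezone(timezone.utc)        # aware -> convert to UTC
        except ValueError:
            last_check = None  # unparsable date is treated as "never checked"
    return {"version": 1, "last_check": last_check}
```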
def test_update_check_state_missing_file(self, state_file:Path) -> None:
"""Test that loading a missing state file returns a new state and should_check returns True."""
# Ensure the file doesn't exist
if state_file.exists():
state_file.unlink()
# Load state from non-existent file
state = UpdateCheckState.load(state_file)
assert state.last_check is None
assert state.version == 1
# Verify should_check returns True for any interval
assert state.should_check("7d") is True
assert state.should_check("1d") is True
assert state.should_check("4w") is True
# No longer check _time_since_last_check (method removed)
def test_should_check_fallback_to_default_interval(self, caplog:pytest.LogCaptureFixture) -> None:
"""Test that should_check falls back to default interval and logs a warning for invalid/too short/too long/zero intervals and unsupported units."""
state = UpdateCheckState()
now = datetime.now(timezone.utc)
state.last_check = now - timedelta(days = 2)
# Invalid format (unsupported unit)
caplog.clear()
assert state.should_check("notaninterval", channel = "latest") is False # 2 days since last check, default 7d
assert any("Invalid interval format or unsupported unit" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 7d" in r.getMessage() for r in caplog.records)
caplog.clear()
assert state.should_check("notaninterval", channel = "preview") is True # 2 days since last check, default 1d
assert any("Invalid interval format or unsupported unit" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 1d" in r.getMessage() for r in caplog.records)
# Explicit zero interval
for zero in ["0d", "0h", "0m", "0s", "0"]:
caplog.clear()
assert state.should_check(zero, channel = "latest") is False
assert any("Interval is zero" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 7d" in r.getMessage() for r in caplog.records)
caplog.clear()
assert state.should_check(zero, channel = "preview") is True
assert any("Interval is zero" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 1d" in r.getMessage() for r in caplog.records)
# Too short
caplog.clear()
assert state.should_check("12h", channel = "latest") is False # 2 days since last check, default 7d
assert any("Interval too short" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 7d" in r.getMessage() for r in caplog.records)
caplog.clear()
assert state.should_check("12h", channel = "preview") is True # 2 days since last check, default 1d
assert any("Interval too short" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 1d" in r.getMessage() for r in caplog.records)
# Too long
caplog.clear()
assert state.should_check("60d", channel = "latest") is False # 2 days since last check, default 7d
assert any("Interval too long" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 7d" in r.getMessage() for r in caplog.records)
caplog.clear()
assert state.should_check("60d", channel = "preview") is True # 2 days since last check, default 1d
assert any("Interval too long" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 1d" in r.getMessage() for r in caplog.records)
# Valid interval, no fallback
caplog.clear()
assert state.should_check("7d", channel = "latest") is False
assert not any("Falling back to default interval" in r.getMessage() for r in caplog.records)
caplog.clear()
assert state.should_check("1d", channel = "preview") is True
assert not any("Falling back to default interval" in r.getMessage() for r in caplog.records)
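The fallback rules asserted above can be condensed into a small validator. Everything here is inferred from the test assertions, not taken from the real `should_check`: the per-channel defaults (`7d` latest, `1d` preview) are explicit in the log messages, while the exact bounds are assumptions chosen so that `12h` is "too short" and `60d` is "too long".

```python
import re
from datetime import timedelta

# Assumed defaults and bounds, inferred from the assertions above.
DEFAULTS = {"latest": "7d", "preview": "1d"}
UNIT_SECONDS = {"d": 86400, "h": 3600, "m": 60, "s": 1}
MIN_SECONDS = 24 * 3600    # so that "12h" is rejected as too short
MAX_SECONDS = 30 * 86400   # so that "60d" is rejected as too long

def effective_interval(raw: str, channel: str = "latest") -> timedelta:
    """Return the check interval to use, falling back to the channel default."""
    spec = DEFAULTS[channel]
    fallback = timedelta(seconds = int(spec[:-1]) * UNIT_SECONDS[spec[-1]])
    match = re.fullmatch(r"(\d+)([dhms]?)", raw)
    if match is None:
        return fallback  # invalid format or unsupported unit
    seconds = int(match.group(1)) * UNIT_SECONDS[match.group(2) or "s"]
    if seconds == 0 or seconds < MIN_SECONDS or seconds > MAX_SECONDS:
        return fallback  # zero, too short, or too long
    return timedelta(seconds = seconds)
```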


@@ -0,0 +1,267 @@
# SPDX-FileCopyrightText: © Sebastian Thomschke and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
import asyncio
import decimal
import sys
from datetime import datetime, timedelta, timezone
import pytest
from sanitize_filename import sanitize
from kleinanzeigen_bot.utils import misc
from kleinanzeigen_bot.utils.misc import sanitize_folder_name
def test_now_returns_utc_datetime() -> None:
dt = misc.now()
assert dt.tzinfo is not None
assert dt.tzinfo.utcoffset(dt) == timedelta(0)
def test_is_frozen_default() -> None:
assert misc.is_frozen() is False
def test_is_frozen_true(monkeypatch:pytest.MonkeyPatch) -> None:
monkeypatch.setattr(sys, "frozen", True, raising = False)
assert misc.is_frozen() is True
def test_ainput_is_coroutine() -> None:
assert asyncio.iscoroutinefunction(misc.ainput)
def test_parse_decimal_valid_inputs() -> None:
assert misc.parse_decimal(5) == decimal.Decimal("5")
assert misc.parse_decimal(5.5) == decimal.Decimal("5.5")
assert misc.parse_decimal("5.5") == decimal.Decimal("5.5")
assert misc.parse_decimal("5,5") == decimal.Decimal("5.5")
assert misc.parse_decimal("1.005,5") == decimal.Decimal("1005.5")
assert misc.parse_decimal("1,005.5") == decimal.Decimal("1005.5")
def test_parse_decimal_invalid_input() -> None:
with pytest.raises(decimal.DecimalException):
misc.parse_decimal("not_a_number")
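The locale tolerance these assertions require — accepting both `1.005,5` (German) and `1,005.5` (English) — can be sketched with a "rightmost separator wins" heuristic. This is an assumption about how `misc.parse_decimal` might work, not its actual source:

```python
from decimal import Decimal

def parse_decimal_sketch(value) -> Decimal:
    """Parse int/float/str into a Decimal, accepting ',' or '.' as decimal separator."""
    if isinstance(value, (int, float)):
        return Decimal(str(value))
    s = value
    if "," in s and "." in s:
        # both separators present: the rightmost one is the decimal point
        if s.rindex(",") > s.rindex("."):
            s = s.replace(".", "").replace(",", ".")  # "1.005,5" -> "1005.5"
        else:
            s = s.replace(",", "")                    # "1,005.5" -> "1005.5"
    else:
        s = s.replace(",", ".")                       # "5,5" -> "5.5"
    return Decimal(s)  # raises decimal.InvalidOperation on garbage
```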
def test_parse_datetime_none_returns_none() -> None:
assert misc.parse_datetime(None) is None
def test_parse_datetime_from_datetime() -> None:
dt = datetime(2020, 1, 1, 0, 0, tzinfo = timezone.utc)
assert misc.parse_datetime(dt, add_timezone_if_missing = False) == dt
def test_parse_datetime_from_string() -> None:
dt_str = "2020-01-01T00:00:00"
result = misc.parse_datetime(dt_str, add_timezone_if_missing = False)
assert result == datetime(2020, 1, 1, 0, 0) # noqa: DTZ001
def test_parse_duration_various_inputs() -> None:
assert misc.parse_duration("1h 30m") == timedelta(hours = 1, minutes = 30)
assert misc.parse_duration("2d 4h 15m 10s") == timedelta(days = 2, hours = 4, minutes = 15, seconds = 10)
assert misc.parse_duration("45m") == timedelta(minutes = 45)
assert misc.parse_duration("3d") == timedelta(days = 3)
assert misc.parse_duration("5h 5h") == timedelta(hours = 10)
assert misc.parse_duration("invalid input") == timedelta(0)
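The behavior pinned down above (tokens are summed, duplicates accumulate, unmatched input yields a zero delta) is fully determined by the assertions and can be sketched in a few lines; the name `parse_duration_sketch` is illustrative:

```python
import re
from datetime import timedelta

_UNITS = {"d": "days", "h": "hours", "m": "minutes", "s": "seconds"}

def parse_duration_sketch(text: str) -> timedelta:
    """Sum all 'Nd/Nh/Nm/Ns' tokens in the string; no tokens means timedelta(0)."""
    total = timedelta()
    for amount, unit in re.findall(r"(\d+)\s*([dhms])", text):
        total += timedelta(**{_UNITS[unit]: int(amount)})
    return total
```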
def test_format_timedelta_examples() -> None:
assert misc.format_timedelta(timedelta(seconds = 90)) == "1 minute, 30 seconds"
assert misc.format_timedelta(timedelta(hours = 1)) == "1 hour"
assert misc.format_timedelta(timedelta(days = 2, hours = 5)) == "2 days, 5 hours"
assert misc.format_timedelta(timedelta(0)) == "0 seconds"
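The expected strings above imply a largest-unit-first decomposition that skips zero components and pluralizes; a minimal sketch matching them (not the actual `misc.format_timedelta`):

```python
from datetime import timedelta

def format_timedelta_sketch(delta: timedelta) -> str:
    """Render a timedelta as 'N days, N hours, ...', omitting zero components."""
    seconds = int(delta.total_seconds())
    parts = []
    for name, size in (("day", 86400), ("hour", 3600), ("minute", 60), ("second", 1)):
        value, seconds = divmod(seconds, size)
        if value:
            parts.append(f"{value} {name}" + ("s" if value != 1 else ""))
    return ", ".join(parts) if parts else "0 seconds"
```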
class Dummy:
def __init__(self, contact:object) -> None:
self.contact = contact
def test_get_attr_object_and_dict() -> None:
assert misc.get_attr(Dummy({"email": "user@example.com"}), "contact.email") == "user@example.com"
assert misc.get_attr(Dummy({"email": "user@example.com"}), "contact.foo") is None
assert misc.get_attr(Dummy({"email": None}), "contact.email", default = "n/a") == "n/a"
assert misc.get_attr(Dummy(None), "contact.email", default = "n/a") == "n/a"
assert misc.get_attr({"contact": {"email": "data@example.com"}}, "contact.email") == "data@example.com"
assert misc.get_attr({"contact": {"email": "user@example.com"}}, "contact.foo") is None
assert misc.get_attr({"contact": {"email": None}}, "contact.email", default = "n/a") == "n/a"
assert misc.get_attr({}, "contact.email", default = "none") == "none"
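These cases describe a dotted-path lookup that walks both object attributes and dict keys, returning the default whenever the path dead-ends or resolves to `None`. A plausible sketch (an assumption about `misc.get_attr`, not its source):

```python
def get_attr_sketch(obj, path: str, default = None):
    """Walk a dotted path across attributes and dict keys; fall back to default."""
    current = obj
    for key in path.split("."):
        if current is None:
            return default
        if isinstance(current, dict):
            current = current.get(key)
        else:
            current = getattr(current, key, None)
    return current if current is not None else default
```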
def test_ensure_negative_timeout() -> None:
with pytest.raises(AssertionError, match = r"\[timeout\] must be >= 0"):
misc.ensure(lambda: True, "Should fail", timeout = -1)
def test_ensure_negative_poll_frequency() -> None:
with pytest.raises(AssertionError, match = r"\[poll_frequency\] must be >= 0"):
misc.ensure(lambda: True, "Should fail", poll_frequency = -1)
def test_ensure_callable_condition_becomes_true() -> None:
# Should return before timeout if condition becomes True
state = {"called": 0}
def cond() -> bool:
state["called"] += 1
return state["called"] > 2
misc.ensure(cond, "Should not fail", timeout = 1, poll_frequency = 0.01)
def test_ensure_callable_condition_timeout() -> None:
# Should raise AssertionError after timeout if condition never True
with pytest.raises(AssertionError):
misc.ensure(lambda: False, "Timeout fail", timeout = 0.05, poll_frequency = 0.01)
def test_ensure_non_callable_truthy_and_falsy() -> None:
# Truthy values should not raise
misc.ensure(True, "Should not fail for True")
misc.ensure("Some Value", "Should not fail for non-empty string")
misc.ensure(123, "Should not fail for positive int")
misc.ensure(-123, "Should not fail for negative int")
# Falsy values should raise AssertionError
with pytest.raises(AssertionError):
misc.ensure(False, "Should fail for False")
with pytest.raises(AssertionError):
misc.ensure(0, "Should fail for 0")
with pytest.raises(AssertionError):
misc.ensure("", "Should fail for empty string")
with pytest.raises(AssertionError):
misc.ensure(None, "Should fail for None")
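Taken together, the `ensure` tests describe a polling assertion helper: plain values are checked once for truthiness, callables are retried until they turn truthy or the timeout elapses. A self-contained sketch of that contract (illustrative; the real `misc.ensure` may differ in details):

```python
import time

def ensure_sketch(condition, message: str, timeout: float = 5.0, poll_frequency: float = 0.5) -> None:
    """Assert a condition, polling callables until truthy or until timeout."""
    assert timeout >= 0, "[timeout] must be >= 0"
    assert poll_frequency >= 0, "[poll_frequency] must be >= 0"
    if not callable(condition):
        if not condition:
            raise AssertionError(message)
        return
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return
        if time.monotonic() >= deadline:
            raise AssertionError(message)
        time.sleep(poll_frequency)
```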
# --- Test sanitize_folder_name function ---
@pytest.mark.parametrize(
("test_input", "expected_output", "description"),
[
# Basic sanitization
("My Ad Title!", "My Ad Title!", "Basic sanitization"),
# Unicode normalization - sanitize-filename converts to NFD, then we normalize to NFC (issue #728)
("café", "café", "Unicode NFC → NFD (by sanitize) → NFC (by normalize)"),
("caf\u00e9", "café", "Unicode NFC (escaped) → NFD → NFC"),
# Edge cases
("", "untitled", "Empty string"),
(" ", "untitled", "Whitespace only"),
("___", "___", "Multiple underscores (not collapsed)"),
# Control characters (removed by sanitize-filename)
("Ad\x00with\x1fcontrol", "Adwithcontrol", "Control characters removed"),
# Multiple consecutive underscores (sanitize-filename doesn't collapse them)
("Ad___with___multiple___underscores", "Ad___with___multiple___underscores", "Multiple underscores preserved"),
# Special characters (removed by sanitize-filename)
('file<with>invalid:chars"|?*', "filewithinvalidchars", "Special characters removed"),
("file\\with\\backslashes", "filewithbackslashes", "Backslashes removed"),
("file/with/slashes", "filewithslashes", "Forward slashes removed"),
# Path traversal attempts (handled by sanitize-filename)
("Title with ../../etc/passwd", "Title with ....etcpasswd", "Path traversal attempt"),
("Title with C:\\Windows\\System32\\cmd.exe", "Title with CWindowsSystem32cmd.exe", "Windows path traversal"),
# XSS attempts (handled by sanitize-filename)
('Title with <script>alert("xss")</script>', "Title with scriptalert(xss)script", "XSS attempt"),
],
)
def test_sanitize_folder_name_basic(test_input:str, expected_output:str, description:str) -> None:
"""Test sanitize_folder_name function with various inputs."""
result = sanitize_folder_name(test_input)
assert result == expected_output, f"Failed for '{test_input}': {description}"
@pytest.mark.parametrize(
("test_input", "max_length", "expected_output", "description"),
[
# Length truncation
("Very long advertisement title that exceeds the maximum length and should be truncated", 50,
"Very long advertisement title that exceeds the", "Length truncation"),
# Word boundary truncation
("Short words but very long title", 20, "Short words but", "Word boundary truncation"),
# Edge case: no word boundary found
("VeryLongWordWithoutSpaces", 10, "VeryLongWo", "No word boundary truncation"),
# Test default max_length (100)
("This is a reasonable advertisement title that fits within the default limit", 100,
"This is a reasonable advertisement title that fits within the default limit", "Default max_length"),
],
)
def test_sanitize_folder_name_truncation(test_input:str, max_length:int, expected_output:str, description:str) -> None:
"""Test sanitize_folder_name function with length truncation."""
result = sanitize_folder_name(test_input, max_length = max_length)
assert len(result) <= max_length, f"Result exceeds max_length for '{test_input}': {description}"
assert result == expected_output, f"Failed for '{test_input}' with max_length={max_length}: {description}"
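The truncation expectations above imply "cut at the last space within the limit, else hard-cut". A sketch covering just that step (the real `sanitize_folder_name` additionally runs sanitize-filename and Unicode NFC normalization first; the function name here is hypothetical):

```python
def truncate_folder_name_sketch(name: str, max_length: int = 100) -> str:
    """Truncate at the last word boundary within max_length, or hard-cut."""
    if len(name) <= max_length:
        return name
    cut = name[:max_length]
    if " " in cut:
        cut = cut[:cut.rindex(" ")]  # drop the trailing partial word
    return cut.rstrip()
```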
# --- Test sanitize-filename behavior directly (since it's consistent across platforms) ---
@pytest.mark.parametrize(
("test_input", "expected_output"),
[
# Test sanitize-filename behavior (consistent across platforms)
("test/file", "testfile"),
("test\\file", "testfile"),
("test<file", "testfile"),
("test>file", "testfile"),
('test"file', "testfile"),
("test|file", "testfile"),
("test?file", "testfile"),
("test*file", "testfile"),
("test:file", "testfile"),
("CON", "__CON"),
("PRN", "__PRN"),
("AUX", "__AUX"),
("NUL", "__NUL"),
("COM1", "__COM1"),
("LPT1", "__LPT1"),
("file/with/slashes", "filewithslashes"),
("file\\with\\backslashes", "filewithbackslashes"),
('file<with>invalid:chars"|?*', "filewithinvalidchars"),
("file\x00with\x1fcontrol", "filewithcontrol"),
("file___with___underscores", "file___with___underscores"),
],
)
def test_sanitize_filename_behavior(test_input:str, expected_output:str) -> None:
"""Test sanitize-filename behavior directly (consistent across platforms)."""
result = sanitize(test_input)
assert result == expected_output, f"sanitize-filename behavior mismatch for '{test_input}'"
# --- Test sanitize_folder_name cross-platform consistency ---
@pytest.mark.parametrize(
"test_input",
[
"normal_filename",
"filename with spaces",
"filename_with_underscores",
"filename-with-dashes",
"filename.with.dots",
"filename123",
"café_filename",
"filename\x00with\x1fcontrol", # Control characters
],
)
def test_sanitize_folder_name_cross_platform_consistency(
monkeypatch:pytest.MonkeyPatch,
test_input:str
) -> None:
"""Test that sanitize_folder_name produces consistent results across platforms for safe inputs."""
platforms = ["win32", "darwin", "linux"]  # actual sys.platform values, not platform.system() names
results = []
for platform in platforms:
monkeypatch.setattr("sys.platform", platform)
result = sanitize_folder_name(test_input)
results.append(result)
# All platforms should produce the same result for safe inputs
assert len(set(results)) == 1, f"Cross-platform inconsistency for '{test_input}': {results}"
