feat: update check (#561)

feat(update-check): add robust update check with interval support, state management, and CLI integration

- Implement version and interval-based update checks with configurable settings
- Add CLI command `kleinanzeigen-bot update-check` for manual checks
- Introduce state file with versioning, UTC timestamps, and migration logic
- Validate and normalize intervals (1d–30d) with fallback for invalid values
- Ensure correct handling of timezones and elapsed checks
- Improve error handling, logging, and internationalization (i18n)
- Add comprehensive test coverage for config, interval logic, migration, and CLI
- Align default config, translations, and schema with new functionality
- Improve help command UX by avoiding config/log loading for `--help`
- Update documentation and README with full feature overview
This commit is contained in:
Jens Bergmann
2025-06-27 07:52:40 +02:00
committed by GitHub
parent 4d4f3b4093
commit 5430f5cdc6
13 changed files with 1358 additions and 26 deletions


@@ -296,6 +296,17 @@ browser:
user_data_dir: "" # see https://github.com/chromium/chromium/blob/main/docs/user_data_dir.md
profile_name: ""
# update check configuration
update_check:
enabled: true # Enable/disable update checks
channel: latest # One of: latest, preview
interval: 7d # Check interval (e.g. 7d for 7 days)
# If the interval is invalid, too short (<1d), or too long (>30d),
# the bot logs a warning and uses a default interval for this run:
# - 1d for 'preview' channel
# - 7d for 'latest' channel
# The config file is not changed automatically; please fix your config to avoid repeated warnings.
# login credentials
login:
username: ""

docs/update-check.md Normal file

@@ -0,0 +1,96 @@
# Update Check Feature
## Overview
The update check feature automatically checks for newer versions of the bot on GitHub. It supports two channels:
- `latest`: Only final releases
- `preview`: Includes pre-releases
## Configuration
```yaml
update_check:
enabled: true # Enable/disable update checks
channel: latest # One of: latest, prerelease
interval: 7d # Check interval (e.g. 7d for 7 days)
```
### Interval Format
The interval is specified as a number followed by a unit:
- `s`: seconds
- `m`: minutes
- `h`: hours
- `d`: days
- `w`: weeks
Examples:
- `7d`: Check every 7 days
- `36h`: Check every 36 hours
- `1w`: Check every week
Validation rules:
- Minimum interval: 1 day (`1d`)
- Maximum interval: 30 days (`30d`)
- Value must be positive
- Only supported units are allowed
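The format and bounds above can be sketched as a small parser. This is an illustrative sketch only, not the bot's actual `misc.parse_duration`; the unit table follows the list above, and the 30-day maximum matches the limit used elsewhere in this commit:

```python
import re
from datetime import timedelta

# Unit lengths in seconds, per the table above
_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def parse_interval(value: str) -> timedelta:
    """Parse an interval such as '7d' and enforce the documented bounds."""
    match = re.fullmatch(r"(\d+)([smhdw])", value.strip())
    if not match:
        raise ValueError(f"Invalid interval format: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    td = timedelta(seconds=amount * _UNITS[unit])
    if td < timedelta(days=1):
        raise ValueError("Interval must be at least 1 day")
    if td > timedelta(days=30):
        raise ValueError("Interval must be at most 30 days")
    return td
```

Note that the bot itself does not raise on out-of-range values; it logs a warning and falls back to the channel's default interval for the current run.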
## State File
The update check state is stored in `.temp/update_check_state.json`. The file format is:
```json
{
"version": 1,
"last_check": "2024-03-20T12:00:00+00:00"
}
```
### Fields
- `version`: Current state file format version (integer)
- `last_check`: ISO 8601 timestamp of the last check (UTC)
### Migration
The state file supports version migration:
- Version 0 to 1: Added version field
- Future versions will be migrated automatically
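The version-0 to version-1 step can be sketched as follows (`migrate_state` is an illustrative name; the real logic lives in `UpdateCheckState._migrate_state`):

```python
CURRENT_STATE_VERSION = 1

def migrate_state(data: dict) -> dict:
    """Bring a state dict from an older format version up to the current one."""
    version = data.get("version", 0)
    if version == 0:
        # Version 0 -> 1: the version field itself did not exist yet
        data["version"] = CURRENT_STATE_VERSION
    return data
```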
### Timezone Handling
All timestamps are stored in UTC:
- When loading:
- Timestamps without timezone are assumed to be UTC
- Timestamps with timezone are converted to UTC
- When saving:
- All timestamps are converted to UTC before saving
- Timezone information is preserved in ISO 8601 format
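A minimal sketch of this normalization using only the standard library (`to_utc` is an illustrative name; the bot's own parser additionally logs a warning and returns `None` on failure):

```python
from datetime import datetime, timezone

def to_utc(timestamp_str: str) -> datetime:
    """Parse an ISO 8601 timestamp and normalize it to UTC."""
    ts = datetime.fromisoformat(timestamp_str)
    if ts.tzinfo is None:
        # Naive timestamps are assumed to already be in UTC
        return ts.replace(tzinfo=timezone.utc)
    # Aware timestamps are converted to UTC
    return ts.astimezone(timezone.utc)
```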
### Edge Cases
The following edge cases are handled:
- Missing state file: Creates new state file
- Corrupted state file: Creates new state file
- Invalid timestamp format: Logs warning, uses current time
- Permission errors: Logs warning, continues without saving
- Invalid interval format: Logs warning, falls back to the channel's default interval
- Interval too short/long: Logs warning, falls back to the channel's default interval
## Error Handling
The update check feature handles various error scenarios:
- Network errors: Logs error, continues without check
- GitHub API errors: Logs error, continues without check
- Version parsing errors: Logs error, continues without check
- State file errors: Logs error, creates new state file
- Permission errors: Logs error, continues without saving
## Logging
The feature logs various events:
- Check results (new version available, up to date, etc.)
- State file operations (load, save, migration)
- Error conditions (network, API, parsing, etc.)
- Interval validation warnings
- Timezone conversion information

pdm.lock generated

@@ -5,7 +5,7 @@
groups = ["default", "dev"]
strategy = ["inherit_metadata"]
lock_version = "4.5.0"
content_hash = "sha256:25eeef987c3fa08a52036fd696587f2fb89c6474225d7c9108e5d0281aa54d26"
[[metadata.targets]]
requires_python = ">=3.10,<3.14"
@@ -1147,6 +1147,20 @@ files = [
{file = "pytest_cov-6.2.1.tar.gz", hash = "sha256:25cc6cc0a5358204b8108ecedc51a9b57b34cc6b8c967cc2c01a4e00d8a67da2"},
]
[[package]]
name = "pytest-mock"
version = "3.14.0"
requires_python = ">=3.8"
summary = "Thin-wrapper around the mock package for easier use with pytest"
groups = ["dev"]
dependencies = [
"pytest>=6.2.5",
]
files = [
{file = "pytest-mock-3.14.0.tar.gz", hash = "sha256:2719255a1efeceadbc056d6bf3df3d1c5015530fb40cf347c0f9afac88410bd0"},
{file = "pytest_mock-3.14.0-py3-none-any.whl", hash = "sha256:0b72c38033392a5f4621342fe11e9219ac11ec9d375f8e2a0c164539e0d70f6f"},
]
[[package]]
name = "pytest-rerunfailures"
version = "15.1"
@@ -1384,6 +1398,20 @@ files = [
{file = "tomli-2.2.1.tar.gz", hash = "sha256:cd45e1dc79c835ce60f7404ec8119f2eb06d38b1deba146f07ced3bbc44505ff"},
]
[[package]]
name = "types-requests"
version = "2.32.0.20250515"
requires_python = ">=3.9"
summary = "Typing stubs for requests"
groups = ["dev"]
dependencies = [
"urllib3>=2",
]
files = [
{file = "types_requests-2.32.0.20250515-py3-none-any.whl", hash = "sha256:f8eba93b3a892beee32643ff836993f15a785816acca21ea0ffa006f05ef0fb2"},
{file = "types_requests-2.32.0.20250515.tar.gz", hash = "sha256:09c8b63c11318cb2460813871aaa48b671002e59fda67ca909e9883777787581"},
]
[[package]]
name = "typing-extensions"
version = "4.14.0"


@@ -47,23 +47,20 @@ dependencies = [
[dependency-groups] # https://peps.python.org/pep-0735/
dev = [
# security
"pip-audit",
# testing:
"pytest>=8.3.4",
"pytest-asyncio>=0.25.3",
"pytest-rerunfailures",
"pytest-cov>=6.0.0",
# linting:
"ruff",
"mypy",
"basedpyright",
# formatting
"autopep8",
"yamlfix",
# packaging:
"pyinstaller",
"platformdirs",
"types-requests>=2.32.0.20250515",
"pytest-mock>=3.14.0",
]
[project.urls]


@@ -344,6 +344,32 @@
},
"title": "PublishingConfig",
"type": "object"
},
"UpdateCheckConfig": {
"description": "Configuration for update checking functionality.",
"properties": {
"enabled": {
"default": true,
"title": "Enabled",
"type": "boolean"
},
"channel": {
"default": "latest",
"enum": [
"latest",
"preview"
],
"title": "Channel",
"type": "string"
},
"interval": {
"default": "7d",
"title": "Interval",
"type": "string"
}
},
"title": "UpdateCheckConfig",
"type": "object"
}
},
"properties": {
@@ -384,6 +410,10 @@
},
"captcha": {
"$ref": "#/$defs/CaptchaConfig"
},
"update_check": {
"$ref": "#/$defs/UpdateCheckConfig",
"description": "Update check configuration"
}
},
"title": "Config",


@@ -15,6 +15,7 @@ from . import extract, resources
from ._version import __version__
from .model.ad_model import MAX_DESCRIPTION_LENGTH, Ad, AdPartial
from .model.config_model import Config
from .update_checker import UpdateChecker
from .utils import dicts, error_handlers, loggers, misc
from .utils.exceptions import CaptchaEncountered
from .utils.files import abspath
@@ -75,18 +76,30 @@ class KleinanzeigenBot(WebScrapingMixin):
match self.command:
case "help":
self.show_help()
return
case "version":
print(self.get_version())
case "verify":
self.configure_file_logging()
self.load_config()
# Check for updates on startup
checker = UpdateChecker(self.config)
checker.check_for_updates()
self.load_ads()
LOG.info("############################################")
LOG.info("DONE: No configuration errors found.")
LOG.info("############################################")
case "update-check":
self.configure_file_logging()
self.load_config()
checker = UpdateChecker(self.config)
checker.check_for_updates(skip_interval_check = True)
case "update-content-hash":
self.configure_file_logging()
self.load_config()
# Check for updates on startup
checker = UpdateChecker(self.config)
checker.check_for_updates()
self.ads_selector = "all"
if ads := self.load_ads(exclude_ads_with_id = False):
self.update_content_hashes(ads)
@@ -97,6 +110,9 @@ class KleinanzeigenBot(WebScrapingMixin):
case "publish":
self.configure_file_logging()
self.load_config()
# Check for updates on startup
checker = UpdateChecker(self.config)
checker.check_for_updates()
if not (self.ads_selector in {"all", "new", "due", "changed"} or
any(selector in self.ads_selector.split(",") for selector in ("all", "new", "due", "changed")) or
@@ -134,6 +150,9 @@ class KleinanzeigenBot(WebScrapingMixin):
case "delete":
self.configure_file_logging()
self.load_config()
# Check for updates on startup
checker = UpdateChecker(self.config)
checker.check_for_updates()
if ads := self.load_ads():
await self.create_browser_session()
await self.login()
@@ -149,6 +168,9 @@ class KleinanzeigenBot(WebScrapingMixin):
LOG.warning('You provided no ads selector. Defaulting to "new".')
self.ads_selector = "new"
self.load_config()
# Check for updates on startup
checker = UpdateChecker(self.config)
checker.check_for_updates()
await self.create_browser_session()
await self.login()
await self.download_ads()
@@ -177,6 +199,7 @@ class KleinanzeigenBot(WebScrapingMixin):
delete - Löscht Anzeigen
update - Aktualisiert bestehende Anzeigen
download - Lädt eine oder mehrere Anzeigen herunter
update-check - Prüft auf verfügbare Updates
update-content-hash - Berechnet den content_hash aller Anzeigen anhand der aktuellen ad_defaults neu;
nach Änderungen an den config.yaml/ad_defaults verhindert es, dass alle Anzeigen als
"geändert" gelten und neu veröffentlicht werden.
@@ -220,7 +243,8 @@ class KleinanzeigenBot(WebScrapingMixin):
delete - deletes ads
update - updates published ads
download - downloads one or multiple ads
update-check - checks for available updates
update-content-hash recalculates each ad's content_hash based on the current ad_defaults;
use this after changing config.yaml/ad_defaults to avoid every ad being marked "changed" and republished
--
help - displays this help (default command)
@@ -498,8 +522,8 @@ class KleinanzeigenBot(WebScrapingMixin):
if not os.path.exists(self.config_file_path):
LOG.warning("Config file %s does not exist. Creating it with default values...", self.config_file_path)
default_config = Config.model_construct()
default_config.login.username = "changeme" # noqa: S105 placeholder for default config, not a real username
default_config.login.password = "changeme" # noqa: S105 placeholder for default config, not a real password
dicts.save_dict(self.config_file_path, default_config.model_dump(exclude_none = True, exclude = {
"ad_defaults": {
"description" # deprecated


@@ -9,6 +9,7 @@ from typing import Annotated, Any, List, Literal
from pydantic import AfterValidator, Field, model_validator
from typing_extensions import deprecated
from kleinanzeigen_bot.model.update_check_model import UpdateCheckConfig
from kleinanzeigen_bot.utils import dicts
from kleinanzeigen_bot.utils.misc import get_attr
from kleinanzeigen_bot.utils.pydantics import ContextualModel
@@ -142,6 +143,7 @@ Example:
browser:BrowserConfig = Field(default_factory = BrowserConfig, description = "Browser configuration")
login:LoginConfig = Field(default_factory = LoginConfig.model_construct, description = "Login credentials")
captcha:CaptchaConfig = Field(default_factory = CaptchaConfig)
update_check:UpdateCheckConfig = Field(default_factory = UpdateCheckConfig, description = "Update check configuration")
def with_values(self, values:dict[str, Any]) -> Config:
return Config.model_validate(


@@ -0,0 +1,27 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
from typing import Literal
from kleinanzeigen_bot.utils.pydantics import ContextualModel
class UpdateCheckConfig(ContextualModel):
"""Configuration for update checking functionality.
Attributes:
enabled: Whether update checking is enabled.
channel: Which release channel to check ('latest' for stable, 'preview' for prereleases).
interval: How often to check for updates (e.g. '7d', '1d').
If the interval is invalid, too short (<1d), or too long (>30d),
the bot will log a warning and use a default interval for this run:
- 1d for 'preview' channel
- 7d for 'latest' channel
The config file is not changed automatically; please fix your config to avoid repeated warnings.
"""
enabled:bool = True
channel:Literal["latest", "preview"] = "latest"
interval:str = "7d" # Default interval of 7 days


@@ -0,0 +1,194 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
import datetime
import json
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from pathlib import Path
from kleinanzeigen_bot.utils import dicts, loggers, misc
from kleinanzeigen_bot.utils.pydantics import ContextualModel
LOG = loggers.get_logger(__name__)
# Current version of the state file format
CURRENT_STATE_VERSION = 1
# Maximum allowed interval in days
MAX_INTERVAL_DAYS = 30
class UpdateCheckState(ContextualModel):
"""State for update checking functionality."""
version:int = CURRENT_STATE_VERSION
last_check:datetime.datetime | None = None
@classmethod
def _parse_timestamp(cls, timestamp_str:str) -> datetime.datetime | None:
"""Parse a timestamp string and ensure it's in UTC.
Args:
timestamp_str: The timestamp string to parse.
Returns:
The parsed timestamp in UTC, or None if parsing fails.
"""
try:
timestamp = datetime.datetime.fromisoformat(timestamp_str)
if timestamp.tzinfo is None:
# If no timezone info, assume UTC
timestamp = timestamp.replace(tzinfo = datetime.timezone.utc)
elif timestamp.tzinfo != datetime.timezone.utc:
# Convert to UTC if in a different timezone
timestamp = timestamp.astimezone(datetime.timezone.utc)
return timestamp
except ValueError as e:
LOG.warning("Invalid timestamp format in state file: %s", e)
return None
@classmethod
def load(cls, state_file:Path) -> UpdateCheckState:
"""Load the update check state from a file.
Args:
state_file: The path to the state file.
Returns:
The loaded state.
"""
if not state_file.exists():
return cls()
if state_file.stat().st_size == 0:
return cls()
try:
data = dicts.load_dict(str(state_file))
if not data:
return cls()
# Handle version migration
version = data.get("version", 0)
if version < CURRENT_STATE_VERSION:
LOG.info("Migrating update check state from version %d to %d", version, CURRENT_STATE_VERSION)
data = cls._migrate_state(data, version)
# Parse last_check timestamp
if "last_check" in data:
data["last_check"] = cls._parse_timestamp(data["last_check"])
return cls.model_validate(data)
except (json.JSONDecodeError, ValueError) as e:
LOG.warning("Failed to load update check state: %s", e)
return cls()
@classmethod
def _migrate_state(cls, data:dict[str, Any], from_version:int) -> dict[str, Any]:
"""Migrate state data from an older version to the current version.
Args:
data: The state data to migrate.
from_version: The version of the state data.
Returns:
The migrated state data.
"""
# Version 0 to 1: Add version field
if from_version == 0:
data["version"] = CURRENT_STATE_VERSION
LOG.debug("Migrated state from version 0 to 1: Added version field")
return data
def save(self, state_file:Path) -> None:
"""Save the update check state to a file.
Args:
state_file: The path to the state file.
"""
try:
data = self.model_dump()
if data["last_check"]:
# Ensure timestamp is in UTC before saving
if data["last_check"].tzinfo != datetime.timezone.utc:
data["last_check"] = data["last_check"].astimezone(datetime.timezone.utc)
data["last_check"] = data["last_check"].isoformat()
dicts.save_dict(str(state_file), data)
except PermissionError:
LOG.warning("Permission denied when saving update check state to %s", state_file)
except Exception as e:
LOG.warning("Failed to save update check state: %s", e)
def update_last_check(self) -> None:
"""Update the last check time to now in UTC."""
self.last_check = datetime.datetime.now(datetime.timezone.utc)
def _validate_update_interval(self, interval:str) -> tuple[datetime.timedelta, bool, str]:
"""
Validate the update check interval string.
Returns (timedelta, is_valid, reason).
"""
td = misc.parse_duration(interval)
# Distinguish an explicit zero (e.g. "0d", "0h", "0m", "0s", "0") from an unparsable string: both are invalid, but they get different messages
if td.total_seconds() == 0:
if interval.strip() in {"0d", "0h", "0m", "0s", "0"}:
return td, False, "Interval is zero, which is not allowed."
return td, False, "Invalid interval format or unsupported unit."
if td.total_seconds() < 0:
return td, False, "Negative interval is not allowed."
return td, True, ""
def should_check(self, interval:str, channel:str = "latest") -> bool:
"""
Determine if an update check should be performed based on the provided interval.
Args:
interval: The interval string (e.g. '7d', '1d 12h', etc.)
channel: The update channel ('latest' or 'preview') for fallback default interval.
Returns:
bool: True if an update check should be performed, False otherwise.
Notes:
- If interval is invalid, negative, zero, or above max, falls back to default interval for the channel.
- Only returns True if more than the interval has passed since last_check.
- Always compares in UTC.
"""
fallback = False
td = None
reason = ""
td, is_valid, reason = self._validate_update_interval(interval)
total_days = td.total_seconds() / 86400 if td else 0
epsilon = 1e-6
if not is_valid:
if reason == "Interval is zero, which is not allowed.":
LOG.warning("Interval is zero: %s. Minimum interval is 1d. Using default interval for this run.", interval)
elif reason == "Invalid interval format or unsupported unit.":
LOG.warning("Invalid interval format or unsupported unit: %s. Using default interval for this run.", interval)
elif reason == "Negative interval is not allowed.":
LOG.warning("Negative interval: %s. Minimum interval is 1d. Using default interval for this run.", interval)
fallback = True
elif total_days > MAX_INTERVAL_DAYS + epsilon:
LOG.warning("Interval too long: %s. Maximum interval is 30d. Using default interval for this run.", interval)
fallback = True
elif total_days < 1 - epsilon:
LOG.warning("Interval too short: %s. Minimum interval is 1d. Using default interval for this run.", interval)
fallback = True
if fallback:
# Fallback to default interval based on channel
if channel == "preview":
td = misc.parse_duration("1d")
LOG.warning("Falling back to default interval: 1d (preview channel). Please fix your config to avoid this warning.")
else:
td = misc.parse_duration("7d")
LOG.warning("Falling back to default interval: 7d (latest channel). Please fix your config to avoid this warning.")
if not self.last_check:
return True
now = datetime.datetime.now(datetime.timezone.utc)
elapsed = now - self.last_check
# Compare using integer seconds to avoid microsecond-level flakiness
return int(elapsed.total_seconds()) > int(td.total_seconds())


@@ -393,3 +393,51 @@ kleinanzeigen_bot/utils/web_scraping_mixin.py:
web_request:
" -> HTTP %s [%s]...": " -> HTTP %s [%s]..."
#################################################
kleinanzeigen_bot/update_checker.py:
#################################################
_get_commit_date:
"Could not get commit date: %s": "Konnte Commit-Datum nicht ermitteln: %s"
_get_release_commit:
"Could not get release commit: %s": "Konnte Release-Commit nicht ermitteln: %s"
check_for_updates:
"A new version is available: %s from %s (current: %s from %s, channel: %s)": "Eine neue Version ist verfügbar: %s vom %s (aktuell: %s vom %s, Kanal: %s)"
"Could not determine commit dates for comparison.": "Konnte Commit-Daten für den Vergleich nicht ermitteln."
"Could not determine local commit hash.": "Konnte lokalen Commit-Hash nicht ermitteln."
"Could not determine local version.": "Konnte lokale Version nicht ermitteln."
"Could not determine release commit hash.": "Konnte Release-Commit-Hash nicht ermitteln."
"Could not get releases: %s": "Konnte Releases nicht abrufen: %s"
"Failed to get commit dates: %s": "Fehler beim Abrufen der Commit-Daten: %s"
"Failed to get release commit: %s": "Fehler beim Abrufen des Release-Commits: %s"
? "Release notes:\n%s"
: "Release-Notizen:\n%s"
"You are on the latest version: %s (compared to %s in channel %s)": "Sie verwenden die neueste Version: %s (verglichen mit %s im Kanal %s)"
"Latest release from GitHub is a prerelease, but 'latest' channel expects a stable release.": "Die neueste GitHub-Version ist eine Vorabversion, aber der 'latest'-Kanal erwartet eine stabile Version."
"No prerelease found for 'preview' channel.": "Keine Vorabversion für den 'preview'-Kanal gefunden."
"Unknown update channel: %s": "Unbekannter Update-Kanal: %s"
? "You are on a different commit than the release for channel '%s' (tag: %s). This may mean you are ahead, behind, or on a different branch. Local commit: %s (%s), Release commit: %s (%s)"
: "Sie befinden sich auf einem anderen Commit als das Release für Kanal '%s' (Tag: %s). Dies kann bedeuten, dass Sie voraus, hinterher oder auf einem anderen Branch sind. Lokaler Commit: %s (%s), Release-Commit: %s (%s)"
#################################################
kleinanzeigen_bot/model/update_check_state.py:
#################################################
_parse_timestamp:
"Invalid timestamp format in state file: %s": "Ungültiges Zeitstempel-Format in der Statusdatei: %s"
load:
"Failed to load update check state: %s": "Fehler beim Laden des Update-Prüfstatus: %s"
"Migrating update check state from version %d to %d": "Migriere Update-Prüfstatus von Version %d zu %d"
save:
"Failed to save update check state: %s": "Fehler beim Speichern des Update-Prüfstatus: %s"
"Permission denied when saving update check state to %s": "Keine Berechtigung zum Speichern des Update-Prüfstatus in %s"
should_check:
"Falling back to default interval: 1d (preview channel). Please fix your config to avoid this warning.": "Falle auf das Standardintervall zurück: 1 Tag (Vorschaukanal). Bitte korrigieren Sie Ihre Konfiguration, um diese Warnung zu vermeiden."
"Falling back to default interval: 7d (latest channel). Please fix your config to avoid this warning.": "Falle auf das Standardintervall zurück: 7 Tage (Stabiler Kanal). Bitte korrigieren Sie Ihre Konfiguration, um diese Warnung zu vermeiden."
"Interval is zero: %s. Minimum interval is 1d. Using default interval for this run.": "Intervall ist null: %s. Das Mindestintervall beträgt 1 Tag. Es wird das Standardintervall für diesen Durchlauf verwendet."
"Interval too long: %s. Maximum interval is 30d. Using default interval for this run.": "Intervall zu lang: %s. Das maximale Intervall beträgt 30 Tage. Es wird das Standardintervall für diesen Durchlauf verwendet."
"Interval too short: %s. Minimum interval is 1d. Using default interval for this run.": "Intervall zu kurz: %s. Das Mindestintervall beträgt 1 Tag. Es wird das Standardintervall für diesen Durchlauf verwendet."
"Invalid interval format or unsupported unit: %s. Using default interval for this run.": "Ungültiges Intervallformat oder nicht unterstützte Einheit: %s. Es wird das Standardintervall für diesen Durchlauf verwendet."
"Negative interval: %s. Minimum interval is 1d. Using default interval for this run.": "Negatives Intervall: %s. Das Mindestintervall beträgt 1 Tag. Es wird das Standardintervall für diesen Durchlauf verwendet."


@@ -0,0 +1,233 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
import logging
from datetime import datetime
from pathlib import Path
from typing import TYPE_CHECKING
import colorama
import requests
if TYPE_CHECKING:
from kleinanzeigen_bot.model.config_model import Config
try:
from kleinanzeigen_bot._version import __version__
except ImportError:
__version__ = "unknown"
from kleinanzeigen_bot.model.update_check_state import UpdateCheckState
logger = logging.getLogger(__name__)
colorama.init()
class UpdateChecker:
"""Checks for updates to the bot."""
def __init__(self, config:"Config") -> None:
"""Initialize the update checker.
Args:
config: The bot configuration.
"""
self.config = config
self.state_file = Path(".temp") / "update_check_state.json"
self.state_file.parent.mkdir(exist_ok = True) # Ensure .temp directory exists
self.state = UpdateCheckState.load(self.state_file)
def get_local_version(self) -> str | None:
"""Get the local version of the bot.
Returns:
The local version string, or None if it cannot be determined.
"""
return __version__
def _get_commit_hash(self, version:str) -> str | None:
"""Extract the commit hash from a version string.
Args:
version: The version string to extract the commit hash from.
Returns:
The commit hash, or None if it cannot be extracted.
"""
if "+" in version:
return version.split("+")[1]
return None
def _get_release_commit(self, tag_name:str) -> str | None:
"""Get the commit hash for a release tag.
Args:
tag_name: The release tag name (e.g. 'latest').
Returns:
The commit hash, or None if it cannot be determined.
"""
try:
response = requests.get(
f"https://api.github.com/repos/Second-Hand-Friends/kleinanzeigen-bot/releases/tags/{tag_name}",
timeout = 10
)
response.raise_for_status()
data = response.json()
if isinstance(data, dict) and "target_commitish" in data:
return str(data["target_commitish"])
return None
except Exception as e:
logger.warning("Could not get release commit: %s", e)
return None
def _get_commit_date(self, commit:str) -> datetime | None:
"""Get the commit date for a commit hash.
Args:
commit: The commit hash.
Returns:
The commit date, or None if it cannot be determined.
"""
try:
response = requests.get(
f"https://api.github.com/repos/Second-Hand-Friends/kleinanzeigen-bot/commits/{commit}",
timeout = 10
)
response.raise_for_status()
data = response.json()
if isinstance(data, dict) and "commit" in data and "author" in data["commit"] and "date" in data["commit"]["author"]:
return datetime.fromisoformat(data["commit"]["author"]["date"].replace("Z", "+00:00"))
return None
except Exception as e:
logger.warning("Could not get commit date: %s", e)
return None
def _get_short_commit_hash(self, commit:str) -> str:
"""Get the short version of a commit hash.
Args:
commit: The full commit hash.
Returns:
The short commit hash (first 7 characters).
"""
return commit[:7]
def check_for_updates(self, *, skip_interval_check:bool = False) -> None:
"""Check for updates to the bot.
Args:
skip_interval_check: If True, bypass the interval check and force an update check.
"""
if not self.config.update_check.enabled:
return
# Check if we should perform an update check based on the interval
if not skip_interval_check and not self.state.should_check(self.config.update_check.interval, self.config.update_check.channel):
return
local_version = self.get_local_version()
if not local_version:
logger.warning("Could not determine local version.")
return
local_commit = self._get_commit_hash(local_version)
if not local_commit:
logger.warning("Could not determine local commit hash.")
return
# --- Fetch release info from GitHub using correct endpoint per channel ---
try:
if self.config.update_check.channel == "latest":
# Use /releases/latest endpoint for stable releases
response = requests.get(
"https://api.github.com/repos/Second-Hand-Friends/kleinanzeigen-bot/releases/latest",
timeout = 10
)
response.raise_for_status()
release = response.json()
# Defensive: ensure it's not a prerelease
if release.get("prerelease", False):
logger.warning("Latest release from GitHub is a prerelease, but 'latest' channel expects a stable release.")
return
elif self.config.update_check.channel == "preview":
# Use /releases endpoint and select the most recent prerelease
response = requests.get(
"https://api.github.com/repos/Second-Hand-Friends/kleinanzeigen-bot/releases",
timeout = 10
)
response.raise_for_status()
releases = response.json()
# Find the most recent prerelease
release = next((r for r in releases if r.get("prerelease", False)), None)
if not release:
logger.warning("No prerelease found for 'preview' channel.")
return
else:
logger.warning("Unknown update channel: %s", self.config.update_check.channel)
return
except Exception as e:
logger.warning("Could not get releases: %s", e)
return
# Get release commit
try:
release_commit = self._get_release_commit(release["tag_name"])
except Exception as e:
logger.warning("Failed to get release commit: %s", e)
return
if not release_commit:
logger.warning("Could not determine release commit hash.")
return
# Get commit dates
try:
local_commit_date = self._get_commit_date(local_commit)
release_commit_date = self._get_commit_date(release_commit)
except Exception as e:
logger.warning("Failed to get commit dates: %s", e)
return
if not local_commit_date or not release_commit_date:
logger.warning("Could not determine commit dates for comparison.")
return
if local_commit == release_commit:
logger.info(
"You are on the latest version: %s (compared to %s in channel %s)",
local_version,
self._get_short_commit_hash(release_commit),
self.config.update_check.channel
)
# We cannot reliably determine ahead/behind without git. Use commit dates as a weak heuristic, but clarify in the log.
elif local_commit_date < release_commit_date:
logger.warning(
"A new version is available: %s from %s (current: %s from %s, channel: %s)",
self._get_short_commit_hash(release_commit),
release_commit_date.strftime("%Y-%m-%d %H:%M:%S"),
local_version,
local_commit_date.strftime("%Y-%m-%d %H:%M:%S"),
self.config.update_check.channel
)
if release.get("body"):
logger.info("Release notes:\n%s", release["body"])
else:
logger.info(
"You are on a different commit than the release for channel '%s' (tag: %s). This may mean you are ahead, behind, or on a different branch. "
"Local commit: %s (%s), Release commit: %s (%s)",
self.config.update_check.channel,
release.get("tag_name", "unknown"),
self._get_short_commit_hash(local_commit),
local_commit_date.strftime("%Y-%m-%d %H:%M:%S"),
self._get_short_commit_hash(release_commit),
release_commit_date.strftime("%Y-%m-%d %H:%M:%S")
)
# Update the last check time
self.state.update_last_check()
self.state.save(self.state_file)


@@ -297,13 +297,11 @@ class TestKleinanzeigenBotConfiguration:
test_bot.config_file_path = str(config_path)
with patch.object(LOG, "warning") as mock_warning:
test_bot.load_config()
mock_warning.assert_called_once()
assert config_path.exists()
assert test_bot.config.login.username == "changeme" # noqa: S105 placeholder for default config, not a real username
assert test_bot.config.login.password == "changeme" # noqa: S105 placeholder for default config, not a real password
def test_load_config_validates_required_fields(self, test_bot:KleinanzeigenBot, test_data_dir:str) -> None:
"""Verify that config validation checks required fields."""


@@ -0,0 +1,644 @@
# SPDX-FileCopyrightText: © Jens Bergmann and contributors
# SPDX-License-Identifier: AGPL-3.0-or-later
# SPDX-ArtifactOfProjectHomePage: https://github.com/Second-Hand-Friends/kleinanzeigen-bot/
from __future__ import annotations
import json
from datetime import datetime, timedelta, timezone
from typing import TYPE_CHECKING
from unittest.mock import MagicMock, patch
if TYPE_CHECKING:
from pathlib import Path
import pytest
import requests
if TYPE_CHECKING:
from pytest_mock import MockerFixture
from kleinanzeigen_bot.model.config_model import Config
from kleinanzeigen_bot.model.update_check_state import UpdateCheckState
from kleinanzeigen_bot.update_checker import UpdateChecker
@pytest.fixture
def config() -> Config:
return Config.model_validate({
"update_check": {
"enabled": True,
"channel": "latest",
"interval": "7d"
}
})
@pytest.fixture
def state_file(tmp_path:Path) -> Path:
return tmp_path / "update_check_state.json"
class TestUpdateChecker:
"""Tests for the update checker functionality."""
def test_get_local_version(self, config:Config) -> None:
"""Test that the local version is correctly retrieved."""
checker = UpdateChecker(config)
assert checker.get_local_version() is not None
def test_get_commit_hash(self, config:Config) -> None:
"""Test that the commit hash is correctly extracted from the version string."""
checker = UpdateChecker(config)
assert checker._get_commit_hash("2025+fb00f11") == "fb00f11"
assert checker._get_commit_hash("2025") is None
def test_get_release_commit(self, config:Config) -> None:
"""Test that the release commit hash is correctly retrieved from the GitHub API."""
checker = UpdateChecker(config)
with patch("requests.get", return_value = MagicMock(json = lambda: {"target_commitish": "e7a3d46"})):
assert checker._get_release_commit("latest") == "e7a3d46"
def test_get_commit_date(self, config:Config) -> None:
"""Test that the commit date is correctly retrieved from the GitHub API."""
checker = UpdateChecker(config)
with patch("requests.get", return_value = MagicMock(json = lambda: {"commit": {"author": {"date": "2025-05-18T00:00:00Z"}}})):
assert checker._get_commit_date("e7a3d46") == datetime(2025, 5, 18, tzinfo = timezone.utc)
def test_check_for_updates_disabled(self, config:Config) -> None:
"""Test that the update checker does not check for updates if disabled."""
config.update_check.enabled = False
checker = UpdateChecker(config)
with patch("requests.get") as mock_get:
checker.check_for_updates()
mock_get.assert_not_called()
def test_check_for_updates_no_local_version(self, config:Config) -> None:
"""Test that the update checker handles the case where the local version cannot be determined."""
checker = UpdateChecker(config)
with patch.object(UpdateChecker, "get_local_version", return_value = None):
checker.check_for_updates() # Should not raise exception
def test_check_for_updates_no_commit_hash(self, config:Config) -> None:
"""Test that the update checker handles the case where the commit hash cannot be extracted."""
checker = UpdateChecker(config)
with patch.object(UpdateChecker, "get_local_version", return_value = "2025"):
checker.check_for_updates() # Should not raise exception
def test_check_for_updates_no_releases(self, config:Config) -> None:
"""Test that the update checker handles the case where no releases are found."""
checker = UpdateChecker(config)
with patch("requests.get", return_value = MagicMock(json = list)):
checker.check_for_updates() # Should not raise exception
def test_check_for_updates_api_error(self, config:Config) -> None:
"""Test that the update checker handles API errors gracefully."""
checker = UpdateChecker(config)
with patch("requests.get", side_effect = Exception("API Error")):
checker.check_for_updates() # Should not raise exception
def test_check_for_updates_ahead(self, config:Config, mocker:"MockerFixture", caplog:pytest.LogCaptureFixture) -> None:
"""Test that the update checker correctly identifies when the local version is ahead of the latest release."""
caplog.set_level("INFO", logger = "kleinanzeigen_bot.update_checker")
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(UpdateChecker, "_get_release_commit", return_value = "e7a3d46")
mocker.patch.object(
UpdateChecker,
"_get_commit_date",
side_effect = [
datetime(2025, 5, 18, tzinfo = timezone.utc),
datetime(2025, 5, 16, tzinfo = timezone.utc)
]
)
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(json = lambda: {"tag_name": "latest", "prerelease": False})
)
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker = UpdateChecker(config)
checker.check_for_updates()
expected = (
"You are on a different commit than the release for channel 'latest' (tag: latest). This may mean you are ahead, behind, or on a different branch. "
"Local commit: fb00f11 (2025-05-18 00:00:00), Release commit: e7a3d46 (2025-05-16 00:00:00)"
)
assert any(expected in r.getMessage() for r in caplog.records)
def test_check_for_updates_preview(self, config:Config, mocker:"MockerFixture", caplog:pytest.LogCaptureFixture) -> None:
"""Test that the update checker correctly handles preview releases."""
caplog.set_level("INFO", logger = "kleinanzeigen_bot.update_checker")
config.update_check.channel = "preview"
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(UpdateChecker, "_get_release_commit", return_value = "e7a3d46")
mocker.patch.object(
UpdateChecker,
"_get_commit_date",
side_effect = [
datetime(2025, 5, 18, tzinfo = timezone.utc),
datetime(2025, 5, 16, tzinfo = timezone.utc)
]
)
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(json = lambda: [{"tag_name": "preview", "prerelease": True}])
)
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker = UpdateChecker(config)
checker.check_for_updates()
expected = (
"You are on a different commit than the release for channel 'preview' (tag: preview). "
"This may mean you are ahead, behind, or on a different branch. "
"Local commit: fb00f11 (2025-05-18 00:00:00), Release commit: e7a3d46 (2025-05-16 00:00:00)"
)
assert any(expected in r.getMessage() for r in caplog.records)
def test_check_for_updates_behind(self, config:Config, mocker:"MockerFixture", caplog:pytest.LogCaptureFixture) -> None:
"""Test that the update checker correctly identifies when the local version is behind the latest release."""
caplog.set_level("INFO", logger = "kleinanzeigen_bot.update_checker")
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(UpdateChecker, "_get_release_commit", return_value = "e7a3d46")
mocker.patch.object(
UpdateChecker,
"_get_commit_date",
side_effect = [
datetime(2025, 5, 16, tzinfo = timezone.utc),
datetime(2025, 5, 18, tzinfo = timezone.utc)
]
)
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(json = lambda: {"tag_name": "latest", "prerelease": False})
)
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker = UpdateChecker(config)
checker.check_for_updates()
expected = "A new version is available: e7a3d46 from 2025-05-18 00:00:00 (current: 2025+fb00f11 from 2025-05-16 00:00:00, channel: latest)"
assert any(expected in r.getMessage() for r in caplog.records)
def test_check_for_updates_same(self, config:Config, mocker:"MockerFixture", caplog:pytest.LogCaptureFixture) -> None:
"""Test that the update checker correctly identifies when the local version is the same as the latest release."""
caplog.set_level("INFO", logger = "kleinanzeigen_bot.update_checker")
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(UpdateChecker, "_get_release_commit", return_value = "fb00f11")
mocker.patch.object(
UpdateChecker,
"_get_commit_date",
return_value = datetime(2025, 5, 18, tzinfo = timezone.utc)
)
mocker.patch.object(
requests,
"get",
return_value = mocker.Mock(json = lambda: {"tag_name": "latest", "prerelease": False})
)
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker = UpdateChecker(config)
checker.check_for_updates()
expected = "You are on the latest version: 2025+fb00f11 (compared to fb00f11 in channel latest)"
assert any(expected in r.getMessage() for r in caplog.records)
def test_update_check_state_empty_file(self, state_file:Path) -> None:
"""Test that loading an empty state file returns a new state."""
state_file.touch() # Create empty file
state = UpdateCheckState.load(state_file)
assert state.last_check is None
def test_update_check_state_invalid_data(self, state_file:Path) -> None:
"""Test that loading invalid state data returns a new state."""
state_file.write_text("invalid json")
state = UpdateCheckState.load(state_file)
assert state.last_check is None
def test_update_check_state_missing_last_check(self, state_file:Path) -> None:
"""Test that loading state data without last_check returns a new state."""
state_file.write_text("{}")
state = UpdateCheckState.load(state_file)
assert state.last_check is None
def test_update_check_state_save_error(self, state_file:Path) -> None:
"""Test that saving state handles errors gracefully."""
state = UpdateCheckState()
state.last_check = datetime.now(timezone.utc)
# Make the file read-only to cause a save error
state_file.touch()
state_file.chmod(0o444)
# Should not raise an exception
state.save(state_file)
def test_update_check_state_interval_units(self) -> None:
"""Test that different interval units are handled correctly."""
state = UpdateCheckState()
now = datetime.now(timezone.utc)
        # Test seconds (sub-day intervals are too short; fallback to 7d, only seconds elapsed, so should_check is False)
state.last_check = now - timedelta(seconds = 30)
assert state.should_check("60s") is False
assert state.should_check("20s") is False
# Test minutes (should always be too short)
state.last_check = now - timedelta(minutes = 30)
assert state.should_check("60m") is False
assert state.should_check("20m") is False
# Test hours (should always be too short)
state.last_check = now - timedelta(hours = 2)
assert state.should_check("4h") is False
assert state.should_check("1h") is False
# Test days
state.last_check = now - timedelta(days = 3)
assert state.should_check("7d") is False
assert state.should_check("2d") is True
state.last_check = now - timedelta(days = 3)
assert state.should_check("3d") is False
state.last_check = now - timedelta(days = 3, seconds = 1)
assert state.should_check("3d") is True
        # Test multi-day intervals
state.last_check = now - timedelta(days = 14)
assert state.should_check("14d") is False
state.last_check = now - timedelta(days = 14, seconds = 1)
assert state.should_check("14d") is True
# Test invalid unit (should fallback to 7d, 14 days elapsed, so should_check is True)
state.last_check = now - timedelta(days = 14)
assert state.should_check("1x") is True
# If fallback interval has not elapsed, should_check is False
state.last_check = now - timedelta(days = 6)
assert state.should_check("1x") is False
# Test truly unknown unit (case _)
state.last_check = now - timedelta(days = 14)
assert state.should_check("1z") is True
state.last_check = now - timedelta(days = 6)
assert state.should_check("1z") is False
def test_update_check_state_interval_validation(self) -> None:
"""Test that interval validation works correctly."""
state = UpdateCheckState()
now = datetime.now(timezone.utc)
state.last_check = now - timedelta(days = 1)
# Test minimum value (1d)
assert state.should_check("12h") is False # Too short, fallback to 7d, only 1 day elapsed
assert state.should_check("1d") is False # Minimum allowed
assert state.should_check("2d") is False # Valid, but only 1 day elapsed
# Test maximum value (30d)
assert state.should_check("31d") is False # Too long, fallback to 7d, only 1 day elapsed
assert state.should_check("60d") is False # Too long, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 30)
assert state.should_check("30d") is False # Exactly 30 days, should_check is False
state.last_check = now - timedelta(days = 30, seconds = 1)
assert state.should_check("30d") is True # Should check if just over interval
state.last_check = now - timedelta(days = 21)
assert state.should_check("21d") is False # Exactly 21 days, should_check is False
state.last_check = now - timedelta(days = 21, seconds = 1)
assert state.should_check("21d") is True # Should check if just over interval
state.last_check = now - timedelta(days = 7)
assert state.should_check("7d") is False # 7 days, should_check is False
state.last_check = now - timedelta(days = 7, seconds = 1)
assert state.should_check("7d") is True # Should check if just over interval
# Test negative values
state.last_check = now - timedelta(days = 1)
assert state.should_check("-1d") is False # Negative value, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("-1d") is True # Negative value, fallback to 7d, 8 days elapsed
# Test zero value
state.last_check = now - timedelta(days = 1)
assert state.should_check("0d") is False # Zero value, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("0d") is True # Zero value, fallback to 7d, 8 days elapsed
# Test invalid formats
state.last_check = now - timedelta(days = 1)
assert state.should_check("invalid") is False # Invalid format, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("invalid") is True # Invalid format, fallback to 7d, 8 days elapsed
state.last_check = now - timedelta(days = 1)
assert state.should_check("1") is False # Missing unit, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("1") is True # Missing unit, fallback to 7d, 8 days elapsed
state.last_check = now - timedelta(days = 1)
assert state.should_check("d") is False # Missing value, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("d") is True # Missing value, fallback to 7d, 8 days elapsed
# Test unit conversions (all sub-day intervals are too short)
state.last_check = now - timedelta(days = 1)
assert state.should_check("24h") is False # 1 day in hours, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("24h") is True # 1 day in hours, fallback to 7d, 8 days elapsed
state.last_check = now - timedelta(days = 1)
assert state.should_check("1440m") is False # 1 day in minutes, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("1440m") is True # 1 day in minutes, fallback to 7d, 8 days elapsed
state.last_check = now - timedelta(days = 1)
assert state.should_check("86400s") is False # 1 day in seconds, fallback to 7d, only 1 day elapsed
state.last_check = now - timedelta(days = 8)
assert state.should_check("86400s") is True # 1 day in seconds, fallback to 7d, 8 days elapsed
def test_update_check_state_invalid_date(self, state_file:Path) -> None:
"""Test that loading a state file with an invalid date string for last_check returns a new state (triggers ValueError)."""
state_file.write_text(json.dumps({"last_check": "not-a-date"}))
state = UpdateCheckState.load(state_file)
assert state.last_check is None
def test_update_check_state_save_permission_error(self, mocker:"MockerFixture", state_file:Path) -> None:
"""Test that save handles PermissionError from dicts.save_dict."""
state = UpdateCheckState()
state.last_check = datetime.now(timezone.utc)
mocker.patch("kleinanzeigen_bot.utils.dicts.save_dict", side_effect = PermissionError)
# Should not raise
state.save(state_file)
def test_get_release_commit_no_sha(self, config:Config, mocker:"MockerFixture") -> None:
"""Test _get_release_commit with API returning no sha key."""
checker = UpdateChecker(config)
mocker.patch("requests.get", return_value = mocker.Mock(json = dict))
assert checker._get_release_commit("latest") is None
def test_get_release_commit_list_instead_of_dict(self, config:Config, mocker:"MockerFixture") -> None:
"""Test _get_release_commit with API returning a list instead of dict."""
checker = UpdateChecker(config)
mocker.patch("requests.get", return_value = mocker.Mock(json = list))
assert checker._get_release_commit("latest") is None
def test_get_commit_date_no_commit(self, config:Config, mocker:"MockerFixture") -> None:
"""Test _get_commit_date with API returning no commit key."""
checker = UpdateChecker(config)
mocker.patch("requests.get", return_value = mocker.Mock(json = dict))
assert checker._get_commit_date("sha") is None
def test_get_commit_date_no_author(self, config:Config, mocker:"MockerFixture") -> None:
"""Test _get_commit_date with API returning no author key."""
checker = UpdateChecker(config)
mocker.patch("requests.get", return_value = mocker.Mock(json = lambda: {"commit": {}}))
assert checker._get_commit_date("sha") is None
def test_get_commit_date_no_date(self, config:Config, mocker:"MockerFixture") -> None:
"""Test _get_commit_date with API returning no date key."""
checker = UpdateChecker(config)
mocker.patch("requests.get", return_value = mocker.Mock(json = lambda: {"commit": {"author": {}}}))
assert checker._get_commit_date("sha") is None
def test_get_commit_date_list_instead_of_dict(self, config:Config, mocker:"MockerFixture") -> None:
"""Test _get_commit_date with API returning a list instead of dict."""
checker = UpdateChecker(config)
mocker.patch("requests.get", return_value = mocker.Mock(json = list))
assert checker._get_commit_date("sha") is None
def test_check_for_updates_release_commit_exception(self, config:Config, mocker:"MockerFixture") -> None:
"""Test check_for_updates handles exception in _get_release_commit."""
checker = UpdateChecker(config)
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(UpdateChecker, "_get_release_commit", side_effect = Exception("fail"))
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker.check_for_updates() # Should not raise
def test_check_for_updates_commit_date_exception(self, config:Config, mocker:"MockerFixture") -> None:
"""Test check_for_updates handles exception in _get_commit_date."""
checker = UpdateChecker(config)
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(UpdateChecker, "_get_release_commit", return_value = "e7a3d46")
mocker.patch.object(UpdateChecker, "_get_commit_date", side_effect = Exception("fail"))
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker.check_for_updates() # Should not raise
def test_check_for_updates_no_releases_empty(self, config:Config, mocker:"MockerFixture") -> None:
"""Test check_for_updates handles no releases found (API returns empty list)."""
checker = UpdateChecker(config)
mocker.patch("requests.get", return_value = mocker.Mock(json = list))
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker.check_for_updates() # Should not raise
def test_check_for_updates_no_commit_hash_extracted(self, config:Config, mocker:"MockerFixture") -> None:
"""Test check_for_updates handles no commit hash extracted."""
checker = UpdateChecker(config)
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025")
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
checker.check_for_updates() # Should not raise
def test_check_for_updates_no_commit_dates(self, config:Config, mocker:"MockerFixture", caplog:pytest.LogCaptureFixture) -> None:
"""Test check_for_updates logs warning if commit dates cannot be determined."""
caplog.set_level("WARNING", logger = "kleinanzeigen_bot.update_checker")
mocker.patch.object(UpdateChecker, "get_local_version", return_value = "2025+fb00f11")
mocker.patch.object(UpdateChecker, "_get_commit_hash", return_value = "fb00f11")
mocker.patch.object(UpdateChecker, "_get_release_commit", return_value = "e7a3d46")
mocker.patch.object(UpdateChecker, "_get_commit_date", return_value = None)
mocker.patch.object(UpdateCheckState, "should_check", return_value = True)
# Patch requests.get to avoid any real HTTP requests
mocker.patch("requests.get", return_value = mocker.Mock(json = lambda: {"tag_name": "latest", "prerelease": False}))
checker = UpdateChecker(config)
checker.check_for_updates()
assert any("Could not determine commit dates for comparison." in r.getMessage() for r in caplog.records)
def test_update_check_state_version_tracking(self, state_file:Path) -> None:
"""Test that version tracking works correctly."""
# Create a state with version 0 (old format)
state_file.write_text(json.dumps({
"last_check": datetime.now(timezone.utc).isoformat()
}))
# Load the state - should migrate to version 1
state = UpdateCheckState.load(state_file)
assert state.version == 1
# Save the state
state.save(state_file)
# Load again - should keep version 1
state = UpdateCheckState.load(state_file)
assert state.version == 1
def test_update_check_state_migration(self, state_file:Path) -> None:
"""Test that state migration works correctly."""
# Create a state with version 0 (old format)
old_time = datetime.now(timezone.utc)
state_file.write_text(json.dumps({
"last_check": old_time.isoformat()
}))
# Load the state - should migrate to version 1
state = UpdateCheckState.load(state_file)
assert state.version == 1
assert state.last_check == old_time
# Save the state
state.save(state_file)
# Verify the saved file has the new version
with open(state_file, "r", encoding = "utf-8") as f:
data = json.load(f)
assert data["version"] == 1
assert data["last_check"] == old_time.isoformat()
def test_update_check_state_save_errors(self, state_file:Path, mocker:"MockerFixture") -> None:
"""Test that save errors are handled gracefully."""
state = UpdateCheckState()
state.last_check = datetime.now(timezone.utc)
# Test permission error
mocker.patch("kleinanzeigen_bot.utils.dicts.save_dict", side_effect = PermissionError)
state.save(state_file) # Should not raise
# Test other errors
mocker.patch("kleinanzeigen_bot.utils.dicts.save_dict", side_effect = Exception("Test error"))
state.save(state_file) # Should not raise
def test_update_check_state_load_errors(self, state_file:Path) -> None:
"""Test that load errors are handled gracefully."""
# Test invalid JSON
state_file.write_text("invalid json")
state = UpdateCheckState.load(state_file)
assert state.version == 1
assert state.last_check is None
# Test invalid date format
state_file.write_text(json.dumps({
"version": 1,
"last_check": "invalid-date"
}))
state = UpdateCheckState.load(state_file)
assert state.version == 1
assert state.last_check is None
def test_update_check_state_timezone_handling(self, state_file:Path) -> None:
"""Test that timezone handling works correctly."""
# Test loading timestamp without timezone (should assume UTC)
state_file.write_text(json.dumps({
"version": 1,
"last_check": "2024-03-20T12:00:00"
}))
state = UpdateCheckState.load(state_file)
assert state.last_check is not None
assert state.last_check.tzinfo == timezone.utc
assert state.last_check.hour == 12
# Test loading timestamp with different timezone (should convert to UTC)
state_file.write_text(json.dumps({
"version": 1,
"last_check": "2024-03-20T12:00:00+02:00" # 2 hours ahead of UTC
}))
state = UpdateCheckState.load(state_file)
assert state.last_check is not None
assert state.last_check.tzinfo == timezone.utc
assert state.last_check.hour == 10 # Converted to UTC
# Test saving timestamp (should always be in UTC)
state = UpdateCheckState()
state.last_check = datetime(2024, 3, 20, 12, 0, tzinfo = timezone(timedelta(hours = 2)))
state.save(state_file)
with open(state_file, "r", encoding = "utf-8") as f:
data = json.load(f)
assert data["last_check"] == "2024-03-20T10:00:00+00:00" # Converted to UTC
def test_update_check_state_missing_file(self, state_file:Path) -> None:
"""Test that loading a missing state file returns a new state and should_check returns True."""
# Ensure the file doesn't exist
if state_file.exists():
state_file.unlink()
# Load state from non-existent file
state = UpdateCheckState.load(state_file)
assert state.last_check is None
assert state.version == 1
# Verify should_check returns True for any interval
assert state.should_check("7d") is True
assert state.should_check("1d") is True
assert state.should_check("4w") is True
# No longer check _time_since_last_check (method removed)
def test_should_check_fallback_to_default_interval(self, caplog:pytest.LogCaptureFixture) -> None:
"""Test that should_check falls back to default interval and logs a warning for invalid/too short/too long/zero intervals and unsupported units."""
state = UpdateCheckState()
now = datetime.now(timezone.utc)
state.last_check = now - timedelta(days = 2)
# Invalid format (unsupported unit)
caplog.clear()
assert state.should_check("notaninterval", channel = "latest") is False # 2 days since last check, default 7d
assert any("Invalid interval format or unsupported unit" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 7d" in r.getMessage() for r in caplog.records)
caplog.clear()
assert state.should_check("notaninterval", channel = "preview") is True # 2 days since last check, default 1d
assert any("Invalid interval format or unsupported unit" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 1d" in r.getMessage() for r in caplog.records)
# Explicit zero interval
for zero in ["0d", "0h", "0m", "0s", "0"]:
caplog.clear()
assert state.should_check(zero, channel = "latest") is False
assert any("Interval is zero" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 7d" in r.getMessage() for r in caplog.records)
caplog.clear()
assert state.should_check(zero, channel = "preview") is True
assert any("Interval is zero" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 1d" in r.getMessage() for r in caplog.records)
# Too short
caplog.clear()
assert state.should_check("12h", channel = "latest") is False # 2 days since last check, default 7d
assert any("Interval too short" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 7d" in r.getMessage() for r in caplog.records)
caplog.clear()
assert state.should_check("12h", channel = "preview") is True # 2 days since last check, default 1d
assert any("Interval too short" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 1d" in r.getMessage() for r in caplog.records)
# Too long
caplog.clear()
assert state.should_check("60d", channel = "latest") is False # 2 days since last check, default 7d
assert any("Interval too long" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 7d" in r.getMessage() for r in caplog.records)
caplog.clear()
assert state.should_check("60d", channel = "preview") is True # 2 days since last check, default 1d
assert any("Interval too long" in r.getMessage() for r in caplog.records)
assert any("Falling back to default interval: 1d" in r.getMessage() for r in caplog.records)
# Valid interval, no fallback
caplog.clear()
assert state.should_check("7d", channel = "latest") is False
assert not any("Falling back to default interval" in r.getMessage() for r in caplog.records)
caplog.clear()
assert state.should_check("1d", channel = "preview") is True
assert not any("Falling back to default interval" in r.getMessage() for r in caplog.records)