Compare commits


1424 Commits

Author SHA1 Message Date
785c987ba6 Run tests in parallel
2025-06-18 21:08:27 -07:00
8ecf70bca2 Require pyicu 2025-06-18 21:08:27 -07:00
eda794ac09 Bump comicinfoxml 2025-06-18 21:08:27 -07:00
c36e4703d0 Use zipremove 2025-06-18 21:08:27 -07:00
818c3768ad Fix isort 2025-06-18 21:08:27 -07:00
5100c9640e Fix 7z 2025-06-18 21:08:27 -07:00
0a0c8f32fe Update build for linux arm64 release 2025-06-18 21:08:27 -07:00
f4e2b5305c Fix enabling original hash widgets 2025-06-18 21:08:27 -07:00
11e2dea0b1 Test python 3.9 and 3.13 publish 3.13 binaries 2025-06-18 21:08:27 -07:00
0c28572fbc Download appimage for the current platform 2025-06-18 21:08:27 -07:00
653e792bfd Switch to PyQt6 2025-06-18 17:24:37 -07:00
94f325a088 Fix error when parsing metadata from the CLI 2025-05-24 11:49:56 -07:00
ebd7fae059 Fix setting the issue to "1" when not searching online 2025-05-24 11:49:38 -07:00
12f1d11ee8 Merge branch 'mizaki/issue_hash_cover' into develop 2025-05-05 00:20:57 -07:00
3d47e6b3b6 Make perception hash more efficient 2025-05-04 17:28:52 -07:00
0f1239f603 Remove probably unnecessary waits in rar code for macOS 2025-05-04 17:28:03 -07:00
66cc901027 Fix python 3.12 deprecation 2025-05-04 15:49:48 -07:00
ca969e12a7 Update quick tag for new api 2025-05-04 15:40:34 -07:00
039fd4598d Remove unnecessary log output 2025-05-04 15:32:45 -07:00
f1b729129e Fix mypy types 2025-05-04 15:32:26 -07:00
0a7bb4d93d Fix ratelimit on direct series/issue lookups 2025-05-04 15:32:00 -07:00
3c062a1cd3 Alter invalid hash test from hash value to kind value 2025-05-04 22:32:09 +01:00
bcc677ab12 Use empty string Kind instead of Hash != 0 for hash checking. Remove redundant or for HashImage.URL value 2025-05-03 22:07:28 +01:00
77ddbf5baa pre-sort filenames fixes #705
Provides consistent ordering for numbers in names
2025-05-02 20:02:24 -07:00
71b32f6702 Update AUTHORS 2025-05-03 02:55:58 +00:00
32dd3a253f docs(contributor): contrib-readme-action has updated readme 2025-05-03 02:55:54 +00:00
dfaa2cc11d Reduce number of requests for quick_tag 2025-05-02 14:33:00 -07:00
2106883c67 Improve ComicCacher performance 2025-05-02 14:12:25 -07:00
3ebc11d95e Merge branch 'emmanuel-ferdman/develop' into develop 2025-05-02 13:50:32 -07:00
c9e368bf3f Speedup ComicArchive access fixes #728
Fix invalid zip test
Remove the check on each file inside the zip; invalid zip files may still be opened, but that doesn't really matter in this case
Cache reading the filename list
Add a list of supported extensions to check first for an archiver
Remove unnecessary calls to rar executable
Fix limiter on integration test
Remove excess processEvents calls
Fix unnecessary calls when inserting into the FileSelectionList
2025-05-02 13:42:01 -07:00
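A minimal sketch of the filename-list caching idea behind this commit, assuming a plain zipfile-based archiver (the class and method names here are illustrative, not ComicTagger's actual Archiver API):

```python
from __future__ import annotations

import zipfile
from functools import cached_property


class CachedZipArchive:
    """Illustrative archiver wrapper that reads the name list only once."""

    def __init__(self, path: str) -> None:
        self.path = path

    @cached_property
    def namelist(self) -> list[str]:
        # Opened once; every later lookup reuses the cached list instead of
        # re-scanning the archive.
        with zipfile.ZipFile(self.path) as zf:
            return zf.namelist()

    def has_file(self, name: str) -> bool:
        return name in self.namelist
```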
2f64154cd2 Update to latest version of settngs 2025-05-01 18:18:50 -07:00
165388ce1b Show more options to the user if there are multiple bad matches
Fix some error cases in the comicvine talker
Remove leftover pprint statement
2025-04-30 17:32:05 -07:00
fb629891ba Sort files before processing 2025-04-30 17:27:25 -07:00
f0c644f5ec Fix flake8 error 2025-04-30 17:26:56 -07:00
5ee31f45a8 Fix performance when removing tags from cbz files 2025-04-30 17:26:36 -07:00
bfd9fe89dc Update quick-tag for new api 2025-04-25 13:45:28 -07:00
d65ce48882 Resolve bs4 deprecation warnings
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-04-23 23:03:46 -07:00
75bba1814c Remove rapidfuzz and use stdlib difflib
Results are on-par (90% the same) and this removes a dependency
2025-04-23 18:57:28 -07:00
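A small sketch of the stdlib replacement, assuming the matcher only needs a rough similarity ratio between series names (the function name is illustrative):

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    # Returns a ratio in [0, 1]; casefold makes the comparison case-insensitive.
    return SequenceMatcher(None, a.casefold(), b.casefold()).ratio()


print(similarity("Batman Adventures", "The Batman Adventures"))  # ~0.89
```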
146f160802 Fix tag selection 2025-04-22 21:08:10 -07:00
ad26ee7818 Fix deprecation warning 2025-04-22 21:08:10 -07:00
b5eba8d715 Fix difference_hash 2025-04-22 21:08:10 -07:00
d4bdefa9c1 Simplify zip 2025-04-22 21:04:02 -07:00
506fac03c7 Use ImageHash solely 2025-04-17 23:48:53 +01:00
343be3b973 Upgrade pre-commit 2025-04-13 13:48:42 -07:00
3c6321faa0 Fix assertion about image pixels 2025-04-12 21:15:49 -07:00
161f2ae985 Add all pillow extensions to recognized image extensions Fixes #752 2025-04-12 14:05:07 -07:00
2a8a3ab0c8 Update AUTHORS 2025-04-05 19:10:05 +00:00
65ae288018 docs(contributor): contrib-readme-action has updated readme 2025-04-05 19:10:02 +00:00
1641182ec0 Merge branch 'N-Hertstein/develop' into develop 2025-04-05 12:09:11 -07:00
HSN 2fafd1b064 Fallback to C only and add Logging
Skip falling back to en_US and go straight to C as it is always available.
Add error logging.
2025-04-04 20:03:18 +02:00
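A hedged sketch of the fallback chain described in these locale commits: catch the failure, log it, and go straight to the C locale, which is always available.

```python
import locale
import logging

logger = logging.getLogger(__name__)

try:
    locale.setlocale(locale.LC_ALL, "")
except locale.Error:
    # Misconfigured or minimal system: fall back to C rather than en_US.
    logger.exception("System locale is misconfigured, falling back to C")
    locale.setlocale(locale.LC_ALL, "C")
```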
HSN 827b7a2173 Remove .UTF-8 from fallback language options
Modify fallback languages from en_US.UTF-8 & C.UTF-8 to en_US & C to avoid errors when UTF-8 is not available.
2025-03-30 12:06:33 +02:00
8aa422fd66 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2025-03-29 07:07:43 +00:00
HSN 7e3824c769 Change ' to " because test error... 2025-03-29 08:05:15 +01:00
4f8d4803e1 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2025-03-29 07:00:08 +00:00
HSN b482b88c37 Add error catching to locale.getlocale()
Add error handling and fallbacks to en_US or C as the language locale in case of a misconfigured or minimal system.
2025-03-29 07:54:44 +01:00
bd6afb60ba Revert "Add Linux aarch64 runner"
This reverts commit 95c85e906d.
2025-03-22 20:28:02 -07:00
a87368bd09 Fix #741 2025-03-22 20:21:07 -07:00
95c85e906d Add Linux aarch64 runner 2025-03-22 20:15:09 -07:00
3965bfe082 Merge branch 'mizaki/qtutils_image_exception' into develop 2025-03-22 20:02:43 -07:00
25eabaf841 Fix windows being an inferior OS 2025-03-18 21:50:00 -07:00
d6d7e0ec65 Add macOS arm64 build to package.yaml 2025-03-18 21:28:50 -07:00
3b5e9d8f95 Log a warning the first time we can't find rar for writing 2025-03-18 21:25:04 -07:00
3dad7c18f8 Fix removing disabled tags 2025-03-18 21:24:08 -07:00
ea945a6b2a Don't allow cr tags to be disabled if it's the only tags available 2025-03-18 19:57:06 -07:00
575d36b67f Update typing 2025-03-18 19:55:28 -07:00
6a9d4bf648 Update sys.path handling 2025-03-16 14:06:49 -07:00
719eedb8b5 Fix #739 2025-03-16 13:52:45 -07:00
ba2d823993 Exit early if 0 bytes image data 2025-03-04 22:32:29 +00:00
36f5c72a65 Update issue templates 2025-03-02 15:20:46 -08:00
60a2c6168b Fix uploading multiple artifacts 2025-03-02 15:01:20 -08:00
f008763361 Add macos-14 for Apple Silicon binaries. Thanks to @pa-0 for testing 2025-03-02 14:35:17 -08:00
400092dd84 Notify user when no tags are enabled 2025-03-02 13:34:23 -08:00
c5c59f2c76 Merge branch 'original_hash' into develop 2025-03-02 12:46:56 -08:00
c8888cdbad Mark the checksum with the "sum:" prefix in the ScanInformation field 2025-03-02 12:44:02 -08:00
5b204501f3 Update pre-commit 2025-03-02 12:36:23 -08:00
5d96bdfda5 Allow printing combined CLI tags 2025-03-02 12:32:40 -08:00
803768b33a Allow recording the original hash 2025-03-02 12:32:40 -08:00
cf3009ca02 Report image_data size in exception message 2025-02-28 17:31:20 +00:00
a0be90bbf5 Add URL to ImageHash and use in issue window 2025-02-28 16:55:56 +00:00
14213dd245 Change failed image loading from logger exception to warning 2025-02-28 14:10:01 +00:00
8837fea957 Merge branch 'mizaki/image_urls_hashes' into develop 2025-02-26 21:28:39 -08:00
085b599bc4 Parametrise cover match test and add ImageHash data 2025-02-23 18:11:40 -08:00
d2499f6bae Add ImageHash support for alternate_urls 2025-02-23 18:11:40 -08:00
c3f5badc7d Use source hashes for cover matching 2025-02-11 01:03:12 +00:00
7e22b4cc22 Update AUTHORS 2025-02-09 01:05:20 +00:00
f9a39aa183 docs(contributor): contrib-readme-action has updated readme 2025-02-09 01:05:17 +00:00
cadac0a79e Merge branch 'mizaki/auto_summary_attrib' into develop 2025-02-08 17:03:10 -08:00
7589dca948 Update readme with winget info 2025-02-08 16:58:59 -08:00
ea37f96abd Merge branch 'kcgthb/online_results_info' into develop 2025-02-08 15:43:18 -08:00
8847518818 Add source info to auto-tag summary window 2025-02-08 22:40:39 +00:00
fbaec93d7d Update comictaggerlib/cli.py
Make sure `results` exists before checking for `online_results`

Co-authored-by: Timmy Welch <timmy@narnian.us>
2025-02-02 11:38:51 -08:00
5ee467465a Fix a cache miss when retrieving multiple issues 2025-01-30 01:39:26 -08:00
7480e28eac only display metadata source info if results were found, to avoid confusion 2025-01-27 17:48:19 -08:00
7998944a71 Import pillow plugins 2025-01-21 19:23:14 -08:00
280606ae11 Remove dependency on Pillow <10 2025-01-21 19:16:01 -08:00
c9de8370c2 Merge branch 'mizaki/gmd_lang_iso' into develop 2025-01-10 16:59:54 -08:00
8de35bdfa1 Fix default dict creating unnecessary keys 2025-01-10 16:25:10 -08:00
5f8a6b25c1 Fix -1 not being false for credit language combobox 2025-01-10 23:46:45 +00:00
01d7612a58 Pass credit language ISO using the widget.data to respect the metadata credit requiring an ISO string. If the string fails to match an ISO, use the raw text. 2025-01-10 00:19:35 +00:00
e8e21eb1b6 Fix tests not being excluded in wheel 2025-01-05 18:41:15 -08:00
8fbb40bb76 Fix language and countries getting modified 2024-12-16 19:13:56 -08:00
04075cc20e Fix credit handling in GUI 2024-12-16 19:12:25 -08:00
92ce2987ea Regenerate settngs 2024-12-07 15:30:44 -08:00
c282ebf845 Switch ubuntu runner to 22.04 and macos to 13 2024-12-07 14:41:22 -08:00
38932f0782 Add language to ComicTagger 2024-12-06 23:18:45 -08:00
bf0a46055a Fix parsing ' in filenames
Fixes #672
2024-12-06 23:18:45 -08:00
0fa329ca75 Add language to Credit in ComicAPI 2024-12-06 23:09:25 -08:00
577e99ae39 Print CLI tags when using the print command 2024-12-06 23:02:10 -08:00
5df9359151 Merge branch 'mizaki/write_md_merge' into develop 2024-10-19 14:00:46 -07:00
119a0881e0 Merge branch 'mizaki/fix_readtags' into develop 2024-10-19 13:59:59 -07:00
f4f732b742 Fix accidental re-ordering of pages when pages.image_index is disabled on a metadata type 2024-10-19 13:58:19 -07:00
a8f269aefa Fix export to CBZ 2024-10-19 10:37:27 -07:00
6930f0cb74 Fix switching unclean read tags 2024-10-17 21:44:09 +01:00
170476a705 Preserve hidden metadata values when reading from GUI form 2024-10-15 13:06:19 +01:00
7448e9828b Sort pages in archive order before writing CR metadata 2024-10-14 16:54:13 -07:00
6d20fe348f Update pre-commit 2024-10-11 21:07:17 -07:00
5b02358bf1 Fix all inputs being disabled when an invalid tag is loaded from settings 2024-10-11 21:04:48 -07:00
78df903de7 pre-commit 2024-09-27 15:08:00 -07:00
4cd70670cc Allow custom parameters in comicvine url 2024-09-27 15:06:26 -07:00
dcb532d7c9 Add Image Comics to publishers.json 2024-09-27 14:45:33 -07:00
5820c36ea5 Fix CV error handling 2024-09-27 14:39:25 -07:00
c0db1e52ae Make cleanup_html produce text that is more compliant with markdown 2024-09-22 16:26:15 -07:00
e46656323c Fix clearing invalid tags 2024-09-22 16:24:28 -07:00
e96de650bf Fix label names for standard location links 2024-09-21 17:06:17 -07:00
b421a0edaa Add links to standard locations 2024-09-21 15:57:09 -07:00
a9fdafdb93 Format message better 2024-09-21 15:39:37 -07:00
a4a6d54d7e Merge branch 'cv-cache' into develop 2024-09-20 15:02:20 -07:00
9358431146 Add a notice about Metron/GCD changes on PyInstaller builds 2024-09-20 14:45:06 -07:00
a60eda1602 Typo 2024-09-20 13:57:14 -07:00
c796ad7c7a Enable debug logging for pyrate-limiter 2024-09-20 13:52:25 -07:00
63718882a5 Update pyinstaller package to not include metron or gcd by default
This makes it so that users using pyinstaller can update metron and gcd without waiting for a new ComicTagger release
2024-09-19 19:23:41 -07:00
89dfec2363 ComicVine improvements
Add more logging
Add a 10 second timeout to all requests
Log unhandled exceptions
2024-09-19 19:13:03 -07:00
39a4a37d7c Add tests 2024-09-19 19:03:30 -07:00
25e5134577 Cache more ComicVine lookups 2024-09-19 17:31:06 -07:00
a7f1d566ab Merge branch 'plugin-isolation' into develop 2024-09-19 16:26:35 -07:00
234d9e49fe Fix test 2024-09-17 15:32:01 -07:00
6ea9230382 Allow .whl files 2024-09-17 14:27:01 -07:00
1803a37591 Handle None values when doing conversions and catch indexing errors 2024-09-17 09:20:11 -07:00
c50de9bed7 Fix plugin folder 2024-09-16 16:52:51 -07:00
6a97ace933 Only support zip local plugins 2024-09-16 16:46:42 -07:00
f56d58bf45 Fix reading plugin files 2024-09-16 16:13:11 -07:00
4c9096a11b Implement the most basic local plugin isolation possible
Remove modules belonging to local plugins after loading
Remove sys.path entry after loading

This means that multiple local plugins can be installed with the same import path and should work correctly
This does not allow loading a local plugin that has the same import path as an installed plugin
2024-09-15 17:09:33 -07:00
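An illustrative sketch of the isolation approach this commit describes (function and parameter names are assumptions): import a local plugin, then forget its modules and its sys.path entry so another plugin with the same import path can still be loaded.

```python
import importlib
import sys


def load_local_plugin(plugin_dir: str, module_name: str):
    sys.path.insert(0, plugin_dir)
    before = set(sys.modules)
    try:
        return importlib.import_module(module_name)
    finally:
        # Remove every module this plugin brought into sys.modules...
        for name in set(sys.modules) - before:
            del sys.modules[name]
        # ...and the temporary sys.path entry added for it.
        sys.path.remove(plugin_dir)
```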
c9c0c99a2a Increase rate limits on CV to cover the 200 requests/Hr restriction
Add twitter's alternative to HTTP code 429
2024-09-12 13:56:57 -07:00
58f71cf6d9 Remove archived tags from tests 2024-09-12 13:17:06 -07:00
befffc98b1 Catch all exceptions when parsing metadata from the CLI 2024-09-12 13:11:30 -07:00
006f3cbd1f Remove comet and cbl tags 2024-09-12 12:09:07 -07:00
582224abec Fixes for quick-tag 2024-09-12 11:51:38 -07:00
acb59f9e83 Fix saving settings 2024-08-24 12:19:11 -07:00
fab30f3f29 Add experimental quick-tag 2024-08-18 19:16:55 -07:00
2cb6caea8d Ignore update with incomplete data when complete data is already cached 2024-08-16 17:05:28 -07:00
ffdf7d71e1 Fix tests 2024-08-16 12:50:14 -07:00
db3d5d6a01 Merge branch 'jxl' into develop 2024-08-09 16:34:25 -07:00
8709ef301d Fix failing test 2024-08-03 23:11:31 -07:00
b8728c5eed Improve performance when re-tagging file based tags in zip archives 2024-08-03 14:41:04 -07:00
0ba81f9f86 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-08-03 21:27:25 +00:00
8c85a60f67 Add pillow-jxl-plugin as an optional dependency 2024-08-03 14:15:00 -07:00
d089c4bb6a Merge branch 'mizaki/modify_cb_delegate' into develop 2024-08-03 14:04:49 -07:00
8ace830d5e Remove double import 2024-07-31 22:13:15 +01:00
893728cbef Merge remote-tracking branch 'origin/pre-commit-ci-update-config' into develop 2024-07-30 18:35:06 -07:00
d4a90e8934 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-07-29 17:21:53 +00:00
a529b14459 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/asottile/pyupgrade: v3.16.0 → v3.17.0](https://github.com/asottile/pyupgrade/compare/v3.16.0...v3.17.0)
2024-07-29 17:21:31 +00:00
3227105558 Merge branch 'pre-commit-ci-update-config' into develop 2024-07-27 19:40:22 -07:00
d62dff49b4 Fix overlay tests 2024-07-27 19:39:15 -07:00
2d4d10e31d Add comment on a python oddity 2024-07-27 19:26:09 -07:00
0048901a61 Remove unused attributes 2024-07-27 19:23:37 -07:00
a7a9d38428 Make ImageMetadata a dataclass 2024-07-27 19:23:37 -07:00
219ede2d5d Improve StrEnum
Return the actual string for __str__
Allow case insensitive conversion
2024-07-27 16:45:22 -07:00
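A minimal sketch of the two behaviours named in this commit, built on str + Enum for Python 3.9 compatibility; the example members are illustrative.

```python
from enum import Enum


class StrEnum(str, Enum):
    def __str__(self) -> str:
        # Return the actual string value rather than "ClassName.MEMBER".
        return self.value

    @classmethod
    def _missing_(cls, value):
        # Case insensitive conversion: Hash("PHash") -> Hash.PHASH.
        if isinstance(value, str):
            for member in cls:
                if member.value.casefold() == value.casefold():
                    return member
        return None


class Hash(StrEnum):
    AHASH = "ahash"
    PHASH = "phash"


assert str(Hash.AHASH) == "ahash"
assert Hash("PHash") is Hash.PHASH
```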
e96cb8ad15 Add button to autodetect double pages
A page is marked as a double page if it is at least as wide as it is tall.

Closes: #578
Co-authored-by: Sven Hesse <drmccoy@drmccoy.de>
2024-07-27 16:39:34 -07:00
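A sketch of the heuristic stated above, assuming Pillow is used to read the page dimensions:

```python
import io

from PIL import Image


def is_double_page(image_data: bytes) -> bool:
    # A page at least as wide as it is tall is treated as a double page.
    with Image.open(io.BytesIO(image_data)) as im:
        width, height = im.size
    return width >= height
```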
0a4aef1a1b Add back apply_archive_info_to_metadata when writing tags 2024-07-27 16:24:29 -07:00
63832606b1 Add ability to auto-detect double pages
Co-authored-by: Sven Hesse <drmccoy@drmccoy.de>
2024-07-27 16:24:29 -07:00
f10ceb3216 Fix duplicate items in credits and pages when merging metadata 2024-07-27 15:45:03 -07:00
d8adbbecdd Fix inadequate checks on page attributes 2024-07-27 15:43:38 -07:00
f043da6b62 Enable navigation with left and right arrow keys in the page browser 2024-07-27 15:42:20 -07:00
77e551e582 Add Auto-Tag back to the toolbar 2024-07-27 15:40:48 -07:00
9d389970b8 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/pre-commit/mirrors-mypy: v1.10.0 → v1.11.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.10.0...v1.11.0)
2024-07-22 17:19:27 +00:00
a44e037311 Use custom delegate to unify combobox item style 2024-07-06 22:50:49 +01:00
cc50e373dc Fix missing / in glob 2024-06-30 20:25:18 -07:00
9350a07f50 Enable support for the plaintext keyring 2024-06-30 20:03:47 -07:00
6325a2a707 Pass ACTIONS_* variables because github can't be consistent 2024-06-30 19:36:25 -07:00
ea96c44d84 Pass github actions environment variables 2024-06-30 19:06:15 -07:00
4c8a4dcbd3 Make python 3.9 compatible 2024-06-29 21:04:32 -07:00
bd53678442 Copy oidc-exchange.py from pypa/gh-action-pypi-publish 2024-06-29 20:51:27 -07:00
c370baa6a2 re-add pyinstaller to release 2024-06-29 19:22:33 -07:00
45c604b332 Remove source tar.gz from github release 2024-06-29 19:03:36 -07:00
64db58ed3d Fix dmg creation 2024-06-29 18:54:13 -07:00
ab8f4a3702 Merge branch 'mizaki/placeholder_text' into develop 2024-06-29 18:44:14 -07:00
c28dc19df6 Improve filename parsing 2024-06-29 18:43:40 -07:00
56d8c507e2 Use a directory that isn't deleted 2024-06-29 17:15:13 -07:00
10a1554e73 Fix release again
Place binaries in dist/binary to make pypa/gh-action-pypi-publish happy
Don't run the formatter and qrc generator during release as it causes issues with setuptools_scm
2024-06-29 16:04:27 -07:00
c8017c4269 Fix release (maybe) 2024-06-23 19:22:00 -07:00
3cb4dca63f Limit PyPI publishing to linux 2024-06-23 18:44:53 -07:00
beeb6336e9 Fix default value of --skip-existing-tags 2024-06-23 15:20:51 -07:00
8cb1140614 Fix rename of read_all_tags Fixes #659
Fix --skip-existing-tags Fixes #658
2024-06-23 15:09:11 -07:00
f243e8c39e Fix publishing to PyPI 2024-06-23 15:09:11 -07:00
890750819a Fix combobox placeholder text not showing when using pip PyQt5 with pip wheels on Windows or Linux 2024-06-23 18:25:26 +01:00
20806f95a2 Remove lint from release code 2024-06-23 01:33:48 -07:00
13646a306d Sync macos dependency code 2024-06-23 01:12:26 -07:00
3082aae124 bump MacOS version 2024-06-23 00:38:17 -07:00
76a92c8431 Fix test 2024-06-23 00:04:33 -07:00
385a46fc16 Simplify regexes and use logger.warning 2024-06-22 20:41:15 -07:00
e452fa153b Fix issues from static analysis 2024-06-22 20:21:01 -07:00
3fd1c13ecb Fixes for metadata parsing and printing 2024-06-22 20:19:02 -07:00
76f23d4a02 Fix tags in GUI 2024-06-22 19:15:57 -07:00
5f1ddee7ce Update build system 2024-06-22 18:22:28 -07:00
9803c9ad09 Fix Remove HTML tables checkbox 2024-06-22 14:12:18 -07:00
42448fa250 Update settngs
Fix renamed settings attributes
Add --parse-filename back
Fix conversions in filerenamer
2024-06-21 21:01:11 -07:00
6b0dca2f51 Remove unnecessary issueidentifier methods 2024-06-21 20:07:55 -07:00
6ab3a89a35 Improvements to filerenamer and filename parsing 2024-06-21 20:07:07 -07:00
3389c72a63 Merge branch 'help-messages' into develop 2024-06-21 19:53:30 -07:00
59aae5b122 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/asottile/pyupgrade: v3.15.2 → v3.16.0](https://github.com/asottile/pyupgrade/compare/v3.15.2...v3.16.0)
- [github.com/PyCQA/flake8: 7.0.0 → 7.1.0](https://github.com/PyCQA/flake8/compare/7.0.0...7.1.0)
2024-06-21 19:31:52 -07:00
063b04c543 Add tooltips for clearing tags and applying CBL transforms 2024-06-21 19:18:52 -07:00
77d340d04d Set buddies 2024-06-21 19:18:52 -07:00
69a9566f42 Update all references of saved 'metadata' to 'tags' 2024-06-20 16:47:10 -07:00
24002c66e7 Move action definitions into ui file 2024-06-14 15:35:01 -07:00
bf87a76fdf Make the splitter visible 2024-06-09 18:19:51 -07:00
d0312e050b Fix page handling 2024-06-09 13:40:42 -07:00
44b4857fc3 Remove unneeded checks in _enable_widget 2024-06-09 13:18:27 -07:00
6132af3bb5 Support niquests 2024-06-09 13:09:26 -07:00
c91c7edd73 Re-generate SettngsNS 2024-06-09 12:55:12 -07:00
6f9fbc73d8 Fix showing a fullscreen page on double click on MacOS 2024-06-09 12:54:31 -07:00
888720b544 Allow blurring images fixes #637
so people don't accidentally read the entire comic when editing the metadata
2024-06-09 12:52:55 -07:00
5e6682566f Allow results to include comics in the following year fixes #638 2024-06-08 19:17:42 -07:00
6351afb36c Add an option to prefer filename metadata on the CLI fixes #630 2024-06-08 19:12:38 -07:00
898ccef5c0 Set the working directory for rar commands 2024-06-08 15:00:25 -07:00
0198eb9e2b Fix saving merge settings 2024-06-06 20:41:21 -07:00
0457e19913 Merge descriptions 2024-06-06 19:01:41 -07:00
e5925b8ebc Merge branch 'mizaki/add_table_html_gui' into develop 2024-06-03 16:17:19 -07:00
710760dc91 Show the located rar exe 2024-06-03 16:08:09 -07:00
5d7e348a0e Fix remove tags menu option Fixes #650 2024-06-03 13:06:49 -07:00
979a54e2b8 Fix lexing a dot '.' as a symbol
Fixes #652
2024-06-03 13:06:49 -07:00
afc0aa4a78 missed rename 2024-06-02 00:16:55 +01:00
a552f05b23 Add remove HTML tables back 2024-06-01 19:40:13 +01:00
7bbc3f3e2c Merge branch 'mizaki/cli_interactive_fix' into develop 2024-05-31 18:19:50 -07:00
a4941a93f0 Use combined md with -i on CLI 2024-06-01 00:45:59 +01:00
d82cd95849 Fix typo in protofolius_issue_number_scheme
Fixes #648
2024-05-26 13:22:55 -07:00
5010ca60e9 Remove reduce_widget_font_size 2024-05-21 20:32:01 -07:00
419461c905 Improve Merge descriptions in settings window 2024-05-21 20:29:19 -07:00
32b570ee5b Improve help messages
Include default values
2024-05-21 19:57:47 -07:00
9849b5f6f9 Note newline delimited fields 2024-05-21 19:57:47 -07:00
706c46f2bb Fix the prompt on save button in settings 2024-05-21 19:57:47 -07:00
d1986a5d53 Update settings window 2024-05-21 19:57:47 -07:00
e864e2db48 Re-arrange settings 2024-05-21 19:57:47 -07:00
af9c8afad7 Update search/identify help message 2024-05-21 19:57:47 -07:00
4e5d8885c6 Improve help messages 2024-05-21 19:57:47 -07:00
215a4680f4 Merge branch 'mizaki/fix_autotag_overlaystyles' into develop 2024-05-21 18:28:20 -07:00
f712952b87 Fix typing issues 2024-05-21 18:22:30 -07:00
14f2599ba1 fix auto tag window 2024-05-21 23:48:46 +01:00
2897611006 Fix defaults for arguments
Bump settngs
2024-05-19 14:17:07 -07:00
250d777159 Remove combine overlay. Alter help messages in settings window and add lists message 2024-05-11 22:25:46 +01:00
6c3b63abd9 Add option for merge lists and fix saving overlay options in settings window 2024-05-11 22:08:49 +01:00
bada694fd4 Rebase corrections 2024-05-11 16:44:44 +01:00
a40438d38c Separate list merge into a separate option (lordwelch) 2024-05-11 16:42:24 +01:00
3d443e0908 lordwelch rewrite 2024-05-11 02:04:43 +01:00
b761763c4c Rename CBL option to Metadata 2024-05-11 02:02:01 +01:00
71b79bdc91 Move some overlay test data to testing/comicdata.py 2024-05-11 02:02:01 +01:00
2faac18597 norm_fold out of loop for add_credit. Explicit overlay mode for CLI metadata. 2024-05-11 02:02:01 +01:00
e9a592df50 GUI overlay settings moved to internal namespace and CLI args added 2024-05-11 02:02:01 +01:00
94b94b76dc Change settings menu overlay descriptions 2024-05-11 02:02:01 +01:00
62240bf2f4 Add OverlayMode options for read style and data source 2024-05-11 02:02:01 +01:00
ffb4efbcd7 GUI overlay settings moved to internal namespace and CLI args added 2024-05-11 02:01:59 +01:00
b2f95faac4 Change settings menu overlay descriptions 2024-05-11 01:56:10 +01:00
93be16f7eb Remove data to test empty string->None for series and issue as an empty string will never make it to genericmetadata now 2024-05-11 01:56:10 +01:00
8b0683f67c Add OverlayMode options for read style and data source 2024-05-11 01:56:06 +01:00
851339d4e3 Merge branch 'mizaki/multi_read' into develop 2024-05-10 16:25:07 -07:00
5cf54ab511 Reverse load styles only in taggerwindow and comment reverse 2024-05-11 00:11:08 +01:00
384ac5e33a Don't save in priority order 2024-05-10 19:23:27 +01:00
7271caccc9 Size combobox dropdown with extra space for move item arrows. Add same sizeHint as QComboBox for unified height 2024-05-10 17:14:39 +01:00
0c9e846bfb Force MacOS to use CE_CheckBox 2024-05-09 20:54:12 +01:00
a2a57b6da0 Add tooltip support for items (and arrows). Simplify and measure arrow images 2024-05-09 01:58:43 +01:00
35ec334c28 Merge pull request #640 from comictagger/pre-commit-ci-update-config
[pre-commit.ci] pre-commit autoupdate
2024-05-07 17:35:25 -07:00
7383b18924 Warn on read style failure in rename window 2024-05-07 22:08:27 +01:00
e0f1f7c356 Rename ItemDelegate. Remove table checkbox 2024-05-06 20:53:00 +01:00
6b8b961ff7 Report and/or log overlay tag style read errors 2024-05-06 19:40:38 +01:00
4c6a1d3215 Use custom combobox with item delegate 2024-05-06 16:33:30 +01:00
64dbf9e981 Add -t to --type-read and duplicate read styles to modify styles on the CLI if modify is empty 2024-05-04 21:14:00 +01:00
27e3803414 Reverse read styles on load. Missed conversion to overlay_ca_read_style 2024-05-04 20:59:18 +01:00
591b6bcc44 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/psf/black: 24.4.0 → 24.4.2](https://github.com/psf/black/compare/24.4.0...24.4.2)
- [github.com/pre-commit/mirrors-mypy: v1.9.0 → v1.10.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.9.0...v1.10.0)
2024-04-29 17:22:16 +00:00
6ac2e32612 Parse numeric characters as numbers fixes #639 2024-04-29 10:20:43 -07:00
887c383229 Fix an infinite loop issue parsing numbers outside of 0-9 fixes #639 2024-04-29 10:20:25 -07:00
64c909facb Have last overlayed style labelled as 1 (human logical) 2024-04-29 00:55:54 +01:00
23ceda33bd fix self.load_data_styles name 2024-04-29 00:55:54 +01:00
7e63070f13 Change ComicArchive type to set from list 2024-04-29 00:55:51 +01:00
247ee01d6e Copy tags will use the overlayed result of all read styles 2024-04-29 00:53:01 +01:00
f61b91acd6 Revert and change multi-read styles if dirty 2024-04-29 00:53:01 +01:00
6951113717 Change load_cache calls from load and saves style combined list to combined set 2024-04-29 00:53:01 +01:00
73269c7c9d Add overlay_ca_read_style method to prevent duplicated code 2024-04-29 00:52:59 +01:00
f00cd1568c Clear cache on autotag rather than reloading 2024-04-29 00:51:44 +01:00
f9d79ead9d Remove answered comment 2024-04-29 00:51:44 +01:00
c01d6aaa3a Add up and down png 2024-04-29 00:51:44 +01:00
dd8767ad81 Revert copy tag status tip 2024-04-29 00:51:44 +01:00
0bbdaa96cf Split command line `--type` arg into --type-modify for modify styles and --type-read for read styles 2024-04-29 00:51:40 +01:00
96bbbe51e7 More load_data_styles to list fixes 2024-04-29 00:46:03 +01:00
16088aec72 Convert missed self.load_data_styles to list 2024-04-29 00:45:15 +01:00
199167c50b Change click event handling for QTableWidget. Needs testing on MacOS and Windows 2024-04-29 00:45:15 +01:00
9359cd877d Switch to using list for storing read styles 2024-04-29 00:45:15 +01:00
003b68b3d3 Renamewindow 2024-04-29 00:45:09 +01:00
29dc7ad830 Use multi-read styles. Table combo box style improvements. Tooltips 2024-04-29 00:40:41 +01:00
770cce5ac0 Add TableComboBox 2024-04-29 00:40:41 +01:00
235e62814f Update pre-commit 2024-04-28 13:57:53 -07:00
cd2d40a379 Merge pull request #633 from comictagger/pre-commit-ci-update-config
[pre-commit.ci] pre-commit autoupdate
2024-04-28 13:55:16 -07:00
d63123b77b Add tests for prepare_metadata 2024-04-28 13:53:41 -07:00
8b4bf8d51f Allow preserving the original filename when moving 2024-04-27 19:25:33 -07:00
d98f815ce0 Add a button to attempt to identify a scanner page 2024-04-27 18:10:49 -07:00
787f3e8ea1 Enabled bulk edits in the page list editor 2024-04-27 17:28:59 -07:00
064795fac9 Fix prepare_metadata 2024-04-27 16:43:51 -07:00
9208a80ab0 Improve typing 2024-04-27 15:45:05 -07:00
a681abb854 Consolidate preparing metadata for save 2024-04-27 15:29:34 -07:00
996397b9d5 Fix select all 2024-04-23 23:54:33 -04:00
8fb180390d [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/psf/black: 24.3.0 → 24.4.0](https://github.com/psf/black/compare/24.3.0...24.4.0)
2024-04-15 17:19:43 +00:00
c311b8e351 Use comicapi for all urllib3 items 2024-04-12 14:39:34 -07:00
af059b8775 Merge branch 'metadataOverride' into develop 2024-04-12 14:12:27 -07:00
de3a9352ea Allow reading cli metadata from a file 2024-04-12 14:10:21 -07:00
d104ae1e8e Update help message for the -m option 2024-04-11 15:46:29 -07:00
88c2980e5d [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/pre-commit/pre-commit-hooks: v4.5.0 → v4.6.0](https://github.com/pre-commit/pre-commit-hooks/compare/v4.5.0...v4.6.0)
2024-04-08 17:23:46 +00:00
8bcd51f49b Improve commandline metadata override
Change parse_metadata_from_string to yaml syntax
Add a special value to remove existing values when metadata is overlayed
2024-04-06 12:03:01 -07:00
de084ffff9 Fix string value of GenericMetadata 2024-04-06 12:02:21 -07:00
eb6c2ed72b [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/asottile/pyupgrade: v3.15.1 → v3.15.2](https://github.com/asottile/pyupgrade/compare/v3.15.1...v3.15.2)
- [github.com/PyCQA/autoflake: v2.3.0 → v2.3.1](https://github.com/PyCQA/autoflake/compare/v2.3.0...v2.3.1)
- [github.com/psf/black: 24.2.0 → 24.3.0](https://github.com/psf/black/compare/24.2.0...24.3.0)
2024-03-25 17:15:40 +00:00
c99b691041 pre-commit 2024-03-17 14:03:05 -07:00
48fd1c2897 Force plain text on TextEdits 2024-03-16 11:52:14 -07:00
37c809db2a Fix crash when no comics are found in the IssueIdentifier 2024-03-16 11:52:14 -07:00
51db3e1249 Allow ignoring errors that happen in the GUI 2024-03-16 11:52:14 -07:00
c99f3fa083 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/pre-commit/mirrors-mypy: v1.8.0 → v1.9.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.8.0...v1.9.0)
2024-03-12 20:00:49 +00:00
6f3a5a8860 Set the shell to bash 2024-03-09 19:49:59 -08:00
ebd99cb144 Set PKG_CONFIG_PATH as actions/setup-python@v5 overrides it 2024-03-09 18:06:30 -08:00
b1a9b0b016 Only upgrade icu4c and pkg-config 2024-03-09 14:47:47 -08:00
0929a6678b Update icu4c paths and upgrade packages on macOS 2024-03-09 14:45:49 -08:00
69824412ce Update GH Actions 2024-03-09 14:07:11 -08:00
0d9756f8b0 Pin minimum version for comicinfoxml 2024-03-09 13:51:35 -08:00
244cd9101d Remove commented code 2024-03-09 13:46:51 -08:00
3df263858d Merge branch 'web-links' into develop 2024-03-09 13:42:29 -08:00
b45c39043b Merge branch 'comicfn2dict' into develop 2024-03-09 13:10:27 -08:00
9eae71fb62 Disable checkboxes when the complicated parser is not used 2024-03-09 13:07:49 -08:00
9a95adf47d Bump comicfn2dict 2024-03-09 13:02:02 -08:00
956c383e5f Fix py7zr 2024-03-05 15:13:03 -08:00
5155762711 Add comicfn2dict as an alternative filename parser 2024-03-03 21:47:31 -08:00
ea43eccd78 Merge branch 'ii-rework' into develop 2024-03-01 15:39:01 -08:00
ff2547e7f2 Disable buttons for add/remove weblink 2024-03-01 15:26:56 -08:00
163cf44751 Open the editor when adding a new web link 2024-02-26 19:04:33 -08:00
14ce8a759f Mark all QTextEdit's as plain text only 2024-02-26 15:57:00 -08:00
22d92e1ded Move result determination out of _cover_matching 2024-02-26 15:38:13 -08:00
3c3700838b Select item on add and set the dirty flag on change 2024-02-25 08:26:29 -08:00
05423c8270 Use a QListWidget for web_links
Fix web_link in md_attributes
2024-02-24 22:31:45 -08:00
d277eb332b Add an option to disable prompt on save Fixes #422 2024-02-24 19:56:32 -08:00
dcad32ade0 Fix settngs generation 2024-02-24 19:55:28 -08:00
dd0b637566 Bump settngs 2024-02-24 19:01:10 -08:00
bad8b85874 Fix tests 2024-02-24 18:30:41 -08:00
938f760a37 Remove IssueIdentifier.search 2024-02-23 20:50:17 -08:00
f382c2f814 Update Tests 2024-02-23 20:47:22 -08:00
4e75731024 Re-write IssueIdentifier.search as IssueIdentifier.identify 2024-02-23 20:47:04 -08:00
920a0ed1af Implement better migration of changed settings should fix #609 2024-02-23 15:45:18 -08:00
9eb50da744 Fix setting rar info in the settings window Fixes #596
Look in all drive letters for rar executable
2024-02-23 15:45:18 -08:00
2e2d886cb2 Bump settngs 2024-02-22 14:52:26 -08:00
5738433c2b Fix fileselectionlist
Remove the custom widgetitem
Set a minimum size for the columns
Use a space " " and an nbsp "\xa0" for the check column to allow sorting
2024-02-22 14:30:15 -08:00
4a33dbde46 Fix PyInstaller packaging 2024-02-22 14:30:15 -08:00
10a48634bd Update talker dependencies 2024-02-19 12:29:36 -08:00
2492d96fb3 Merge branch 'pre-commit-ci-update-config' into develop 2024-02-19 12:08:43 -08:00
87248503b4 Allow 7z again 2024-02-19 11:57:30 -08:00
7705e7ea1f [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/asottile/pyupgrade: v3.15.0 → v3.15.1](https://github.com/asottile/pyupgrade/compare/v3.15.0...v3.15.1)
- [github.com/PyCQA/autoflake: v2.2.1 → v2.3.0](https://github.com/PyCQA/autoflake/compare/v2.2.1...v2.3.0)
- [github.com/psf/black: 24.1.1 → 24.2.0](https://github.com/psf/black/compare/24.1.1...24.2.0)
2024-02-19 17:19:25 +00:00
54b0630891 Allow 7z for rar decompression on Windows 2024-02-18 21:57:51 -08:00
27e70b966f Export translator_synonyms 2024-02-18 21:39:27 -08:00
ad8b92743c Remove unused variable 2024-02-18 18:01:51 -08:00
22b44c87ca Merge branch 'mizaki/autotag_source' into develop 2024-02-18 18:00:09 -08:00
2eca743f20 Fix #602
Tests were not made correctly to catch the change in 2c3a2566cc
This has now been corrected
2024-02-18 17:31:00 -08:00
bb4be306cc Fix fileselectionlist columns 2024-02-18 17:28:55 -08:00
768ef0b6bc Fix rar exe handling 2024-02-18 01:40:49 -08:00
b2d3869488 Update filerenaming for web_links
Ensure the j specifier in MetadataFormatter converts to str before joining
Add a web_link variable to the filerenamer
2024-02-17 17:42:07 -08:00
44e9a47a8b Support multiple web_links 2024-02-17 17:42:07 -08:00
215587d9a4 Move path under progress bar 2024-02-17 18:38:51 +00:00
7430e59b64 Add attribution to auto tag window 2024-02-17 18:36:49 +00:00
09490b8ebf Merge branch 'lordwelch-local-plugins' into develop 2024-02-12 17:40:09 -08:00
1e4a3b2484 Merge branch 'mizaki-meta_multi' into develop 2024-02-12 17:29:45 -08:00
b9bf3be4b2 Add short metadata style names 2024-02-12 20:57:32 +00:00
a1e4cec94f Log file path to plugin when it fails to load and remove debug statements 2024-02-11 13:18:03 -08:00
65e857af8b Move cache reset and load outside of loop. continue if it's impossible to use metadata 2024-02-11 19:32:12 +00:00
8887d48b3e Save metadata styles with one result per archive 2024-02-11 13:57:34 +00:00
e14714e26b Fix the --list-plugins command 2024-02-10 21:25:57 -08:00
8ec16528ab Implement local plugins 2024-02-10 21:00:24 -08:00
e9e619c992 Use CheckableComboBox in ui file 2024-02-11 01:51:47 +00:00
a6b60a4317 Simplify enabled widget check and reset cache before loading, break on failed metadata writing 2024-02-11 00:53:40 +00:00
69615c6c07 Fix hash and test 2024-02-10 15:02:24 -08:00
da6b2b02f4 Implement a replaceWidget helper function 2024-02-10 14:42:47 -08:00
3dfdae4033 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-02-10 01:55:15 +00:00
23021ba632 Add support for saving multiple metadata styles in the GUI
Unwind credit color comprehension

Convert save style from a string setting to a list

Use lordwelch version of Checkable combobox

Improve readability, fix label alignment in taggerwindow.ui, better report removal of tags and clearer number meanings.

Unwind list comprehension for easier readability
2024-02-10 01:55:15 +00:00
bc335f1686 Forbid nested comprehensions 2024-02-06 18:01:26 -08:00
999d3eb497 Merge branch 'pre-commit-ci-update-config' into develop 2024-02-06 17:08:43 -08:00
bf67c6d270 Add E701 to flake8 ignores for new black version 2024-02-02 14:36:11 -08:00
df762746ec [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-01-29 17:14:26 +00:00
6687e5c6ca [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/psf/black: 23.12.1 → 24.1.1](https://github.com/psf/black/compare/23.12.1...24.1.1)
2024-01-29 17:14:04 +00:00
2becec0fb6 Update help for --overwrite 2024-01-22 17:01:40 -08:00
fbe56f4db9 Remove unnecessary dest arguments in settings 2024-01-22 17:00:59 -08:00
085543321a cbxClearFormBeforePopulating not working 2024-01-22 16:50:15 -08:00
f8c0ca195a Add cbxDisableCR, update cbxSplitWords and cbxClearFormBeforePopulating 2024-01-22 16:49:57 -08:00
dda0cb521a Add more credit synonyms 2024-01-21 15:06:34 -08:00
bb1a83b4ba Fix the rename command 2024-01-21 14:01:11 -08:00
f34e8200dd Fix add_to_path tests 2024-01-20 10:34:40 -08:00
539aac1307 Fix clearing lists via the '-m' option Fixes #587 2024-01-14 13:38:11 -08:00
f75ee58ac0 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/PyCQA/flake8: 6.1.0 → 7.0.0](https://github.com/PyCQA/flake8/compare/6.1.0...7.0.0)
2024-01-08 17:15:56 +00:00
d27621ccd7 Merge branch 'pre-commit-ci-update-config' into develop 2023-12-31 14:45:45 -08:00
1ca585a65c Fix #584 2023-12-31 14:33:27 -08:00
39407286b3 Fix tarfile 2023-12-25 22:59:57 -08:00
6e56872121 Fix running dmgbuild again 2023-12-25 22:50:11 -08:00
888c50d72a Fix running dmgbuild 2023-12-25 22:41:57 -08:00
231b600a0e Switch to tar.gz and dmg archives to reduce space 2023-12-25 22:16:18 -08:00
db00736f58 Fix filename parsing not respecting user settings 2023-12-25 21:57:31 -08:00
5a714e40d9 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/psf/black: 23.12.0 → 23.12.1](https://github.com/psf/black/compare/23.12.0...23.12.1)
- [github.com/pre-commit/mirrors-mypy: v1.7.1 → v1.8.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.7.1...v1.8.0)
2023-12-25 17:15:30 +00:00
230a4b6558 Update namespace 2023-12-24 18:32:52 -08:00
f7bd6ee4f3 Add cix support 2023-12-24 18:32:52 -08:00
1ef6e40c29 Allow the avif extension 2023-12-24 18:32:52 -08:00
7d1bf8525b Merge branch 'metadata-plugin' into develop 2023-12-24 18:32:42 -08:00
59694993ff Fix loading previous existing xml 2023-12-24 18:28:38 -08:00
109d8efc0b Update pyinstaller hook 2023-12-24 18:04:35 -08:00
c8507c08a9 Ensure ComicRack and CoMet metadata preserve unknown xml tags 2023-12-23 23:50:58 -08:00
28be4d9dd7 Improve errors when loading plugins 2023-12-23 23:47:44 -08:00
ceb3b30e5c Always apply the default page list when writing metadata 2023-12-20 21:24:12 -08:00
8dccedc229 Bump metron-talker minimum version 2023-12-19 09:05:56 -08:00
c3a8221d99 Return an empty object if an archive does not have the requested style 2023-12-18 16:59:31 -08:00
ed480720aa Update AUTHORS 2023-12-18 20:38:38 +00:00
f18f961dcd [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/pre-commit/pre-commit-hooks: v4.4.0 → v4.5.0](https://github.com/pre-commit/pre-commit-hooks/compare/v4.4.0...v4.5.0)
- [github.com/asottile/setup-cfg-fmt: v2.4.0 → v2.5.0](https://github.com/asottile/setup-cfg-fmt/compare/v2.4.0...v2.5.0)
- [github.com/asottile/pyupgrade: v3.10.1 → v3.15.0](https://github.com/asottile/pyupgrade/compare/v3.10.1...v3.15.0)
- [github.com/PyCQA/isort: 5.12.0 → 5.13.2](https://github.com/PyCQA/isort/compare/5.12.0...5.13.2)
- [github.com/psf/black: 23.7.0 → 23.12.0](https://github.com/psf/black/compare/23.7.0...23.12.0)
- [github.com/pre-commit/mirrors-mypy: v1.5.1 → v1.7.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.5.1...v1.7.1)
2023-12-18 17:17:28 +00:00
df781f67e3 Fix assigning black_and_white value 2023-12-18 02:46:53 -08:00
addddaf44e List metadata styles when listing plugins 2023-12-18 02:37:40 -08:00
4660b14453 Fixup metadata handling 2023-12-18 02:37:40 -08:00
9c231d7e11 Add better page info handling
Rename set_default_page_list to apply_default_page_list and apply
 during read_metadata
Add a filename attribute to the ImageMetadata class
Mark image_index as required
Always sort the page name list; a comic application will never need the
 unsorted list of names
Assign the first result from get_cover_page_index_list to coverImage in
 CoMet tags
Allow an Archiver to be passed to the ComicArchive constructor
2023-12-18 02:37:34 -08:00
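An illustrative shape for the page record described above, shown as a dataclass for brevity; only image_index (required) and filename come from the commit, the remaining fields are assumptions.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class ImageMetadata:
    image_index: int           # required: position of the page in the archive
    filename: str = ""         # archive entry this page record refers to
    type: str = ""             # assumed field, e.g. FrontCover / Story
    double_page: bool = False  # assumed field
```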
989470772f Make widget disabling more consistent 2023-12-18 01:24:30 -08:00
8b7443945b Use ids for metadata type in file selection list
Removed unnecessary FileInfo class
2023-12-17 22:01:47 -08:00
da373764e0 Let the original ComicRack metadata be disabled
Ensure metadata styles can be overridden by other plugins
2023-12-17 21:47:44 -08:00
fd868d9596 Add supports_credit_role to metadata plugins 2023-12-17 21:47:44 -08:00
ae5e246180 Add plugin support for metadata 2023-12-17 21:47:43 -08:00
04b3b6b4ab Do not normalize series_name when a literal search is requested 2023-12-17 19:14:38 -08:00
564ce24988 Bump settngs to 0.9.2 2023-12-17 18:30:01 -08:00
3b2e763d7d Merge branch 'json-output' into develop 2023-12-17 18:28:53 -08:00
50859d07c4 Set the return code to 3 if any results are not successful 2023-12-17 18:17:19 -08:00
04bf7f484e Ensure IssueIdentifier output goes to the right place 2023-12-17 18:10:18 -08:00
4c1247f49c Print the summary even if quiet mode is enabled 2023-12-17 18:03:25 -08:00
17a8513efc Disable json output in interactive mode 2023-12-17 17:56:12 -08:00
7ada13bcc3 Remove unnecessary print statements 2023-12-17 17:35:21 -08:00
5b1c92e7b8 Fix a crash when fetching images during auto-tag in the gui 2023-12-17 16:25:21 -08:00
45643cc594 Add integration tests 2023-12-17 16:24:32 -08:00
ab6b970063 Create an Action tuple for determining the current command 2023-12-17 16:16:21 -08:00
9571020217 Upgrade settngs to 0.9.1 2023-12-17 16:15:26 -08:00
bb67ab009e Ensure that all output goes through a logger before output to the user
Adds an option to output json for CLI options
2023-12-17 15:51:43 -08:00
f3b235ae14 Move pyupgrade above autoflake to reduce runs of pre-commit required 2023-12-16 17:28:41 -08:00
0de95777b4 Handle multiple options sharing a dest 2023-12-16 17:06:27 -08:00
9d36ed0dc6 Update AUTHORS 2023-12-16 17:50:55 +00:00
e0eec002fa docs(contributor): contrib-readme-action has updated readme 2023-12-16 17:50:51 +00:00
79779b7a46 Merge branch 'DrMcCoy/fix_crash_shortcut_pagetype' into develop 2023-12-16 09:49:09 -08:00
df24ad0008 Fix crash when using shortcut to set page type
QListWidget has no rowCount() method, it has count() instead.
2023-12-16 17:16:31 +01:00
651c5aed37 Add packaging dependency 2023-12-13 09:53:41 -08:00
3c83dbd038 Merge branch 'mizaki/talkers_version_check' into develop 2023-12-13 09:52:20 -08:00
fc6e0c3db3 Parse ct version only once 2023-12-12 23:47:47 +00:00
c5cfd3ebdc Add a link to the log folder from the log window 2023-12-01 19:48:16 -08:00
cead69f8e3 Merge branch 'mizaki/settings_encoder' into develop 2023-12-01 19:43:18 -08:00
4d2b9e1157 Warn on bad min ct required version and use anyway. Use clearer log messages 2023-12-01 14:09:17 +00:00
f977e70562 Rename min ct required var. Use a minimum version only check instead of full spec 2023-12-01 01:23:46 +00:00
12dd06c558 Add CT version check against talker requirements 2023-11-30 01:50:28 +00:00
70541cc9ee Encode pathlib.Path for the settings file. Validate types from the JSON settings file after loading. JSON.decoder not used due to its limitation with context. 2023-11-28 23:21:04 +00:00
d37c7a680d Update dependencies 2023-11-28 15:08:26 -08:00
1ff6f1768b Use importlib.resources instead of __file__ 2023-11-25 12:32:50 -08:00
99325f40cf Merge branch 'mizaki/cleanup_html' into develop 2023-11-23 16:12:02 -08:00
65948cd9cd Merge branch 'bump-settngs' into develop 2023-11-23 16:06:01 -08:00
305eb1dec5 Enable stricter mypy configuration 2023-11-23 16:05:16 -08:00
9aad872ae6 Merge branch 'uigenerator' into develop 2023-11-23 15:19:20 -08:00
a478a35f66 Simplify setting values on Qt widgets
Add explanatory comments
2023-11-23 15:18:59 -08:00
128cab077c Replace pycountry with isocodes
isocodes is updated more often and doesn't depend on deprecated packages
2023-11-23 14:21:21 -08:00
9dc6f8914f Upgrade settings to 0.8.0 2023-11-19 23:14:40 -08:00
57873136b6 Use isinstance for type check 2023-11-14 15:18:48 -08:00
987f3fc564 cleanup_html improvements 2023-11-13 01:41:26 +00:00
10776dbb07 Fix flake8 issues 2023-11-09 18:23:57 -08:00
2d3f68167c Merge branch 'progress-dialog' into develop 2023-11-09 16:57:02 -08:00
770f64b746 Merge branch 'mizaki-talker_file_picker' into develop 2023-11-09 16:53:15 -08:00
235c12bd53 Convert types back to their declared types in talkeruigenerator 2023-11-09 16:52:41 -08:00
10b19606e0 Fix GenericMetadata __str__ 2023-11-05 21:36:29 -08:00
a7d1084a4d Remove flake8-warnings 2023-11-05 13:27:31 -08:00
21575a9fb8 Fix saving CBI when credits are empty 2023-11-05 13:27:14 -08:00
2258d70d7b Add file picker to talkers options. Requires type of pathlib.Path 2023-11-01 02:01:54 +00:00
b23c3195e3 Merge branch 'lexNumbers' into develop 2023-10-27 23:50:05 -07:00
bd9b3522d8 Improve edge cases
Lex `'` as a symbol
Lex multiple symbols as a single item
Prefer `$` at the start of a number
Simplify issue number parsing
2023-10-27 23:26:40 -07:00
78060dff61 Rework parse_series 2023-10-27 23:26:40 -07:00
4a29040c74 Add format to the filename parser result 2023-10-27 23:13:12 -07:00
496f3f0e75 fix reset after space 2023-10-23 22:05:42 -07:00
f03b2e58cf Improve lexing numbers
lex currency amounts as text
lex a '.' followed by a number as a number if there is a preceding space
2023-10-23 21:13:31 -07:00
29ddc3779a Ensure FilenameInfo is always filled out fixes #556 2023-10-23 21:08:55 -07:00
7842109ca2 Pin chardet version 2023-10-22 16:01:46 -07:00
7527dc4fd8 FIX: A hamming distance of 0 is a perfect match. Adjust to 100 for empty URLs 2023-10-12 22:34:16 +01:00
8dfd38a15c Merge branch 'rar-cwd' into develop 2023-10-12 01:31:57 -07:00
6227edb0a3 Set rar cwd to reduce errors 2023-10-12 01:30:32 -07:00
114a0bb615 Fix parsing '&' with the "complicated" filename parser 2023-10-12 01:26:31 -07:00
abfd97d915 Merge branch 'protofolius_issue_scheme' into develop 2023-10-11 17:05:27 -07:00
582b8cc57b Add more parseable filenames 2023-10-11 17:03:07 -07:00
97a24d8d52 Change dialog modality and only center dialog when it is created 2023-10-08 11:59:57 -07:00
edb087abde Handle errors when reading zip comments fixes #548 2023-10-07 11:49:57 -07:00
96c5c4aa28 Fix pyinstaller build
Fix exception when PyQt is not installed
2023-10-07 11:49:30 -07:00
4b93262d5f Merge branch 'mizaki-window_sorting' into develop 2023-10-06 20:14:35 -07:00
78a890f900 Fix parsing a month name in the series fixes #542 2023-10-06 20:06:39 -07:00
5bdbe7d181 Always update rows even if None 2023-10-05 22:14:45 +01:00
f250d2c5c3 Merge branch 'mizaki-gmd_list_set' into develop 2023-10-04 20:16:33 -07:00
b6d5fe7013 Improve rar error messages 2023-10-04 19:08:17 -07:00
80f3dd7ce4 Restore issue number sorting 2023-09-30 23:19:10 +01:00
0c63f77e53 Restore issue count and year sorting 2023-09-30 23:05:06 +01:00
5688cdea89 Merge branch 'mizaki-gentalker_password' into develop 2023-09-26 17:05:20 -07:00
2949626f6d Merge branch 'mizaki-remove_series_genres' into develop 2023-09-26 17:04:45 -07:00
319aa582e5 Remove ignoring default for setting generation combobox 2023-09-25 00:55:50 +01:00
058651cc29 Change metadata lists to sets. Changed CV talker to reflect and tidied 2023-09-24 14:33:57 +01:00
5874f3bcaf Remove genres from ComicSeries as it is no longer required with the new cache system 2023-09-22 23:15:04 +01:00
c6522865ab Use casefold 2023-09-21 16:05:13 +01:00
5684694055 Generate password box for any settings dest name that ends in password 2023-09-21 01:47:08 +01:00
360a9e6308 Merge branch 'mizaki-talker_gen_combo' into develop 2023-09-17 16:39:33 -07:00
015959bd97 Merge branch 'mizaki-talker_setting_logo_blurb' into develop 2023-09-17 16:35:13 -07:00
8feade923a Don't capitalise and therefore no need to use data on the combobox 2023-09-17 20:54:20 +01:00
df3e7912b3 Add talker information in setting window 2023-09-17 18:26:06 +01:00
919561099e Finish removing the script option 2023-09-17 08:36:00 -07:00
e7cc05679f Bump metron-talker version 2023-09-17 08:09:43 -07:00
99461c54f1 Fix a crash when setting the page type with no comic selected 2023-09-15 21:03:41 -07:00
56f172e7b5 Add combo box support to talker settings generator 2023-09-15 23:46:13 +01:00
ddd98ee86d Add metron-talker as an optional dependency 2023-09-15 15:13:14 -07:00
1d25179171 Allow unsetting metadata fields on the commandline fixes #528 2023-09-14 11:30:05 -07:00
7efef0bb44 Merge branch 'mizaki-on_change_windows' into develop 2023-09-14 11:20:01 -07:00
366e9cf6e8 Move update into own function. Add title missing to trigger issue update. 2023-09-13 21:35:52 +01:00
57abe22515 Merge branch 'mizaki-fix_auto_id' into develop 2023-09-12 15:16:16 -07:00
c7a49b3643 Fix crash with series and issue window if the year is None. Closes #523 2023-09-10 13:42:17 +01:00
1125788bb7 Update series and issue rows after calling for more information. Closes #512 2023-09-10 13:31:20 +01:00
034a25a813 Fix auto-identify crash 2023-09-07 14:44:30 +01:00
f72c0c8224 Fix call to check_api 2023-09-06 04:56:30 -04:00
f6be7919d7 Implement support for protofolius's permission scheme 2023-09-06 04:50:05 -04:00
0a2340b6dc Remove the --script commandline option 2023-09-06 03:00:27 -04:00
bf2b4ab268 Rename check_api_key to check_status
Parameter is changed to a settings dict so that a Talker can retrieve any info it needs
Change issue_id type annotation to str
2023-09-06 02:59:59 -04:00
40bd3d5bb8 Fix generation and saving of talker settings fixes #515 #514 2023-09-05 14:43:17 -04:00
61d2a8b833 Fix issue padding validation fixes #513 2023-09-05 14:42:03 -04:00
b04dad8015 Stop deleting self.progialog in the series selection window 2023-09-05 14:41:07 -04:00
3ade47a7e0 Convert bytes to str when printing raw tags. Fixes #510 2023-09-05 04:05:20 -04:00
5bc44650d6 Change --only-set-cv-key to --only-save-config 2023-09-05 03:56:56 -04:00
8b1bcd93e6 Add a combobox to select a metadata source in the main window Fixes #508 2023-09-05 03:55:18 -04:00
d70a98ed29 Fix --darkmode 2023-09-05 03:55:18 -04:00
05e6eaf88e Update setting group names
Make group names presentable to users and add builtin plugins during namespace generation.
Revamp talkeruigenerator.py to use generated group and setting names and remove as many hard-coded strings as possible
Add a --list-plugins commandline option
2023-09-05 03:55:12 -04:00
90eb1c3980 Fix date display in the issue selection window 2023-09-05 03:14:55 -04:00
7a63474769 Fix cbr tests and update pre-commit 2023-09-04 19:56:18 -05:00
0f07fc3153 Use a dictionary instead of a list in the issue/series selection windows
List lookups were done by row number which became inaccurate if any sorting was done

Fixes #507
2023-09-03 15:18:56 -07:00
e832b19f2f Fix attribute names 2023-09-03 15:12:06 -07:00
9499aeae10 PyrateLimiter version 2 only for now. 2023-08-30 23:23:19 +01:00
f72ebdb149 Simplify ComicCacher to store a single binary data field and ID(s)
If the ComicCacher is to be a generic cache for talkers it must assume
 very little. Current assumptions:
 - There are issues that can be queried individually by an "Issue ID" and they have a relation to a single series
 - There are series that can be queried individually by a "Series ID" and they have a relation to zero or more issues
 - There are Searches that can be queried by the search term and they have a relation to zero or more series

Each series and issue have a boolean `complete` attribute which is up to the talker to decide what it means.
Data is returned as a tuple ([series, complete] or [issue, complete]) or a list of tuples
An issue consists of an ID, a series ID and a binary data attribute which is up to the talker to determine what it means.
A series consists of an ID and a binary data attribute which is up to the talker to determine what it means.

The data attribute is binary to allow for compression and efficient storage of binary data (e.g. pickle); it is suggested to store it as json or a similar text format encoded with utf-8. If the talker is using a website API it is suggested to store the raw response from the server.

All caches automatically expire 7 days after insertion.
2023-08-05 03:02:12 -07:00
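A sketch of the storage model this commit describes, not the actual ComicCacher schema: each row is an ID (plus a series ID for issues), an opaque binary data blob, a talker-defined complete flag, and an expiry for the 7-day lifetime.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript(
    """
    CREATE TABLE series   (id TEXT PRIMARY KEY, data BLOB, complete BOOL, expiry DATE);
    CREATE TABLE issues   (id TEXT PRIMARY KEY, series_id TEXT, data BLOB, complete BOOL, expiry DATE);
    CREATE TABLE searches (search TEXT, series_id TEXT, expiry DATE);
    """
)
```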
ea84031b87 Add more 4-digit issue number tests 2023-08-04 21:04:21 -07:00
611c40fe0b Add test for split 2023-08-03 01:06:10 -07:00
2c3a2566cc Convert ComicIssue into GenericMetadata
I could not find a good reason for ComicIssue to exist other than that
 it had more attributes than GenericMetadata, so it has been replaced.
New attributes for GenericMetadata:
  series_id:        a string uniquely identifying the series to tag_origin
  series_aliases:   alternate series names that are not the canonical name
  title_aliases:    alternate issue titles that are not the canonical name
  alternate_images: a list of urls to alternate cover images

Updated attributes for GenericMetadata:
  genre        -> genres:        str -> list[str]
  comments     -> description:   str -> str
  story_arc    -> story_arcs:    str -> list[str]
  series_group -> series_groups: str -> list[str]
  character    -> characters:    str -> list[str]
  team         -> teams:         str -> list[str]
  location     -> locations:     str -> list[str]
  tag_origin   -> tag_origin:    str -> TagOrigin (tuple[str, str])

ComicSeries has been relocated to the ComicAPI package; it currently has no
 usage within ComicAPI.
CreditMetadata has been renamed to Credit and has replaced Credit from
 ComicTalker.
fetch_series has been added to ComicTalker, this is currently only used
 in the GUI when a series is selected and does not already contain the
 needed fields, this function should always be cached.

A new split function has been added to ComicAPI; all uses of split on
 single characters have been updated to use this

cleanup_html and the corresponding setting are now only used in
 ComicTagger proper; for display we want any html directly from the
 upstream. When applying the metadata we then strip the description of
 any html.

A new conversion has been added to the MetadataFormatter:
  j: joins any lists into a string with ', '. Note this is a valid
     operation on strings as well; it will add ', ' in between every
     character.

parse_settings now assigns the given ComicTaggerPaths object to the
 result ensuring that the correct path is always used.
2023-08-02 09:00:04 -07:00
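A minimal sketch of the new j conversion described above, implemented on the stdlib string.Formatter; the real MetadataFormatter does more than this.

```python
import string


class JoinFormatter(string.Formatter):
    def convert_field(self, value, conversion):
        if conversion == "j":
            # Join any iterable with ', '; on a plain string this joins
            # every character, exactly as noted above.
            return ", ".join(str(v) for v in value)
        return super().convert_field(value, conversion)


fmt = JoinFormatter()
print(fmt.format("{genres!j}", genres=["Action", "Sci-Fi"]))  # Action, Sci-Fi
```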
1b6307f9c2 Merge branch 'mizaki-tidy_ii' into develop 2023-07-30 16:24:13 -07:00
548ad4a816 Fix folder archiver
Implement supports_comment and is_writable
Fix function call in ComicArchive for supports_comment
Add a menu option to open a folder as an archive
2023-07-29 00:07:25 -07:00
27f71833b3 Generate settngs namespace before formatting 2023-07-28 23:29:39 -07:00
6c07fab985 Fix tests taking forever caused by f90f373d20 2023-07-28 23:25:12 -07:00
4151c0e113 Cleanup sqlite
Remove the import rename
use sqlite3.Row, which allows retrieving values by name
2023-07-28 23:22:35 -07:00
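A small illustration of the sqlite3.Row change noted above: rows can be read by column name instead of position, so reordering SELECT columns no longer breaks callers.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row
con.execute("CREATE TABLE issues (id TEXT, data BLOB)")
con.execute("INSERT INTO issues VALUES (?, ?)", ("4000-12345", b"{}"))

row = con.execute("SELECT * FROM issues").fetchone()
print(row["id"], row["data"])  # access by name, no column-index bookkeeping
```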
3119d68ea2 Remove used issue id from get_issue_cover_match_score and fix test 2023-07-18 01:14:32 +01:00
f43f51aa2f Fix #396
Use a QWebEngineView if QtWebEngine is available.
If QtWebEngine is not available, replace figure tags with divs to allow
 the QTextEdit to render the rest of the html properly
2023-07-01 23:29:38 -07:00
19986b64d0 Upgrade pre-commit hooks 2023-07-01 23:12:41 -07:00
00200334fb Add filter to SeriesSelectionWindow and IssueSelectionWindow fixes #476 2023-07-01 18:57:33 -07:00
cde980b470 Add LICENSE file 2023-07-01 18:13:38 -07:00
f90f373d20 Merge branch 'mizaki-rate_limit_cv' into develop 2023-07-01 18:04:24 -07:00
c246b96845 Merge branch 'mizaki-vol_to_issue' into develop 2023-07-01 18:02:57 -07:00
053afaa75e Merge branch 'mizaki-phash' into develop 2023-07-01 18:01:26 -07:00
3848aaeda3 Merge branch 'mizaki-issue_count_sort' into develop 2023-07-01 17:56:55 -07:00
16b13a6fe0 Format year and count of issues to 4 digits and do a None check 2023-06-28 01:08:04 +01:00
3f180612d3 Return int instead of hex and revert hamming_distance etc. 2023-06-27 22:44:08 +01:00
37cc66cbae Use requests.status_codes.codes.TOO_MANY_REQUESTS 2023-06-27 17:48:38 +01:00
81b15a5877 Fixes sorting by year and issue count. Removed superfluous if for publisher. Fixes #475 2023-06-27 00:21:28 +01:00
14a4055040 Add Perceptual Hash computation to imagehasher mirroring https://github.com/JohannesBuchner/imagehash but in pure python 2023-06-26 01:54:26 +01:00
2e01672e68 Fix #485
As mentioned in the comment in comictaggerlib/main.py:186,
the default value should be None, not the empty string.
We also check if the given value is the default or the empty string and
 the setting is unset, so the default value is not saved in the settings
 file.
The default_api_url is shown in the GUI Settings Window; it is not
 currently shown in the cli help.
2023-06-23 17:48:18 -07:00
4a7aae4045 Add tests for fix_url 2023-06-23 17:10:40 -07:00
2187ddece8 Move volume from ComicSeries to ComicIssue 2023-06-23 22:38:15 +01:00
fba5518d06 Create two module limiters and assign the class limiter var depending on the API key in use. Add the default CV API key's rate limits to the welcome message. 2023-06-23 21:25:02 +01:00
31cf687e2f Reduce startup time 2023-06-22 20:11:40 -07:00
526069dabf Use _guess_type from settngs for more robust type checking 2023-06-22 18:28:43 -07:00
635cb037f1 Merge branch 'mizaki-fix_add_fields' into develop 2023-06-22 17:51:26 -07:00
861584df3a Move rate limit check from the defunct API status code 107 to HTTP code 429. Set a limit of 10 requests every 10 seconds except for the default API key, which is 1,2 (to be finalised). Remove wait on rate limit option. 2023-06-22 23:50:32 +01:00
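The commit above switches to HTTP 429 handling and a "10 requests per 10 seconds" budget; the project uses the pyrate-limiter package for this. As a library-agnostic illustration of the idea only (names and structure are not ComicTagger's code), a minimal blocking sliding-window limiter could look like this:

import collections
import time


class SlidingWindowLimiter:
    """At most `limit` calls per `period` seconds; acquire() blocks when over budget."""

    def __init__(self, limit: int = 10, period: float = 10.0) -> None:
        self.limit = limit
        self.period = period
        self.calls = collections.deque()

    def acquire(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have left the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.limit:
            # Sleep until the oldest timestamp ages out of the window.
            time.sleep(self.period - (now - self.calls[0]))
            self.calls.popleft()
        self.calls.append(time.monotonic())


limiter = SlidingWindowLimiter(limit=10, period=10.0)
limiter.acquire()  # call once before each Comic Vine request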
a53fda9fec Update linux packages in GitHub Actions 2023-06-21 19:47:41 -07:00
af5a0e50e0 Remove wait on CV rate limit in autotag 2023-06-21 22:32:06 +01:00
7a91acb60c Add pyrate-limiter and apply CV suggested rate limit 2023-06-20 22:28:29 +01:00
3a287504ae Fix setting issue and alternate_number on GenericMetadata
IssueString.as_string always returns a string; this is a problem for
  GenericMetadata. When the overlay function is used it checks
  specifically for the value None, which allows the -m option to unset
  attributes; however, the issue attribute would get set to the empty
  string when loading ComicRack tags regardless of whether there was a
  value stored in the file. Fixes #465 and #480
2023-06-15 20:26:38 -07:00
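The None-vs-empty-string distinction described in the commit above is what lets the overlay step skip unset fields. A minimal sketch of that pattern (not the actual GenericMetadata code; attribute names are illustrative):

from dataclasses import dataclass
from typing import Optional


@dataclass
class Meta:
    issue: Optional[str] = None
    title: Optional[str] = None


def overlay(base: Meta, new: Meta) -> Meta:
    # Only non-None values replace values in base, so an empty string still
    # overwrites while None means "leave the existing value alone".
    for field in ("issue", "title"):
        value = getattr(new, field)
        if value is not None:  # check is specifically for None, not falsiness
            setattr(base, field, value)
    return base


print(overlay(Meta(issue="3", title="A"), Meta(issue=None, title="")))
# -> Meta(issue='3', title='')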
82a22d25ea Merge branch 'mizaki-auto_ident_message' into develop 2023-06-11 21:44:05 -07:00
783e10a9a1 Generate a namespace object for typing settngs 2023-06-09 16:20:00 -07:00
e8f13b1f9e fix quoting 2023-06-09 02:08:38 +01:00
4b415b376f Fix tests 2023-06-08 01:26:03 +01:00
122bdf7eb1 Change auto-identify message to point users to the auto-tag assume 1 option 2023-06-08 01:18:46 +01:00
2afb604ab3 Fix issue_count and add maturity rating 2023-06-08 00:52:24 +01:00
a912c7392b Merge branch 'mizaki-additional_comic_fields' into develop 2023-06-03 10:37:44 -07:00
3b92993ef6 Remove country name code 2023-06-03 00:11:40 +01:00
c3892082f5 Change data to metadata 2023-06-02 00:37:58 +01:00
92e2cb42e8 Replace instances of Comic Vine to use the talker's name 2023-06-01 22:05:14 +01:00
b8065e0f10 Fix #470 re-add notes when using --clear-metadata 2023-05-30 21:36:33 -07:00
a395e5541f Remove invalid comments 2023-05-25 15:00:53 +01:00
d191750231 Remove attempted validation of language and country plus minor changes 2023-05-25 01:32:52 +01:00
e72347656b Add format (1-shot, limited series, etc.) 2023-05-23 00:27:58 +01:00
8e2411a086 Add country functions to utils and try to convert a country name to ISO country name 2023-05-23 00:02:56 +01:00
97e64fa918 Add maturity_rating, language and country to ComicIssue and pass to metadata. 2023-05-18 02:02:21 +01:00
661d758315 Merge branch 'mizaki-talker_parse_key' into develop 2023-05-16 17:33:24 -07:00
364d870fe0 Merge branch 'mizaki-hide_api_token' into develop 2023-05-16 17:30:46 -07:00
2da64fd52d Remove password class from function 2023-05-16 15:20:45 +01:00
057725c5da Create generate_password_textbox 2023-05-16 00:25:12 +01:00
5996bd3588 Add show/hide icon to key field 2023-05-15 23:46:16 +01:00
fdf407898e Bump MacOS version for GitHub Actions 2023-05-15 10:59:23 -06:00
70d544b7bd Add attrib at the end of the CLI file run 2023-05-15 16:46:31 +01:00
c583f63c8c Attribution for metadata provider on command line 2023-05-14 23:39:23 +01:00
d65a120eb5 Add issue_count 2023-05-14 00:50:37 +01:00
60f47546c2 Hide the API key field as a password and add a show/show button 2023-05-13 23:12:29 +01:00
0b77078a93 Retrieve all fields instead of by (many) names 2023-05-12 23:46:34 +01:00
2598fc546a Use new xlate_int and xlate_float 2023-05-12 22:47:36 +01:00
ddf4407b77 Merge branch 'develop' into additional_comic_fields 2023-05-12 22:41:38 +01:00
6cf259191e Add volume and count_of_volumes to ComicSeries 2023-05-12 21:48:45 +01:00
30f1db1c73 Update requirements and Linux build dependencies 2023-04-26 14:46:18 -07:00
ff15bff94c Fix pypi upload 2023-04-25 16:26:05 -07:00
83aabfd9c3 Upgrade pre-commit 2023-04-25 16:11:19 -07:00
d3ff40c249 Only update the image in CoverImageWidget if the url matches the current url
This fixes an issue causing the first issue cover to show when using the auto-identify feature
Fixes #455
2023-04-25 16:00:08 -07:00
c07e1c4168 Add additional typing 2023-04-25 16:00:06 -07:00
1dc93c351d Update settngs to typed version fixes #453 2023-04-25 16:00:04 -07:00
f94c9ef857 Update appimage step
Fix platform case
Remove icu check from appimage step as ComicTagger is not installed
Add appimagetool to allowed commands
Fix appimage paths
2023-04-25 16:00:02 -07:00
14fa70e608 Separate xlate into separate functions based on return type fixes #454 2023-04-25 15:55:27 -07:00
ec65132cf2 Mark mypy as optional 2023-04-23 02:01:41 -07:00
941bbf545f Remove extraneous if 2023-04-23 01:52:56 -07:00
afdb08fa15 Fix package.yaml 2023-04-23 01:49:42 -07:00
c4b7411261 Use tox for building 2023-04-23 01:31:44 -07:00
5b3e9c9026 Switch to rarfile for rar/cbr support 2023-04-23 00:48:13 -07:00
e70c47d12a Make PyICU optional
Update README.md
2023-04-23 00:48:11 -07:00
c1aba269a9 Revert "Make PyICU optional"
This reverts commit bf55037690.
2023-04-22 21:28:14 -07:00
bf55037690 Make PyICU optional
Fix more locale issues
Update README.md
2023-04-18 21:03:50 -07:00
e2dfcc91ce Revert get_recursive_filelist Fixes #449 2023-04-13 20:58:30 -06:00
33796aa475 Fix #447 2023-04-06 10:48:40 -07:00
4218e3558b Add url 2023-03-05 18:58:06 +00:00
271bfac834 Do not fail when talker key is missing 2023-03-03 00:07:49 +00:00
9e86b5e331 Fix tests 2023-03-02 00:23:56 +00:00
c9638ba0d9 Format manga and rating 2023-03-02 00:10:52 +00:00
428879120a Merge branch 'mizaki-talkeruigen_fix' into develop 2023-02-28 11:49:27 -08:00
f0b9bc6c77 Missed name changes from options move 2023-02-28 15:37:52 +00:00
6133b886fb String widget fix-fix 2023-02-28 15:06:59 +00:00
dacd767162 String widget fix 2023-02-28 14:59:58 +00:00
4d90417ecf Update AUTHORS 2023-02-28 06:31:07 +00:00
c3e889279b Fix EOF 2023-02-27 22:30:31 -08:00
9bf998ca9e Remove check_api_url and fix docstrings 2023-02-27 22:29:23 -08:00
5b2a06870a Fix talker settings validation 2023-02-27 22:21:56 -08:00
fca5818874 Merge branch 'mizaki-talker_settings_generator' into develop 2023-02-27 22:20:53 -08:00
eaf0ef2f1b Fix Makefile dependencies
Remove dist/appimage before copy to prevent issues with 2nd run
Add dist/appimagetool target so that the appimage tool is downloaded once
2023-02-27 22:12:12 -08:00
09fb34c5ff Merge branch 'bmfrosty-feature/add-appimage-support' into develop 2023-02-27 22:01:13 -08:00
924467cc57 Add AppImage Support 2023-02-26 22:12:50 -08:00
2611c284b8 Revert "docs(contributor): contrib-readme-action has updated readme"
This reverts commit aba59bdbfe.
2023-02-24 13:23:29 +00:00
b4a3e8c2ee Add missing tool tips to labels
Change metadata select label
Use named tuple for talker tabs
Return a string and bool for api check
2023-02-24 00:06:48 +00:00
118429f84c Change source term to metadata
Generate API text field in their own function
API tests return string message of result
Add help to text field labels
2023-02-23 00:42:48 +00:00
8b9332e150 Fix linux build 2023-02-21 20:00:47 -08:00
5b5a483e25 Fix api key test button generation 2023-02-21 00:58:13 +00:00
33ea8da5bc Merge branch 'develop' into talker_settings_generator
# Conflicts:
#	comictaggerlib/settingswindow.py
#	comictalker/talkers/comicvine.py
2023-02-21 00:50:06 +00:00
aba59bdbfe docs(contributor): contrib-readme-action has updated readme 2023-02-21 00:43:46 +00:00
316bd52f21 Use currentData for combo box 2023-02-21 00:42:11 +00:00
59893b1d1c Fix option.type ifs 2023-02-21 00:38:13 +00:00
fb83863654 Update plugin settings
Make "runtime" a persistent group, allows normalizing without losing validation
Simplify archiver setting generation
Generate options for setting a url and key for all talkers
Return validated talker settings
Require that the talker id must match the entry point name
Add api_url and api_key as default attributes on talkers
Add default handling of api_url and api_key to register_settings
Update settngs to 0.6.2 to be able to add settings to a group and
  use the display_name attribute
Error if no talkers are loaded
Update talker entry point to comictagger.talker
2023-02-20 16:02:15 -08:00
f131c650fb Merge branch 'mizaki-talker_entry_points' into develop 2023-02-20 14:27:09 -08:00
f439797b03 Use new display_name from settngs. Add source combobox getting and setting and add to sources dict of widgets. 2023-02-20 18:45:39 +00:00
bd5e23f93f Add another test case for format_internal_name 2023-02-20 00:44:51 +00:00
fefb3ce6cd Remove general tab from talker tab and use base tab from settings window. Additional clean up. 2023-02-19 23:33:22 +00:00
a24bd1c719 Generate talker general tab programmatically. Move search options to search tab. 2023-02-18 17:16:56 +00:00
02fd8beda8 Use None as parent for api and url message boxes
Rename test_api_key and test_api_url to api_key_btn_connect and api_url_btn_connect
Make separate function to set form values, called in settings_to_form
Change isinstance to is
Call findChildren only once
2023-02-18 01:15:46 +00:00
628dd5e456 Fix actions failure when there are no new contributors 2023-02-17 13:43:41 -08:00
c437532622 Merge branch 'mizaki-cache_role_fix' into develop 2023-02-17 10:21:54 -08:00
0714b94ca1 Restrict contributions updates to only run on pushes to develop 2023-02-17 10:16:21 -08:00
5ecaf89d15 Update AUTHORS 2023-02-17 01:23:54 +00:00
2491999a33 Update copyright statements to ComicTagger Authors 2023-02-16 17:23:13 -08:00
9c7bf2e235 Update AUTHORS 2023-02-17 01:14:29 +00:00
0c1093d58e docs(contributor): contrib-readme-action has updated readme 2023-02-17 01:14:27 +00:00
a41c5a8af5 Automate contributions 2023-02-16 17:13:26 -08:00
b727b1288d Apply credit datatype to person data from cache 2023-02-15 17:05:14 +00:00
73738010b8 Add additional fields to ComicIssue and add a genre field to ComicSeries to allow for filtering of search results from the cache. 2023-02-15 16:48:07 +00:00
2fde11a704 Test for menu generator format_internal_name 2023-02-14 01:47:32 +00:00
6a6a3320cb Move talker settings menu generator to a separate file 2023-02-14 01:32:56 +00:00
83a8d5d5e1 Generate settings tabs for each talker 2023-02-11 01:18:56 +00:00
4b3b9d8691 Entry points for talkers 2023-02-10 21:16:35 +00:00
3422a1093d Merge branch 'mizaki-showcontrols' into develop 2023-02-10 00:31:24 -08:00
4eb9e008ce Update pre-commit 2023-02-10 00:25:20 -08:00
5e86605a46 Fix docstring typos 2023-02-10 00:25:18 -08:00
8146b0c90e Merge branch 'talker-cleanup' into develop 2023-02-10 00:24:48 -08:00
983937cdea Mark internal functions in ComicVineTalker 2023-02-10 00:23:02 -08:00
e5b15abf91 clean up talker 2023-02-10 00:23:00 -08:00
4a5d02119e Merge branch 'settings-consistency' into develop 2023-02-10 00:22:44 -08:00
4b6c9fd066 Fix comicarchive_test.py 2023-02-10 00:14:58 -08:00
79a6cef794 Hide invisible controls to prevent bottom margin on source logos. 2023-02-10 00:43:05 +00:00
43cb68b38b Fix 'Default Preferences' button in the settings window 2023-02-04 11:34:49 -08:00
ad68726e1d Use consistent naming for settings
config: always values
setting: always the definition/description not the value
2023-02-04 11:33:21 -08:00
ba4b779145 Remove legacy settings 2023-02-03 20:14:31 -08:00
d987a811e3 Consolidate plugin code 2023-02-03 20:13:58 -08:00
ee426e6473 Merge branch 'mizaki-talker_settings' into develop 2023-02-03 18:14:26 -08:00
9aa42c1ca7 Add series match threshold back into search_for_series as it is no longer available via the talker's own settings. 2023-02-03 21:38:17 +00:00
d12325b7f8 Simplify parse_settings. Prefix talker_ to group name. Add back setting CV key via commandline. Other small changes as requested. 2023-02-02 00:53:13 +00:00
ce5205902a After merge isort 2023-02-01 23:53:02 +00:00
94aabcdd40 Merge branch 'develop' into talker_settings
# Conflicts:
#	comictaggerlib/ctoptions/__init__.py
#	comictaggerlib/main.py
#	comictalker/talkers/comicvine.py
2023-02-01 23:38:13 +00:00
839a918330 typed talkers var 2023-02-01 23:22:04 +00:00
053295e028 Merge branch 'mizaki-source_logo_url' into develop 2023-02-01 08:03:16 -08:00
c6e3266f60 More verbose attrib string 2023-02-01 15:39:24 +00:00
7c4e5b775b Merge branch 'plugableArchivers' into develop 2023-01-31 19:44:07 -08:00
bc02a9a2a2 Use a persistent setting group for archiver settings 2023-01-31 19:41:19 -08:00
2c5d419ee9 Remove legacy rar settings 2023-01-31 00:32:19 -08:00
46899255c8 Generate settings for an archivers executable 2023-01-30 21:36:47 -08:00
6a650514fa Rename new settings talker methods. Move parse_settings for talkers to earlier and only pass the talker's own settings. 2023-01-30 01:59:23 +00:00
0f10e6e848 Create simple dict of talkers with objects. Moved thresh setting back to talkers (general) as it is called outside of talker. 2023-01-26 00:52:02 +00:00
0d69ba3c49 Rename talkers_general to talkers. Moved plugin option register to own file. Due to chicken and egg, first get talker classes then create objects. 2023-01-25 19:10:58 +00:00
d0e3b487eb Mark label for external links. attrib str to be complete. 2023-01-22 17:16:33 +00:00
c80627575a Add docstrings to Archiver 2023-01-21 15:24:27 -08:00
92eb79df71 Fix console_scripts entry point 2023-01-21 00:27:39 -08:00
ad48ad757c Fix plugin order 2023-01-20 19:32:32 -08:00
2de241cdd5 Fix typing 2023-01-20 19:32:06 -08:00
5d66815765 Add attrib string for source. Add logo and URL to issues window. 2023-01-20 00:29:02 +00:00
100e0f2101 Load plugins in init. 2023-01-15 17:38:50 +00:00
55e3b7c7e0 Use name for URL display. Window sizes. 2023-01-13 21:27:40 +00:00
f6698f7f0a Call load_archive_plugins in ComicArchive __init__ 2023-01-12 17:00:11 -08:00
50614d52fc Update PyInstaller hook 2023-01-12 15:47:34 -08:00
712986ee69 Turn comicapi.archivers.* into plugins 2023-01-12 14:45:49 -08:00
2f7e3921ef Separate archivers into their own packages 2023-01-12 14:45:17 -08:00
80f42fdc3f Move log header to execute immediately after the log is configured 2023-01-12 14:43:12 -08:00
725b2c66d3 Use imageWidget for source logo and URL. 2023-01-12 16:58:50 +00:00
5394b9f667 Fix tests. Probably not the correct way to do this? 2023-01-12 15:10:39 +00:00
fad103a7ad Use setting option for talker selection 2023-01-07 00:29:12 +00:00
87cd106b28 Add source logo and URL to series window 2023-01-04 23:51:39 +00:00
2d8c47edca Use new settings system for plugin 2023-01-02 01:04:15 +00:00
0ac5b59a1e Merge branch 'mizaki-rename_namespace_fix' into develop 2022-12-31 20:49:45 -08:00
7c735b3555 Fix rename namespace 2023-01-01 02:07:42 +00:00
9d8cf41cd3 Fix try block parsing credits in ComicCacher 2022-12-31 12:36:32 -08:00
ee3a06db46 Merge branch 'crop-border' into develop 2022-12-31 12:35:29 -08:00
7df2e3fdc0 Automatically crop black borders from covers 2022-12-31 11:52:23 -08:00
20e7de5b5f Fix reference to the user cache directory 2022-12-31 02:26:44 -08:00
f83f72fa12 Improve issue number handling regarding the '#' 2022-12-31 02:15:17 -08:00
fb4786159d Handle issue numbers with more than 3 digits 2022-12-30 21:50:10 -08:00
734b83cade Switch comictalker TypedDicts to dataclasses 2022-12-23 01:58:10 -08:00
746c98ad1c Add temp to .gitignore 2022-12-23 00:09:46 -08:00
9f00af4bba Change issue id and series id to strings 2022-12-23 00:09:19 -08:00
92fa4a874b Improve typing in ComicVineTalker 2022-12-22 10:47:37 -08:00
a33b00d77e Update ComicTalker documentation 2022-12-22 10:47:35 -08:00
a7f6349aa4 Merge branch 'volume-to-series' into develop 2022-12-22 10:45:58 -08:00
d4b4544b2f Replace most instances of volume in ComicVineTalker with series
All remaining uses of the word volume are used directly by the api and
are documented as referring to the series
2022-12-22 10:30:48 -08:00
521d5634f3 Fix tests 2022-12-22 10:16:32 -08:00
1d9840913a Change all references of volume to series 2022-12-22 10:16:05 -08:00
53a0b23230 Collapse formatting 2022-12-15 20:21:53 -08:00
9004ee1a6b Merge branch 'settings' into develop 2022-12-15 20:17:50 -08:00
440479da8c Update to settngs 0.3.0
Use the namespace instead of a dictionary
Cleanup setting names
2022-12-15 20:10:35 -08:00
e5c3692bb9 Fail if an error occurs when loading settings 2022-12-15 18:58:53 -08:00
103379e548 Split settings out into a separate package 2022-12-14 23:16:54 -08:00
eca421e0f2 Split out settings functions 2022-12-13 08:50:38 -08:00
18566a0592 Fix setting cmdline arguments 2022-12-13 08:50:08 -08:00
48c6372cf4 Fix --no-overwrite 2022-12-10 18:35:41 -08:00
f3917c6e4d Add comments to tests 2022-12-10 18:05:27 -08:00
9bb5225301 Restrict pillow version to <10 until PyQt6 is supported 2022-12-06 17:06:13 -08:00
e9cef87154 Move test cases to the testing package
Add comments to tests
2022-12-06 17:00:21 -08:00
da01dde2b9 Fix color space on CMYK images 2022-12-06 08:38:24 -08:00
53445759f7 Add tests 2022-12-06 00:22:51 -08:00
9aff3ae38e Generalize settings
Add comments and docstrings
Create parent directories when saving
Add merging to normalize_options
Change get_option to return if the value is the default value
2022-12-06 00:22:49 -08:00
0302511f5f Settings tests 2022-12-06 00:22:48 -08:00
028949f216 Make logs use the .log extension 2022-12-06 00:22:46 -08:00
af0d7b878b Set logging level on comictalker 2022-12-06 00:22:44 -08:00
460a5bc4f4 Cleanup 2022-12-06 00:20:29 -08:00
3f6f8540c4 Fix wait_and_retry_on_rate_limit 2022-12-06 00:20:27 -08:00
17d865b72f Refactor cli.py into a class 2022-12-06 00:20:26 -08:00
da21dc110d Update help 2022-12-06 00:20:24 -08:00
3870cd0f53 Update help for --config 2022-12-06 00:20:23 -08:00
ed1df400d8 Add replacement settings 2022-12-06 00:20:21 -08:00
82d737407f Simplify --only-set-cv-key 2022-12-06 00:20:20 -08:00
d0719e7201 Fix log dir 2022-12-06 00:20:18 -08:00
19112ac79b Update Settings 2022-12-06 00:20:01 -08:00
a64d753d77 Fix package selection 2022-12-01 19:54:55 -08:00
970752435c Merge branch 'mizaki-fixii_keys' into develop 2022-11-29 15:15:42 -08:00
b1436ee76e Merge branch 'resize-volume-columns' into develop 2022-11-29 14:28:32 -08:00
8eba44cce4 Increase default size of VolumeSelectionWindow 2022-11-29 14:28:08 -08:00
5fc5a14bd9 Wider catch of series and issue_number being empty 2022-11-29 16:59:05 +00:00
10f36e9868 Allow searching without a comic archive selected 2022-11-28 21:44:01 -08:00
aab7e37bb2 Use contentsRect().width() instead of width 2022-11-28 20:55:50 -08:00
2860093b6f Set the minimum row height to the default on VolumeSelectionWindow 2022-11-28 20:54:24 -08:00
ad7b270650 Automatically resize the row height on the VolumeSelectionWindow 2022-11-28 15:34:15 -08:00
70dcb9768a Better resize columns in the VolumeSelectionWindow 2022-11-28 15:28:47 -08:00
873d976662 keys may be None if there is no comic archive. IssueString.as_string will convert None to empty string so use None comparison before. 2022-11-28 00:56:19 +00:00
fc4eb4f002 Cleanup
Move most options passed in to ComicVineTalker to ComicTalker
Give ComicCacher and ComicTalker a version argument to remove all
  references to comictaggerlib
Update default arguments to reflect what is required to use these classes
2022-11-25 19:22:01 -08:00
129e19ac9d Remove cast from taggerwindow.py 2022-11-25 19:22:00 -08:00
0dede72692 Re-add --only-set-cv-key feature 2022-11-25 19:21:58 -08:00
83ac9f91b5 Make errors loading the ComicVineTalker object explicit 2022-11-25 19:21:57 -08:00
858bc303d8 Stop setting the notes field in map_comic_issue_to_metadata 2022-11-25 19:21:55 -08:00
005d7b72f4 Fix tests 2022-11-25 19:21:54 -08:00
91b863fcb1 Merge branch 'mizaki-infosources' into dev 2022-11-25 19:21:25 -08:00
e5f6a7d1d6 Add warning about settings 2022-11-25 17:09:22 -08:00
e7f937ecd2 Enable version checking 2022-11-25 17:08:26 -08:00
d75f39fe93 Remove logos dir 2022-11-24 23:58:24 +00:00
12d9befc25 Remove unneeded code from fetch_issue_data. 2022-11-24 23:56:12 +00:00
3e8ee864b7 Remove setting options and logo_url. 2022-11-24 23:35:35 +00:00
134c4a60e9 Add some docstrings. 2022-11-24 23:26:48 +00:00
3f9e5457f6 Fix make clean 2022-11-24 09:41:51 -08:00
cc2ef8593c Update pre-commit 2022-11-24 01:25:24 -08:00
c5a5fc8bdb Fix issue with combine_notes 2022-11-24 01:24:15 -08:00
1cbed64299 Fix an issue with normalizing the platform in filerenamer.py 2022-11-23 12:36:19 -08:00
c608ff80a1 Improve typing 2022-11-22 17:11:56 -08:00
52cc692b58 Remove some TODOs. 2022-11-23 00:22:48 +00:00
31894a66ec Remove repair_urls function, taken care of in format results functions. 2022-11-19 21:59:10 +00:00
aa11a47164 HTML table patch 2022-11-18 23:22:39 +00:00
093d20a52b Remove all the cool settings changes. 2022-11-18 23:18:41 +00:00
38c3014222 Use strip().splitlines() in cacher to prevent [''] return. Some clean up. 2022-11-17 15:55:38 +00:00
df87f81698 Remove volume only functions used for testing. 2022-11-13 23:25:08 +00:00
cf12e891b0 Fix CV API test. Fix sending last source details in settings for API test and website link. 2022-11-12 23:13:53 +00:00
76fb565d4e Merge branch 'mizaki-iiemptyurl' into develop 2022-11-11 17:09:45 -08:00
06ffd9f6be Add logo/text button to source tab that links to webpage. 2022-11-12 01:09:17 +00:00
dfef425af3 Better handle missing talkers and default to comic vine. 2022-11-10 17:03:39 +00:00
880b1be401 Return zero score if there is no image url. Fixes #392 2022-11-10 16:15:27 +00:00
04ad588a58 Use source name in tag notes. 2022-11-08 16:33:46 +00:00
6b4abcf061 Update current talker object with new settings. 2022-11-08 16:32:37 +00:00
629b28f258 Small fixes after merge. 2022-11-07 02:03:36 +00:00
c34902449f Merge branch 'develop' into infosources
# Conflicts:
#	comictaggerlib/cli.py
#	comictaggerlib/comicvinetalker.py
#	comictaggerlib/taggerwindow.py
#	tests/comicvinetalker_test.py
#	tests/conftest.py
2022-11-07 01:50:47 +00:00
63e6174cf2 Not all fields are required in ComicVolume and ComicIssue but cacher would fail if any optional field were missing. 2022-11-07 01:38:19 +00:00
9da14e0f95 Fix source switching. Use start year if cover date is missing. 2022-11-07 01:19:03 +00:00
c469fdb25e Make 7zip support optional 2022-11-06 08:27:45 -08:00
67be086638 Move map comic data to utils along with remove html. Alter tests. 2022-11-05 16:49:59 +00:00
a724fd8430 Compensate for a split empty string returning ['']. I don't see a way around this? 2022-11-05 01:21:51 +00:00
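The quirk referred to in the commit above is that splitting an empty string still yields a one-element list. A tiny illustration of the behaviour and a common guard (illustrative only; the strip().splitlines() approach is the one mentioned in the cacher commit above):

# "".split(",") returns [''] rather than [], so callers have to guard for it.
print("".split(","))            # ['']
print("a,b".split(","))         # ['a', 'b']


def split_nonempty(value: str, sep: str = ",") -> list[str]:
    # Only split when the string is non-empty.
    return value.split(sep) if value else []


print(split_nonempty(""))       # []

# For multi-line cache fields, strip().splitlines() avoids the same problem.
print("".strip().splitlines())  # []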
685ce014b6 Fix tests for comicvinetalker 2022-11-04 16:27:30 -07:00
62bf1d3808 Update macOS packaging 2022-11-04 16:16:19 -07:00
d55d75cd79 Append notes instead of overwriting them
Add issue_id to GenericMetadata
2022-11-04 15:39:40 -07:00
19e5f10a7b Revert "Revert passing only issue id to fetch_comic_data. Instead send issue id, volume id and issue number. This is because MU will not have the issue number from the API call. Now, if it has been parsed from the file name it will be available for use by the MU talker."
This reverts commit e5e9617052.
2022-11-04 16:16:07 +00:00
e5e9617052 Revert passing only issue id to fetch_comic_data. Instead send issue id, volume id and issue number. This is because MU will not have the issue number from the API call. Now, if it has been parsed from the file name it will be available for use by the MU talker. 2022-11-04 00:52:22 +00:00
b4f6820f56 remove_fetch_alternate_cover_urls.patch 2022-11-03 23:32:35 +00:00
b07aa03c5f Use xlate for all int conversion in CV talker and compare cache issues to expected number. 2022-11-03 22:35:46 +00:00
2f54b1b36b A few minor logging tweaks. 2022-11-03 15:39:13 +00:00
70293a0819 Require PyInstaller >= 5.6.2 2022-11-01 13:51:10 -07:00
8592fdee74 Revert "Install PyInstaller from git until >5.6.1 is available"
This reverts commit 79137a12f8.
2022-11-01 13:49:52 -07:00
075faaea5a Removed TODO's checked and/or fixed. 2022-11-01 16:13:46 +00:00
870dc5e9b6 Move issue_id to first position of fetch_comic_data as most used. 2022-10-30 17:52:55 +00:00
86402af8b1 Merge branch 'develop' into infosources
# Conflicts:
#	comictaggerlib/comicvinetalker.py
2022-10-30 11:39:01 +00:00
d7976cf9d2 Hack tests. 2022-10-30 11:16:03 +00:00
b67765d9aa Merge to develop. 2022-10-30 11:07:53 +00:00
618e15600f Fix retrieving issues from cache when volume is incomplete 2022-10-29 19:21:11 -07:00
8cac2c255f Merge branch 'develop' into infosources
# Conflicts:
#	comictaggerlib/comicvinetalker.py
#	comictaggerlib/coverimagewidget.py
#	comictaggerlib/main.py
#	comictaggerlib/pagebrowser.py
#	comictaggerlib/pagelisteditor.py
#	comictaggerlib/settings.py
#	comictaggerlib/settingswindow.py
2022-10-30 01:31:58 +01:00
4f42fef4fc Return issue id from series search and use issue id for API. 2022-10-30 00:15:05 +01:00
73dd33dc64 Fix tags in GitHub Actions checkout 2022-10-29 13:09:13 -07:00
3774ab0568 Force install PyInstaller from git until >5.6.1 is available 2022-10-29 11:04:46 -07:00
f8807675d6 Cache issue info 2022-10-29 11:02:21 -07:00
79137a12f8 Install PyInstaller from git until >5.6.1 is available 2022-10-29 10:10:37 -07:00
d33d274725 Fix fetching alternate cover urls (fixes #372) 2022-10-29 10:10:35 -07:00
26851475ea Clean up loading cover images. Probably more to do. 2022-10-29 16:41:34 +01:00
a06d88efc0 Fix up full issue cache types. 2022-10-29 01:33:42 +01:00
dcf853515c Tidy CV logger errors. 2022-10-28 22:32:33 +01:00
bf06b94284 Enable cache for full issue information. 2022-10-28 22:15:14 +01:00
561dc28044 Don't proxy talker (really this time). Remove talker custom logging. Move static_options and settings_options to root of class object. Temp hack to keep talker menu generation working until settings revamp. 2022-10-27 23:36:57 +01:00
43ec4848ef Update pre-commit 2022-10-25 21:49:47 -07:00
aad83c8c03 Update PyInstaller usage
Switch to rapidfuzz from thefuzz
Add associations to macOS app bundle
2022-10-25 21:48:01 -07:00
4514ae80d0 Switch to API data for alt images, remove unneeded functions and removed async as new approach needed. See comments about fetch_partial_volume_data 2022-10-26 00:29:30 +01:00
cab69a32be Remove proxying from ComicTalker. Add some checks for talkers. 2022-10-25 00:37:18 +01:00
c5ad75370f Work around having to scrape alt covers from CV. Use cache to get issue page url for scrape. 2022-10-24 16:30:58 +01:00
d23258f359 Change ComicVolume, ComicIssue to image_url and image_thumb_url. Add/change search/volume DB layout to remove duplication of data. Fixup some test. 2022-10-23 22:40:15 +01:00
c9cd58fecb Remove fetch_issue_cover_urls and async_fetch_issue_cover_urls. Reduce API calls by using data already available with coverimagewidget. 2022-10-22 01:43:56 +01:00
58904a927f Set release name properly 2022-10-19 19:27:30 -07:00
fb1616aaa1 Remove CV parse date. Strings names. 2022-10-20 00:32:40 +01:00
4be12d857d Reuse CV test data in comic_issue_result data. Cover possible empty volume data in get_volume_issues_info. 2022-10-19 23:30:11 +01:00
e1ab72ec2a Rename super_url to image_url in comiccacher. Merge fetch_issue_data_by_issue_id into fetch_comic_data. Fill comic volume info in comiccacher:get_volume_issues_info 2022-10-19 19:33:51 +01:00
8a8dea8aa4 Fix autotagstartwindow.ui missed from merge. 2022-10-15 23:36:52 +01:00
43464724bd Convert all start_year to int. 2022-10-15 23:20:50 +01:00
34163fe9d7 Update the comicvine_api fixture in conftest.py to actually return the comicvinetalker. 2022-10-15 02:02:10 +01:00
9aa29f1445 Merge fetch_issue_data and fetch_volume_data to fetch_comic_data. 2022-10-14 01:10:46 +01:00
3ea44b7ca7 Remove fetch_issue_page_url from comictalker etc. 2022-10-12 23:08:47 +01:00
c1c8f4eb6e black 2022-10-12 00:11:57 +01:00
a14c24a78a Fix for issueidentifier_test 2022-10-11 16:52:41 +01:00
18d861a2be More test fixes that may need to be looked at further. 2022-10-09 23:43:52 +01:00
ac15a4dd72 More test fixes. 2022-10-06 01:14:03 +01:00
6a98afb89c After second merge. 2022-10-06 00:34:32 +01:00
21873d3830 Merge branch 'develop' into infosources
# Conflicts:
#	comictaggerlib/autotagstartwindow.py
#	comictaggerlib/cli.py
#	comictalker/talkers/comicvine.py
2022-10-05 01:58:46 +01:00
2daf9b3ed8 Style and typo fixes 2022-10-04 16:15:55 -07:00
a6d55cd21a Update MetadataFormatter
Several custom conversions (the s in {title!s}) have been created
u - str.upper()
l - str.casefold()
S - str.swapcase()
t - str.title()
c - str.capitalize()

A new syntax has been added '{title+str}' and '{title-str}':
The + indicates an alternate value.
The - indicates a default value.

If the title of a comic is not set then
'{title-str}' will output 'str'
and
'{title+str}' will output ''

If the title of a comic is 'hello' then
'{title+str}' will output 'str'
and
'{title-str}' will output 'hello'
2022-10-04 16:15:20 -07:00
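A minimal sketch of how such custom conversions can sit on top of string.Formatter (illustrative only; the real MetadataFormatter in comictaggerlib also implements the +/- alternate/default syntax described above, which is omitted here):

import string


class CaseFormatter(string.Formatter):
    def convert_field(self, value, conversion):
        # Map the single-character conversions listed in the commit to str methods.
        if conversion == "u":
            return str(value).upper()
        if conversion == "l":
            return str(value).casefold()
        if conversion == "S":
            return str(value).swapcase()
        if conversion == "t":
            return str(value).title()
        if conversion == "c":
            return str(value).capitalize()
        return super().convert_field(value, conversion)


fmt = CaseFormatter()
print(fmt.format("{title!u} / {title!t}", title="the killing joke"))
# -> THE KILLING JOKE / The Killing Joke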
d37e4607ee After merge. Testing files still to update. 2022-10-04 23:50:55 +01:00
00e95178cd Initial support for multiple comic information sources 2022-10-04 01:08:14 +01:00
4034123e6d Fix rar tests again 2022-10-02 21:47:07 -07:00
5587bfac31 Fix rar tests 2022-10-02 21:13:26 -07:00
4b6d35fd3a Fix CBL tagging 2022-10-02 19:33:12 -07:00
3cf75cf2ec Update importlib_matadata usage and requirements 2022-09-19 22:54:48 -07:00
30dbe758d4 Fix windows tests 2022-09-19 22:52:45 -07:00
55384790f8 Forcefully raise an OSError on windows 2022-09-17 01:59:15 -07:00
acaf5ed510 Fix issues with renaming
Stop a crash when renaming
Properly handle replacements on linux/macos
2022-09-17 01:28:26 -07:00
d213db3129 Use correct syntax for pips --no-binary flag 2022-09-15 22:09:04 -07:00
6a717377df Automatically set release name from tag message 2022-09-10 22:35:30 -07:00
904561fb8e Merge branch 'pyicu' into develop 2022-09-10 21:48:04 -07:00
be6b71dec7 Put unix specific commands in OS specific blocks 2022-09-10 21:11:48 -07:00
63b654a173 Update ci to install pyicu 2022-09-10 19:51:26 -07:00
bc25acde9f Fix sorting
Switch natsort to use os_sorted
Remove directories when returning a list of files in a comic
Update tests to account for '!cover.jpg'
2022-09-10 19:48:50 -07:00
03677ce4b8 Fix renaming
Make ComicArchive.path always absolute
Fix unique_file not preserving the extension
Fix incorrect output when renaming in CLI mode
Fix handling of platform when renaming
2022-08-19 20:20:37 -07:00
535afcb4c6 Fix replacements 2022-08-19 19:59:58 -07:00
06255f7848 Perform replacements on literal text and format values 2022-08-18 13:48:23 -07:00
00e649bb4c Move colon handling when renaming to the MetadataFormatter class
Fixes #356
2022-08-17 16:16:38 -07:00
078f569ec6 Fix codeblock in README.md 2022-08-14 10:51:08 -07:00
315cf7d920 Merge pull request #355 from Xav83/patch-1
Adds the Chocolatey package as a way to install ComicTagger
2022-08-14 10:47:24 -07:00
e9cc6a16a8 Note that @Xav83 is the maintainer of the chocolatey package
Co-authored-by: Xavier Jouvenot <x.jouvenot@gmail.com>
2022-08-14 10:45:51 -07:00
26eb6985fe Adds the Chocolatey package as a way to install ComicTagger
Adds the Chocolatey package in the list of possibilities to install ComicTagger
2022-08-13 11:52:09 +02:00
be983c61bc Fix #353
The two primary cases fixed are:
Ms. Marvel
spider-man/deadpool

The first issue removed 'Ms.', which is a problem as many comics have
series where the only difference in the title is the
designation/honorific.

The second issue is that the '/' was removed and not replaced with
anything, causing a search for 'mandeadpool', which will not show useful
results.

Consequently all designations/honorifics are now untouched
All punctuation is replaced with a space
2022-08-12 07:10:36 -07:00
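A hedged sketch of the cleanup described in the commit above: punctuation becomes a space instead of being deleted, so the honorific word is no longer dropped and slash-separated names stay searchable. The regex and function name are illustrative, not the project's code.

import re


def clean_search_string(series: str) -> str:
    # Replace punctuation with a single space rather than removing it, so
    # "spider-man/deadpool" becomes "spider man deadpool" instead of "mandeadpool",
    # and the "Ms" in "Ms. Marvel" is kept rather than stripped away.
    cleaned = re.sub(r"[^\w]+", " ", series)
    return re.sub(r"\s+", " ", cleaned).strip()


print(clean_search_string("spider-man/deadpool"))  # spider man deadpool
print(clean_search_string("Ms. Marvel"))           # Ms Marvel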
77a53a6834 Update dependencies
Includes changes from pyupgrade
2022-08-10 20:55:46 -07:00
860a3147d2 Construct URL correctly 2022-08-10 16:33:40 -07:00
8ecb87fa26 Install all optional dependencies in CI 2022-08-08 19:10:57 -07:00
f17f560705 Fix tests on windows
Make the speedup dependency of thefuzz optional as it requires a C compiler
2022-08-08 19:03:25 -07:00
aadeb07c49 Fix issues
Refactor add_to_path with tests
Fix type hints for titles_match
Use casefold in get_language
Fix using the recursive flag in cli mode
Add http status code to ComicVine exceptions
Fix parenthesis getting removed when renaming
Add more tests
2022-08-08 18:05:06 -07:00
e07fe9e8d1 Construct URLs more consistently 2022-07-29 22:05:22 -07:00
f2a68d6c8b Fix rename and add test 2022-07-29 22:05:03 -07:00
94be266e17 Handle the 'primary' key missing in get_primary_credit
Fixes #342
Add better exception handling for the formatter
2022-07-27 23:24:34 -07:00
5a19eaf9a0 Fix serializing of sets 2022-07-25 11:22:44 -07:00
28cbbbece7 Fix #334 2022-07-23 10:05:04 -07:00
40314367c9 Improve formatting and consistency 2022-07-18 12:17:13 -07:00
6e7660c3d9 Tests
Add tests for IssueIdentifier
Change tags to a set from a string
Add copy and replace convenience functions on GenericMetadata
Update deprecated resampling code for Pillow
Change comicvine test data to be the same as the test comic
Cleanup tests
2022-07-18 12:06:49 -07:00
99030fae6b Merge branch 'unicode_search' into develop 2022-07-13 23:16:59 -07:00
947dc81c74 use thefuzz
2022-07-13 23:11:17 -07:00
c0880c9afe Account for aliases field from CV 2022-07-13 23:11:14 -07:00
e6414fba96 Allow non-ascii in ComicVine searches 2022-07-13 22:45:45 -07:00
a00891f622 Add more tests 2022-07-13 22:27:31 -07:00
9ba8b2876c Ensure homebrew is in the path if it exists 2022-07-12 09:28:51 -07:00
46d3e99d48 Fix tests 2022-07-12 07:43:33 -07:00
d206f5f581 Fixing source_name position 2022-07-12 07:31:42 -07:00
ec83667d77 Adding source_name to add_issue_select_details. 2022-07-12 07:31:42 -07:00
0bbf417133 Tests
Add tests for ComicCacher and ComicVineTalker
Move fixtures to conftest.py
Move test data to testing module
2022-07-11 18:40:12 -07:00
a3e1153283 Improve rar executable handling
Show a message when a CBR/RAR archive is added and rar is not available
Ensure that an empty value for the rar executable becomes 'rar'
2022-07-10 15:21:15 -07:00
ccb461ae76 Improve rename
Implement rename on ComicArchive
Simplify unique_file with pathlib
Fix issues during renaming and simplify with pathlib
Allow exporting as zip to export 7-zip archives
2022-07-09 23:13:18 -07:00
d24b51f94e Apply black formatting and fix mypy issues 2022-07-09 22:56:52 -07:00
def2635ac2 Ignore aspect ratio on background image
Fixes #327
2022-07-07 16:10:12 -07:00
b72fcaa9a9 Add source field to cache DB.
Add source to cache db.

Rename comicvinecacher to comiccacher and update refs.

Fix comment spacing.

Move source_name to end to reduce changes.

Move source_name to end to reduce changes. Fixed.

Fix syntax.

Fix various issues with DB changes.

Move new source_name to bottom.

Remove source_name from CV_.

Revert id to volume_id
2022-07-05 11:29:10 -07:00
3ddfacd89e Fix #325
The aspect ratio mode was missed in b9af606
2022-07-04 18:03:18 -07:00
6eb5fa7ac7 Fix #324
Co-authored-by: Mizaki <jinxybob@hotmail.com>
2022-07-04 15:53:44 -07:00
68efcc74fb Updates
Use casefold in place of lower
Make lint job fail if errors are detected
Use join instead of utils.list_to_string
Simplify get_recursive_filelist with the glob library
Fix handling of un-parseable numbers in xlate
2022-07-01 16:22:01 -07:00
3d84af3746 Convert GenericMetadata to a dataclass
dataclasses allow for simple comparison and object creation

Add more tests
2022-07-01 16:15:43 -07:00
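Dataclasses give equality comparison and convenient copy/replace for free, which is the benefit the commit above relies on. A minimal sketch (field names illustrative; the real GenericMetadata has many more fields):

import dataclasses
from typing import Optional


@dataclasses.dataclass
class GenericMetadataSketch:
    series: Optional[str] = None
    issue: Optional[str] = None


a = GenericMetadataSketch(series="52", issue="1")
b = dataclasses.replace(a, issue="2")  # copy-and-replace convenience
print(a == GenericMetadataSketch(series="52", issue="1"))  # True - simple comparison
print(b)  # GenericMetadataSketch(series='52', issue='2')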
cb5b321539 Update filerenamer
Remove space separated right partition of previous literal text
2022-06-26 01:53:40 -07:00
20ec8c38c2 Fixes
Add importlib_metadata to requirements.txt
Add comments stating origin of new parser
2022-06-23 22:59:09 -07:00
8bdf91ab96 Merge branch 'rating' into develop 2022-06-23 18:13:34 -07:00
fbbd36ab4d make tests and testing proper modules 2022-06-23 13:27:36 -07:00
95643fdace Fix community rating
The user rating control is replaced with critical rating which is now
represented as a float.
utils.xlate has been updated to have an is_float parameter
Metadata is reloaded on save so that changes can be seen
e.g. for CBL tags the critical rating field only stores integers
2022-06-23 13:18:42 -07:00
6c65c2ad56 Make importlib usage compatible with python 3.9 2022-06-23 13:05:27 -07:00
292a69a204 Allow pushes to run CI again 2022-06-10 16:32:21 -07:00
5c6e7d6f3e Allow multiple types to be specified using -t fixes #24 2022-06-10 16:20:58 -07:00
7e033857ba Replace pkg_resources with importlib.metadata 2022-06-10 16:18:58 -07:00
d9c02b0115 Allow changing the ComicVine URL fixes #104 2022-06-10 15:23:58 -07:00
b9af606f87 Improve filename parsing and cover image scaling
Cover image scaling now uses the smooth transformation option in Qt
Filename parsing now identifies a single number as a filename
e.g. '52.cbz' gets parsed as issue: 52 and series: 52
2022-06-09 12:31:57 -07:00
d3c29ae40a Ignore tags on the CI workflow 2022-06-08 09:06:46 -07:00
ff73cbf2f9 Fix small issues
Fix spelling errors
Remove Redundant exception types
Remove dead code
Change the forum link to point to GitHub discussions
2022-06-07 20:22:33 -07:00
3369a24343 Update GitHub Actions
Separate release/packaging and CI
Add an ignore for flake8 on ctversion.py as it is generated
Cleanup unused portions of the makefile
Use 'build' to generate PyPi distribution
Python venv on windows uses the Scripts directory
2022-06-07 19:39:01 -07:00
ce693b55f1 Fix file write semantics for Windows 2022-06-07 12:53:27 -07:00
db37ec7204 Add a literal search option 2022-06-07 12:16:23 -07:00
470b5c0a17 Fix adding files to GUI via running ComicTagger with more filenames
Add flake8-print to ensure all logging uses the logging package
2022-06-06 20:04:51 -07:00
04409a55c7 Handle more exceptions
Handle exceptions during metadata save fixes #309
Handle exceptions during metadata read fixes #126 and #309
2022-06-06 20:04:51 -07:00
bb7fbb4e38 Add pre-commit.ci config 2022-06-06 20:04:34 -07:00
5bb48cf816 fix rar test 2022-06-06 20:04:34 -07:00
b5e6e41043 Add a log window to see the current log 2022-06-06 20:04:34 -07:00
62d927a104 Fix #308
Add null check when loading community_rating
Use iterators instead of while loops
2022-06-05 15:23:20 -07:00
4c9fa4f716 Update template help and default template 2022-06-02 18:32:41 -07:00
e8fa51ad45 Ensure comicapi is as consistent as possible 2022-06-02 18:32:33 -07:00
fd4c453854 Apply pre-commit configuration 2022-06-02 18:32:16 -07:00
c19ed49e05 Move to argparse for argument parsing 2022-06-02 18:28:54 -07:00
36adf91744 Merge branch 'MichaelFitzurka-feature/301-double-page-modified' into develop 2022-05-24 11:45:08 -07:00
8b73a87360 Merge branch 'cleanup' into develop 2022-05-24 11:44:54 -07:00
8c591a8a3b Remove unused imports 2022-05-24 11:44:26 -07:00
c5772c75e5 Cleanup setCheckState
Fix word splitting when auto-tagging
Remove commented code
2022-05-24 11:38:10 -07:00
ff02d25eea Merge branch 'tests' into develop 2022-05-24 11:30:38 -07:00
98a7ee35ee Add tests 2022-05-24 11:30:25 -07:00
59d48619b1 Merge branch 'volume' into develop 2022-05-24 11:30:15 -07:00
10056c4229 Improve volume handling
Include changes by @gramster from #120
During filename parsing set the issue to the volume if there is no issue
2022-05-24 11:27:24 -07:00
7e772abda7 Toggled to Clicked 2022-05-24 10:25:44 -04:00
09ea531a90 Fixing double page always flagging as modified 2022-05-23 09:46:46 -04:00
710d9bf6a5 Fix packaging issues
Add wordninja datafile to pyinstaller
Add publishers.json to the correct package
2022-05-20 00:19:33 -07:00
bb81f921ff Fix Qt typing references to strings 2022-05-19 22:29:46 -07:00
1468b1932f Fix crash on startup
Add publishers.json to pip package
Add exception handling to prevent crash
2022-05-19 20:13:59 -07:00
74d95b6a50 Add typing_extensions 2022-05-19 18:17:22 -07:00
d33fb6ef31 Fix build errors
Add wordninja to requirements.txt
Fix typing to allow unrar-cffi to be optional
2022-05-19 18:08:05 -07:00
4201558483 Merge branch 'wordSplit' into develop 2022-05-19 17:58:45 -07:00
983b3d08f6 Merge branch 'clearMetadata' into develop 2022-05-19 13:39:41 -07:00
eec715551a Allow overwriting existing metadata 2022-05-19 13:28:36 -07:00
d3f552173e Merge branch 'AutoImprint' into develop 2022-05-19 13:28:18 -07:00
3e3dcb03f9 Typed 2022-05-19 13:19:19 -07:00
44b0e70399 Merge branch 'fixComicremoval' into develop 2022-05-16 15:23:15 -07:00
38aedac101 Ensure that comics are properly removed when using remove_archive_list 2022-05-16 15:21:59 -07:00
9a9d97f3bb Fix #291
ComicTagger now accounts for any single unicode numeric value
2022-05-14 01:59:44 -07:00
a4cb8b51a6 Restore test cbz
Add test to ensure that metadata is read correctly
Add tests for IssueString
2022-05-14 01:59:39 -07:00
1bbdebff42 Merge branch 'filenameParser' into develop 2022-05-06 00:33:36 -07:00
783c4e1c5b Merge branch 'uiCleanup' into develop 2022-05-06 00:33:30 -07:00
eb5360a38b Merge branch 'renameFix' into develop 2022-05-06 00:33:24 -07:00
205d337751 Add new filename parser
I created a new, mostly overcomplicated, filename parser.
The new parser works well in many cases and will collect more data than
the original parser, but will sometimes give odd results because of how
complicated it has been made, e.g.
'100 page giant' will cause issues however '100-page giant' will not

Remove the parse scan info setting as it was not respected in many cases
2022-05-06 00:30:33 -07:00
d469ee82d8 Cleanup ui files
Qt Designer has new defaults since these were originally generated
2022-05-04 00:06:32 -07:00
c464283962 Merge branch 'removeIndent' into develop 2022-04-30 00:01:53 -07:00
48467b14b5 Remove utils.indent, python 3.9 provides a similar function 2022-04-30 00:01:00 -07:00
70df9d0682 Update filerenamer
Fixes an out of range exception during smart cleanup
Enforces field names to be present in format templates
Instead of removing previous text if a replacement is empty, only strip
specifically "-_({[#" off the right of the string
2022-04-29 23:45:28 -07:00
049971a78a Merge branch 'removeRenamer' into develop 2022-04-29 23:29:24 -07:00
052e95e53b Remove old file renamer
Use PureWindowsPath objects in templates and tests, this allows both
path separators to be used and compared regardless of platform
2022-04-29 23:27:58 -07:00
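PureWindowsPath is what makes both path separators compare equal regardless of platform, which is the general pathlib behaviour the commit above relies on (example paths are illustrative):

from pathlib import PureWindowsPath

a = PureWindowsPath("Comics/DC/Batman #404 (1987).cbz")
b = PureWindowsPath(r"Comics\DC\Batman #404 (1987).cbz")
print(a == b)        # True - "/" and "\" are treated as the same separator
print(a.as_posix())  # Comics/DC/Batman #404 (1987).cbz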
fa0c193730 Merge branch 'MichaelFitzurka-feature-258/community-rating' into develop 2022-04-29 23:22:58 -07:00
a98eb2f81b Merge branch 'buildFix' into develop 2022-04-29 23:14:46 -07:00
ae4de0b3e6 Update build settings
Update excluded folders for flake8
Ensure pip install -e is used in both cases to install ComicTagger
Set required python version to 3.9
2022-04-29 23:06:57 -07:00
84b762877f Changes as per comments 2022-04-27 10:15:53 -04:00
2bb7aaeddf Merge branch 'MichaelFitzurka-feature-278/remove-empty-tags' into develop 2022-04-26 04:25:51 -07:00
08434a703e Remove empty versus clearing. 2022-04-22 09:48:47 -04:00
552a319298 Adding CommunityRating. fitxes #258 2022-04-22 09:39:32 -04:00
b9e72bf7a1 Merge branch 'cleanup' into develop 2022-04-20 13:15:44 -07:00
135544c0db Code cleanup 2022-04-20 13:13:03 -07:00
c297fd7fe7 Merge branch 'removeEnum' into develop 2022-04-20 11:44:42 -07:00
168f24b139 Partial revert of 'e616aa8373688fe0ee7394ddad5b409653354271'
Changing PageType to an Enum creates too many issues
2022-04-20 11:41:42 -07:00
89ddea7e9b Update documentation
Add CONTRIBUTING.md
Update install instructions in README
Update Build badge in README
2022-04-19 21:55:34 -07:00
bfe005cb63 Merge branch 'fixSerialization' into develop 2022-04-19 14:55:50 -07:00
48c2e91f7e Fix pip reference 2022-04-19 14:49:14 -07:00
02f365b93f Fix Makefile
make check now uses a venv
make CI uses the environment
Fix rar test
2022-04-19 14:45:36 -07:00
d78c3e3039 Fix serialization errors
Add tests to ensure issue is fixed
Add make check
Add pytest to make CI
2022-04-19 13:16:33 -07:00
f18513fd0e Fix Template help 2022-04-19 00:44:29 -07:00
caa94c4e28 Merge branch 'Renaming' into develop 2022-04-18 22:56:49 -07:00
7037877a77 Add a strict mode to file renaming
Strict renaming removes all reserved names and characters regardless
 of operating system; without strict mode only those for the current
 operating system are removed
Add more edge cases to smart cleanup
Add more tests for file renaming
2022-04-18 22:55:13 -07:00
6cccf22d54 Allow switching between old and new rename templates
Show a message dialog explaining that there is a new template format
Add a dynamic label to show the effect of a rename
Add tests for FileRenamer
Remove the filename parameter from the determine_name function
2022-04-18 20:12:20 -07:00
ceb2b2861e Merge branch 'filename_tests' into develop 2022-04-18 20:11:06 -07:00
298f50cb45 Merge branch 'configDir' into develop 2022-04-18 20:10:50 -07:00
e616aa8373 Merge branch 'CodeCleanup' into develop 2022-04-18 20:10:08 -07:00
0fe881df59 Code cleanup 2022-04-18 19:40:04 -07:00
f3f48ea958 Add the ability to specify a config directory 2022-04-18 19:08:38 -07:00
9a9d36dc65 Add more tests for parsing filenames 2022-04-18 19:06:09 -07:00
028b728d82 Improve file renaming
Moves to Python format strings for renaming, handles directory
structures, moving of files to a destination directory, sanitizes
file paths with pathvalidate and takes a different approach to
smart filename cleanup using the Python string.Formatter class

Moving to Python format strings means we can point to python
documentation for syntax and all we have to do is document the
properties and types that are attached to the GenericMetadata class.

Switching to pathvalidate allows comictagger to more simply handle both
directories and symbols in filenames.

The only changes to the string.Formatter class are:
1. format_field returns
an empty string if the value is None or an empty string, regardless of
the format specifier.
2. _vformat drops the previous literal text if the field value
is an empty string and lstrips the following literal text of closing
special characters.
2022-04-18 18:52:53 -07:00
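A minimal sketch of the first of the two Formatter tweaks listed above (format_field returning an empty string for None or empty values); the second tweak, _vformat dropping the preceding literal text, is omitted here because it depends on Formatter internals. Illustrative only; ComicTagger's renamer does more than this.

import string


class BlankNoneFormatter(string.Formatter):
    def format_field(self, value, format_spec):
        # Missing metadata never renders as "None" in a filename.
        if value is None or value == "":
            return ""
        return super().format_field(value, format_spec)


fmt = BlankNoneFormatter()
print(fmt.format("{series} #{issue} ({year})", series="Batman", issue="404", year=None))
# -> Batman #404 ()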
23f323f52d Add filename tests 2022-04-15 02:46:57 -07:00
49210e67c5 Fix rar_support variable 2022-04-14 16:25:25 -07:00
e519bf79be Merge branch 'MichaelFitzurka-feature/263-pages-keyboard' into develop 2022-04-14 16:23:51 -07:00
4f08610a28 Fix CI 2022-04-14 13:16:51 -07:00
544bdcb4e3 Using shortcuts and actions. 2022-04-14 12:22:53 -04:00
f3095144f5 Merge branch 'feature/149-add-tests' into develop 2022-04-12 15:20:58 -07:00
75f31c7cb2 Merge branch 'fileEncoding' into develop 2022-04-11 18:02:26 -07:00
f7f4e41c95 Catch exception when displaying raw tags 2022-04-11 17:16:07 -07:00
6da177471b Fix #242
Fix file encoding inconsistencies; windows defaults to cp1252, which is
not Unicode compatible.
Add logging for all exceptions in the comicapi package
Ensure that all exceptions are logged and shown to the user
2022-04-11 14:52:41 -07:00
8a74e5b02b Keyboard commands for the Pages tab to make editing easier. 2022-04-10 18:10:09 -04:00
5658f261b0 Merge branch 'MichaelFitzurka-feature/m-age-rating' into develop 2022-04-10 11:05:06 -07:00
6da3bf764e Merge branch 'feature/m-age-rating' of https://github.com/MichaelFitzurka/comictagger into MichaelFitzurka-feature/m-age-rating 2022-04-10 11:04:48 -07:00
5e06d35057 Merge branch 'feature/253-recalc-page-dims' of https://github.com/MichaelFitzurka/comictagger into MichaelFitzurka-feature/253-recalc-page-dims 2022-04-10 11:00:10 -07:00
82bcc876b3 Merge branch 'MichaelFitzurka-feature/183-comment-html-fix' into develop 2022-04-10 10:59:40 -07:00
d7a6882577 Merge branch 'feature/183-comment-html-fix' of https://github.com/MichaelFitzurka/comictagger into MichaelFitzurka-feature/183-comment-html-fix 2022-04-10 10:59:00 -07:00
5e7e1b1513 Merge branch 'MichaelFitzurka-feature/246-dbl-page' into develop 2022-04-10 10:57:46 -07:00
cd9a02c255 Merge branch 'feature/246-dbl-page' of https://github.com/MichaelFitzurka/comictagger into MichaelFitzurka-feature/246-dbl-page 2022-04-10 10:54:49 -07:00
b47f816dd5 Merge branch 'abuchanan920-develop' into develop 2022-04-10 10:50:41 -07:00
d1a649c0ba Adding "M" age rating for 2.0 schema 2022-04-06 11:49:54 -04:00
b7759506fe Menu command to clear out page size,height,width on demand, to then recalculate on save. 2022-04-05 16:23:26 -04:00
97777d61d2 Fixing some HTML to comment translations. 2022-04-05 16:16:27 -04:00
e622b56dae Adding attribs to ImageMetadata class. 2022-04-05 11:23:18 -04:00
a24251e5b4 Merge branch 'comictagger:develop' into develop 2022-04-05 10:38:36 -04:00
d4470a2015 Use more idiomatic regular expression string
Co-authored-by: Timmy Welch <timmy@narnian.us>
2022-04-05 10:37:33 -04:00
d37022b71f Merge branch 'comictagger:develop' into feature/246-dbl-page 2022-04-05 09:59:20 -04:00
5f38241bcb Double Page functionality. 2022-04-05 09:52:59 -04:00
4fb9461491 Stop a crash when the logs folder already exists 2022-04-05 00:58:19 -07:00
c9b5bd625f Fix parsing of filenames that end with an ID such as [__######__] 2022-04-04 22:34:31 -04:00
558072a330 Create the logs folder before attempting to use it 2022-04-04 19:28:38 -07:00
26fa7eeabb Merge branch 'logging' into develop 2022-04-04 19:16:54 -07:00
c50cef568e Add basic logging 2022-04-04 19:10:22 -07:00
2db80399a6 Merge branch 'MichaelFitzurka-feature/247-empty-tags' into develop 2022-04-04 14:16:29 -07:00
4936c31c18 black changed some single quotes to double quotes. 2022-04-04 16:36:46 -04:00
ada88d719f Empty metadata should not assign an empty tag. 2022-04-03 16:50:27 -04:00
1b28623fe3 Bookmark functionality. Fixes #212. 2022-04-03 15:44:20 -04:00
593f568ea7 method renamed to match new changes. 2022-04-03 15:39:03 -04:00
7b4dba35b5 Ensure that tags are overwritten when saving metadata 2022-04-02 15:41:50 -07:00
c95e700025 Merge branch 'CodeCleanup' into develop 2022-04-02 15:36:03 -07:00
e10f7dd7a7 Code cleanup
Remove no longer used google scripts
Remove convenience files from comictaggerlib and import comicapi directly
Add type-hints to facilitate auto-complete tools
Make PyQt5 code more compatible with PyQt6

Implement automatic tooling
isort and black for code formatting
Line length has been set to 120
flake8 for code standards with exceptions:
E203 - Whitespace before ':'  - format compatibility with black
E501 - Line too long          - flake8 line limit cannot be set
E722 - Do not use bare except - fixing bare except statements is a
                                lot of overhead and there are already
                                many in the codebase

These changes, along with some manual fixes, create much more readable code.
See examples below:

diff --git a/comicapi/comet.py b/comicapi/comet.py
index d1741c5..52dc195 100644
--- a/comicapi/comet.py
+++ b/comicapi/comet.py
@@ -166,7 +166,2 @@ class CoMet:

-            if credit['role'].lower() in set(self.editor_synonyms):
-                ET.SubElement(
-                    root,
-                    'editor').text = "{0}".format(
-                    credit['person'])

@@ -174,2 +169,4 @@ class CoMet:
         self.indent(root)
+            if credit["role"].lower() in set(self.editor_synonyms):
+                ET.SubElement(root, "editor").text = str(credit["person"])

diff --git a/comictaggerlib/autotagmatchwindow.py b/comictaggerlib/autotagmatchwindow.py
index 4338176..9219f01 100644
--- a/comictaggerlib/autotagmatchwindow.py
+++ b/comictaggerlib/autotagmatchwindow.py
@@ -63,4 +63,3 @@ class AutoTagMatchWindow(QtWidgets.QDialog):
             self.skipButton, QtWidgets.QDialogButtonBox.ActionRole)
-        self.buttonBox.button(QtWidgets.QDialogButtonBox.Ok).setText(
-            "Accept and Write Tags")
+        self.buttonBox.button(QtWidgets.QDialogButtonBox.StandardButton.Ok).setText("Accept and Write Tags")

diff --git a/comictaggerlib/cli.py b/comictaggerlib/cli.py
index 688907d..dbd0c2e 100644
--- a/comictaggerlib/cli.py
+++ b/comictaggerlib/cli.py
@@ -293,7 +293,3 @@ def process_file_cli(filename, opts, settings, match_results):
                 if opts.raw:
-                    print((
-                        "{0}".format(
-                            str(
-                                ca.readRawCIX(),
-                                errors='ignore'))))
+                    print(ca.read_raw_cix())
                 else:
2022-04-02 14:21:37 -07:00
84dc148cff Merge branch 'MichaelFitzurka-feature/239-add-web-btn' into develop 2022-04-02 12:57:14 -07:00
14c9609efe Merge branch 'MichaelFitzurka-feature/232-inv-page-type' into develop 2022-04-02 12:57:04 -07:00
2a3620ea21 Replacing requests validation with urlparse. 2022-04-01 09:48:53 -04:00
8c5d4869f9 Updates to comments. 2022-03-31 13:34:40 -04:00
c0aa665347 Adding web link convenience button to open a valid url value in a browser window. 2022-03-31 12:40:43 -04:00
6900368251 Displaying the invalid value with an Error indicator; that way the user can see what the invalid value is and has the option to leave it or change it. 2022-03-31 10:25:00 -04:00
ac1bdf2f9c Merge branch 'abuchanan920-develop' into develop 2022-03-29 22:29:48 -07:00
c840724c9c Merge branch 'rhaussmann-natsort_fix' into develop 2022-03-29 22:23:00 -07:00
220606a046 Merge branch 'comictagger:develop' into natsort_fix 2022-03-29 09:28:38 -06:00
223269cc2e update requirements 2022-03-29 09:23:05 -06:00
31b96fdbb9 Merge branch 'feature/179-7zip' into develop 2022-03-28 23:29:02 -07:00
908a500e7e One more. 2022-03-26 12:45:33 -04:00
ae20a2eec8 Updates as requested. 2022-03-26 12:42:33 -04:00
287c5f39c1 Merge branch 'comictagger:develop' into feature/179-7zip 2022-03-26 12:27:34 -04:00
cfd2489228 Merge branch 'feature-227-data-src-alt-covers' into develop 2022-03-21 17:52:22 -07:00
86a83021a6 Update to look for images in data-src as well as src. 2022-03-21 15:29:31 -04:00
d7595f5ca1 Merge branch 'comictagger:develop' into feature/179-7zip 2022-03-21 09:27:47 -04:00
5a2bb66d5b Merge branch 'unicodeFix' into develop 2022-03-20 10:43:02 -07:00
5de2ce65a4 Remove print statements
Fixes #223
2022-03-20 10:40:30 -07:00
95d167561d Fix locale for macOS 2022-03-20 02:10:11 -07:00
7d2702c3b6 Update pyinstaller 2022-03-20 02:09:47 -07:00
d0f96b6511 Ensure XML is UTF-8 encoded 2022-03-19 18:17:38 -07:00
ba71e61d87 Added 7zip support thru py7zr.
Tweaked save of archive file and images in comicarchive.
2022-03-18 15:14:42 -04:00
191d72554c Explicitly specify unsigned integer sort to fix comic page order 2022-03-14 13:27:03 -04:00
628251c75b Merge branch 'metadataEdit' into develop 2022-02-21 20:22:28 -08:00
71499c3d7c Merge branch 'bugFixes' into develop
Closes #65,#59,#154,#180,#187,#209
2022-02-21 20:06:44 -08:00
03b8bf4671 Bug fixes
Closes #65,#59,#154,#180,#187,#209
2022-02-21 20:05:07 -08:00
773735bf6e Merge pull request #213 from lordwelch/series_sort
Cleanup settings from #200
2022-01-22 17:29:26 -08:00
b62e291749 Cleanup settings from #200
Rename blacklist to filter to be more accurate
2022-01-22 15:00:22 -08:00
a66b5ea0e3 Series sorting filtering (#200)
Because additional series results are now returned due to #143, the series selection window can fill with a large number of results that are not usually sorted in a useful way.

I've created 3 settings that can help find the correct series quickly

use the publisher black list - can be toggled from the series selection screen, as well as a setting for its default behaviour
a setting to make the results initially sorted by start year instead of the default number of issues
a setting to initially put exact and near matches at the top of the list
2022-01-22 14:40:45 -08:00
615650f822 Update xml instead of overwrite 2022-01-05 22:01:00 -08:00
ed16199940 Merge pull request #132 from lordwelch/FixLanguageSort
Sort language correctly
2021-12-15 23:41:40 -08:00
7005bd296e Merge pull request #131 from lordwelch/PageListEditorExtendedSelection
Allow extended selection in the page list editor
2021-12-15 23:40:08 -08:00
cdeca34791 Add experimental word splitting to the filename parser
Adds a global setting as well as a setting that is only in effect
during auto-tagging
2021-12-15 10:58:34 -08:00
aefe778b36 Add publisher and imprint handling
Imprint handling has been added to utils and uses a subclassed dict to
return a tuple for imprint matching; this may not be the best idea but
it works for now.

Add settings option auto_imprint
Add cli flag -a, --auto-import
2021-12-15 10:54:16 -08:00
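A hedged sketch of the "subclassed dict returns a tuple for imprint matching" idea from the commit above: a hit maps an imprint to (imprint, publisher) and a miss falls back to ("", original name). All class names and data here are illustrative, not the project's actual tables.

class ImprintDict(dict):
    """Sketch only: known imprints map to (imprint, publisher); misses return ("", name)."""

    def __init__(self, publisher, mapping):
        # Store keys case-folded so lookups are case-insensitive.
        super().__init__({k.casefold(): v for k, v in mapping.items()})
        self.publisher = publisher

    def __getitem__(self, key):
        folded = key.casefold()
        if dict.__contains__(self, folded):
            return (dict.__getitem__(self, folded), self.publisher)
        return ("", key)


dc = ImprintDict("DC Comics", {"Vertigo": "Vertigo"})
print(dc["vertigo"])  # ('Vertigo', 'DC Comics')
print(dc["Marvel"])   # ('', 'Marvel')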
c6e1dc87dc Allow extended selection in the page list editor 2021-12-15 10:53:01 -08:00
ef37158e57 Sort language correctly 2021-12-15 10:52:25 -08:00
444e67100c Merge pull request #207 from jpcranford/patch-1
Fixed typo
2021-12-15 08:49:15 -08:00
82d054fd05 Fixed typo 2021-12-14 16:52:48 -07:00
f82c024f8d Merge pull request #206 from lordwelch/rarOptionalFix
Fix rarfile import as by default it is optional
2021-12-12 18:49:05 -08:00
da4daa6a8a Fix rarfile import as by default it is optional 2021-12-12 18:46:28 -08:00
6e1e8959c9 Merge pull request #204 from lordwelch/buildSystem
Update build
2021-12-12 18:15:58 -08:00
aedc5bedb4 Update build
Separate dependencies into files and add optional dependencies
Update natsort usage to be compliant with the latest version (#203)
Set PyQt5 to 5.15.3, 5.15.4 has issues with pyinstaller
Add pyproject.toml with setuptools, isort and black configuration
Add optional dependencies (#191)
Update README (#174)
2021-10-23 21:39:58 -07:00
93f5061c8f Add GitHub Actions yaml file (#201)
Upload artifacts this allows easy testing of macOS and Windows binaries
Update unrar-cffi for Python 3.9 wheels
2021-09-29 01:17:04 -07:00
d46e171bd6 Merge pull request #199 from lordwelch/seriesSearch
Improve issue identification
2021-09-26 17:09:54 -07:00
e7fe520660 Improve issue identification
Move title sanitizing code to utils module
Update issue identifier to compare sanitized names
2021-09-26 17:06:30 -07:00
91f288e8f4 Update travis
hold windows to 3.7.9 as unrar-cffi only has windows wheels for 3.7
switch to using builtin python for macOS
remove ssl dlls from comictagger.spec
require pyinstaller=4.3 to allow macOS codesigning
Update python usage
restrict builds to tags and pull requests
2021-09-26 12:51:17 -07:00
d7bd3bb94b Merge pull request #198 from lordwelch/143-regression
Fix regression of #143
2021-09-25 23:01:38 -07:00
9e0b0ac01c Fix regression of #143 2021-09-25 22:59:59 -07:00
03a8d906ea Merge pull request #189 from lordwelch/seriesSearch
Series search
2021-09-21 19:59:26 -07:00
fff28cf6ae Improve searchForSeries
Refactor removearticles to only remove articles
Add normalization on the search string and the series name results

Searching now only compares ASCII a-z and 0-9 and all other characters
are replaced with single space, this is done to both the search string
and the result. This fixes an issue with names that are separated by a
hyphen (-) in the filename but in the Comic Vine name are separated by a
slash (/) and other similar issues.
2021-08-29 17:35:34 -07:00
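Roughly what that normalization amounts to; a sketch, not the actual utils code (the function name sanitize_title is assumed).

```python
import re


def sanitize_title(text: str) -> str:
    """Keep only ASCII letters and digits, collapsing everything else
    (hyphens, slashes, punctuation, extra spaces) into single spaces."""
    return re.sub(r"[^a-z0-9]+", " ", text.casefold()).strip()


# 'Free/Collect' and 'Free - Collect' compare equal after sanitizing
print(sanitize_title("Free/Collect") == sanitize_title("Free - Collect"))  # True
```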
9ee95b8d5e Merge pull request #192 from lordwelch/fixes
Fix errors
2021-08-16 17:37:19 -07:00
11bf5a9709 Move to python requests module
Add requests to requirements.txt
Requests is much simpler and fixes all ssl errors.
Comic Vine now requires a unique useragent string
2021-08-11 20:13:53 -07:00
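A hedged sketch of the resulting request style; the user-agent string and query parameters are placeholders, not the project's real values.

```python
import requests

# Comic Vine rejects the default library user agent, so send a distinctive one.
headers = {"User-Agent": "MyComicTool/1.0 (example placeholder)"}
params = {"api_key": "YOUR_API_KEY", "format": "json", "query": "example search"}

resp = requests.get("https://comicvine.gamespot.com/api/search/",
                    params=params, headers=headers, timeout=30)
resp.raise_for_status()
data = resp.json()
```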
af4b3af14e Cleanup metadata handling
Mainly corrects for consistency in most situations
CoMet is not touched as there is no support in the GUI and it has odd requirements on attributes
2021-08-07 21:54:29 -07:00
9bb7fbbc9e Fix errors
Libraries updated and these are no longer needed
2021-08-05 17:21:21 -07:00
beb7c57a6b fix: change accidental overwrite of reserved __dir__ 2019-10-20 00:36:13 +02:00
ce48730bd5 fix: choco install multiple packages breaks with version 2019-10-20 00:25:52 +02:00
806b65db24 freeze windows python version to 3.7.5 2019-10-20 00:20:57 +02:00
cdf9a40227 fix: add setup.py install before testing 2019-10-20 00:08:11 +02:00
0adac47968 add pytest run to travis ci 2019-10-20 00:02:03 +02:00
096a89eab4 add pytest 2019-10-19 23:57:49 +02:00
f877d620af allow for alpha releases in travis 2019-10-06 16:25:31 +02:00
c175e46b15 Increase comicvine search results per request to max (#164) 2019-10-06 07:14:11 -07:00
f0bc669d40 PyPI release (#163) 2019-10-06 07:01:33 -07:00
db3db48e5c Better console handling on Windows (#162) 2019-10-06 05:15:18 -07:00
cec585f8e0 Changed: use unrar-cffi for cbr handling (#151) 2019-10-05 23:59:52 +02:00
d71a48d8d4 Better support for CLI mode on windows (#158) 2019-10-05 23:55:34 +02:00
9e4a560911 Better support for macOS dark mode (#159) 2019-10-05 23:53:56 +02:00
f244255386 update urls to new github comictagger organization 2019-10-05 16:31:12 +02:00
254e2c25ee Brand new README file (#156) 2019-10-05 16:09:04 +02:00
7455cf17c8 fix broken drag & drop on macOS (#142) 2019-09-29 23:02:44 +01:00
d93cb50896 add version info to mac info_plist (#146) 2019-09-29 22:11:42 +01:00
3316cab775 fix travis regex 2019-09-28 17:05:15 +02:00
c01f00f6c3 multi platform build on travis (#145) 2019-09-28 17:01:05 +02:00
06ff25550e use setuptools_scm to handle version 2019-09-28 14:59:36 +02:00
1f7ef44556 remove obsolete download_url (https://git.io/JeZrE) 2019-09-28 14:57:09 +02:00
fabf2b4df6 Merge tag '1.2.0+2' into develop
1.2.0+2
2019-09-25 01:55:29 +02:00
0fbaeb861e Merge branch 'release/1.2.0+2' 2019-09-25 01:55:15 +02:00
3dcc04a318 try to fix appveyor deployment 2019-09-25 01:55:03 +02:00
933e053df3 Merge tag '1.2.0+1' into develop
1.2.0+1
2019-09-25 01:30:32 +02:00
5f22a583e8 Merge branch 'release/1.2.0+1' 2019-09-25 01:30:03 +02:00
3174b49d94 bump version to force appveyor deploy 2019-09-25 01:29:50 +02:00
93ce311359 Release 1.2.0 2019-09-25 00:51:28 +02:00
bc43c5e329 Release 1.2.0 2019-09-25 00:50:50 +02:00
9bf7aa20fb bump version to 1.2.0 2019-09-25 00:49:52 +02:00
5416bb15c3 Appveyor GitHub release (#139) 2019-09-24 23:36:08 +01:00
562a659195 Travis build for macOS build (#100) 2019-09-24 23:30:23 +01:00
1d3d6e2741 bump version 1.1.32-rc1 2019-09-22 12:47:19 +01:00
c9724527b5 Fixed TLS version for the Comic Vine (#135)
* Fixed TLS version for the comicvine

* Fixed TLS version for the Comic Vine - Auto-Identify and Auto-Tag functions
2019-09-22 12:40:59 +01:00
2891209b4e bump version 2019-02-04 20:27:37 +01:00
5b87e19d3e Limit Comic Vine search result queries (#119)
* Tweaked search string based on new comic vine search behavior
Placated Beautiful Soup by passing the parser

* Limit search results fetching after recent Comic Vine changes.
Also, minor debug comment tweaks.
2019-02-04 20:16:44 +01:00
tlc
674e24fc41 Enable Zip64 (#96) 2018-09-20 00:09:24 +02:00
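For reference, enabling Zip64 with the standard library looks like this (a generic sketch, not the project's code):

```python
import zipfile

# allowZip64 lets archives and members exceed the classic 4 GiB / 65535-entry limits.
with zipfile.ZipFile("big_archive.cbz", mode="a", allowZip64=True) as zf:
    zf.writestr("ComicInfo.xml", "<ComicInfo/>")
```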
91f82fd6d3 Python3 and QT5 upgrade (#109)
* Tweaked search string based on new comic vine search behavior
Placated Beautiful Soup by passing the parser

* First cut at porting to Python 3 and PyQt5

* remove debug print

* tweaked progress dialog handling for issues on ubuntu gui

* Handle bad key more gracefully

* More integration of unrarlib into settings and rest of app

* Better handling of "personal" unrar lib setting

* PEP 440-compliant version string

* Tuned linux rar help strings

* Got setup working again
* Attempts to build unrar on install
* Some minimal desktop integration on various platforms

* Fix wrong shortfile

* More setup.py enhancements
* Use proper temp file
* Added comment block at top

* Comment out desktop integration attempt for now

* Updated some links and info

* Fixed the html a bit

* Repaired some images that caused libpng to complain

* update readme re:  py3qt5 branch changes

* another note

* #108 feat: try to simplify windows build using only pip and python3

* #108 feat: fix python location on appveyor (try 1)

* #108 feat: use venv (try 2)

* #108 feat: use venv (try 3)

* #108 feat: update to latest pyinstaller develop branch

* #108 feat: update to latest pyinstaller develop branch (again)

* #108: add ssl libraries for windows packaging

* #108: refresh env in win build to pick the right mingw

* #108: change order of win build script operations

* #113: fix subprocess usage in pyinstaller package

* bump version
2018-09-19 22:05:39 +02:00
cf43513d52 feat: add appveyor configuration 2018-01-17 13:35:10 -08:00
a7288a94cc #98 Multiplatform pyinstaller dist (#99)
Multiplatform pyinstaller dist (#98)
2018-01-14 16:41:27 +01:00
d0918c92e4 #87 Update comic vine url and ssl config (#93)
* #87 fix ssl comicvine communication

* handle missing libunrar. update macos makefile. remove version check window. bump version.

* update release notes

* #87 fix ssl context in several places. update comicvine api url.

* fix drag and drop issues on macOS

* bump version to 1.1.16-beta-rc2

* use PNG conversion for Windows build
2017-12-21 15:19:45 +01:00
4ff2061568 Merge pull request #74 from Alkpone/master
Bugs in move2folder.py script
2015-03-22 10:49:21 +01:00
08c402149b Prevent error when no file has been detected
Script raised an unhandled exception:  local variable 'fmt_str' referenced before assignment
Traceback (most recent call last):
  File "/volume1/@appstore/comictagger/comictaggerlib/options.py", line 233, in launch_script
    script.main()
  File "/volume1/@appstore/comictagger/scripts/move2folder.py", line 90, in main
    print >> sys.stderr, fmt_str.format("")
UnboundLocalError: local variable 'fmt_str' referenced before assignment
2015-03-21 14:32:55 +01:00
184dbf0684 Prevent error when running the script
Script raised an unhandled exception:  coercing to Unicode: need string or buffer, NoneType found
Traceback (most recent call last):
  File "/root/comictagger/comictaggerlib/options.py", line 233, in launch_script
    script.main()
  File "scripts/move2folder.py", line 80, in main
    ca = ComicArchive(filename, settings.rar_exe_path)
  File "/root/comictagger/comicapi/comicarchive.py", line 648, in __init__
    with open(fname, 'rb') as fd:
TypeError: coercing to Unicode: need string or buffer, NoneType found
2015-03-21 14:17:05 +01:00
ed0050ba05 fixed typo 2015-03-06 11:26:47 +01:00
68030a1024 updated to unrar 0.3 2015-03-01 16:14:01 +01:00
983ad1fcf4 Merge branch 'fcanc-master' 2015-03-01 15:44:11 +01:00
d959ac0401 Huge code cleanup
- `autopep8 -aa` for general cleanup;
- Changed order of imports, they should be ordered into 3 groups:
1. standard library imports;
2. 3rd party packages;
3. project imports.
- I commented out various imports that were reported as unused by my IDE.
If everything goes fine we can consider deleting them;
- The Apache license disclaimers are now comments since triple-quotes
should be used only for docstrings;
- Fix - `utils.centerWindowOnParent` did not resolve, changed to
`centerWindowOnParent`
2015-02-22 03:30:32 +01:00
2a550db02a Merge pull request #1 from davide-romanini/master
Merge davide-romanini commits
2015-02-18 20:44:28 +01:00
6369fa5fda updated readme 2015-02-16 16:34:38 +01:00
d5a13a4206 various fixes after merging comicstream-integr 2015-02-16 16:19:38 +01:00
b2532ce03a Merge branch 'comicstream-integr' 2015-02-16 16:18:00 +01:00
79a67d8c29 Merge pull request #71 from branch 'fcanc-master' 2015-02-16 14:51:57 +01:00
d9bd38674c added new dependencies to requirements.txt. With the new unrar, UNRAR_LIB_PATH needs to be set for the app to start 2015-02-16 14:27:13 +01:00
a0154aaaae Merge commit '17f74cf2968a4e0aa01d7309afe7e1407b8abef2' into comicstream-integr 2015-02-16 14:09:21 +01:00
17f74cf296 Squashed 'comicapi/' changes from b7d2458..18f87d3
18f87d3 using comicapi subtree classes

git-subtree-dir: comicapi
git-subtree-split: 18f87d35b1b2cf5e135fad353419eda11209a6be
2015-02-16 14:09:21 +01:00
3f112cd578 Merge commit 'f6439049d8d8b5a4709f1b78afbfd289d00e8c25' as 'comicapi' 2015-02-16 13:27:21 +01:00
f6439049d8 Squashed 'comicapi/' content from commit b7d2458
git-subtree-dir: comicapi
git-subtree-split: b7d2458b80467a47be1d1d58b31ffcac62c2743c
2015-02-16 13:27:21 +01:00
2fe818872c removed splitted comicapi 2015-02-16 13:25:35 +01:00
a419969b85 autopep8 -aa
--aggressive, level 2
2015-02-15 12:55:04 +01:00
ee52448f17 autopep8 -a
--aggressive, level 1
2015-02-15 12:44:09 +01:00
79103990fa autopep8
automatically formats Python code to conform to the PEP 8 style guide —
default usage (whitespace changes only)
2015-02-15 11:44:00 +01:00
22dbafbc00 Code cleanup, round 1
Some formatting cleanup, plus print modernization and typo corrections.
2015-02-14 00:08:07 +01:00
0df283778c Indentation
Replaced tabs with spaces, and removed some trailing spaces.
2015-02-12 23:57:46 +01:00
a6282b5449 Move2folder script
Added a script to organize comics in a folder tree by Publisher/Series
(Volume).
2015-02-12 19:15:17 +01:00
5574280ad6 Filename parser tweaks
Fixes the Scan Info tag being left blank when the filename doesn’t
provide an issue number.
2015-02-12 19:09:33 +01:00
19b907b742 refactor (continue) 2015-02-11 19:45:45 +01:00
a9ff8f37b0 refactor core comicarchive classes in its own package comicapi 2015-02-11 19:45:02 +01:00
0769111f8c #70 added support for the day field on the gui 2015-02-09 21:50:02 +01:00
cf6ae8b5ae aligned with comicstreamer updates
refactor qt specific functions in utils.py in new ui.qtutils module
2015-02-02 17:20:48 +01:00
1d6846ced3 gitignore
changed to README.md for github.
2015-01-23 17:42:22 +01:00
d516d80093 Removed unused FileTableWidget, and explicitly set the column count. This fixes a problem on ArchLinux systems
git-svn-id: http://comictagger.googlecode.com/svn/trunk@744 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-07-06 18:19:50 +00:00
bf9ab71fd9 release notes update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@737 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-06-14 03:56:46 +00:00
33b00ad323 Text tweaks
git-svn-id: http://comictagger.googlecode.com/svn/trunk@736 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-06-14 03:56:32 +00:00
301ff084f1 fixes for webp, api key handling, and CV rate limit
git-svn-id: http://comictagger.googlecode.com/svn/trunk@734 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-06-13 06:26:44 +00:00
0c146bb245 minor fix
git-svn-id: http://comictagger.googlecode.com/svn/trunk@733 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-06-13 06:26:13 +00:00
08cc4a1acb Use pip-installed pyinstaller
git-svn-id: http://comictagger.googlecode.com/svn/trunk@732 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-06-13 06:25:35 +00:00
f97a1653d9 dos-ified release_notes file
git-svn-id: http://comictagger.googlecode.com/svn/trunk@728 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-18 15:44:38 +00:00
d9dbab301a prep for release
git-svn-id: http://comictagger.googlecode.com/svn/trunk@727 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-18 15:42:05 +00:00
3d93197101 Added warning when a rar is attempted to be loaded and the unrar tool isn't known
Fixed a bug where an erroneous message is shown when a file is attempted to be reloaded

git-svn-id: http://comictagger.googlecode.com/svn/trunk@726 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-12 06:08:07 +00:00
752a1d8923 actual version bump
git-svn-id: http://comictagger.googlecode.com/svn/trunk@714 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 04:04:40 +00:00
68002daffa bumped version number
git-svn-id: http://comictagger.googlecode.com/svn/trunk@713 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 04:02:13 +00:00
ad5062c582 Persist some auto-tag options
git-svn-id: http://comictagger.googlecode.com/svn/trunk@712 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 03:21:24 +00:00
2680468f34 New CBL transform to copy story arcs to generic tags
git-svn-id: http://comictagger.googlecode.com/svn/trunk@711 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 02:06:44 +00:00
6156fc296a Added settings option to auto-clear form when importing from CV
added settings option to remove html tables from CV summary

git-svn-id: http://comictagger.googlecode.com/svn/trunk@710 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 01:52:14 +00:00
0feed294d4 Avoid an exception condition
git-svn-id: http://comictagger.googlecode.com/svn/trunk@709 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 01:50:40 +00:00
e57736b955 Decouple comicarchive from settings
Enforce single instance of GUI app

git-svn-id: http://comictagger.googlecode.com/svn/trunk@708 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:13:04 +00:00
70fcdc0129 Decouple comicarchive from settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@707 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:12:05 +00:00
9a64195ebd Decouple comicarchive from settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@706 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:10:18 +00:00
b0f229f851 Decouple comicarchive from settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@705 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:09:03 +00:00
877a5ccd85 Decouple comicarchive from settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@704 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:08:22 +00:00
c0f2e2f771 Decouple comicarchive from settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@703 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:07:39 +00:00
0adfc9beb3 properly decode the user settings path
git-svn-id: http://comictagger.googlecode.com/svn/trunk@702 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-06 19:46:56 +00:00
d0bc41d7ee Allow user to specify the GUI start up tag style on the command line
git-svn-id: http://comictagger.googlecode.com/svn/trunk@701 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-06 19:44:47 +00:00
fa46a065a4 fixed some spelling errors
git-svn-id: http://comictagger.googlecode.com/svn/trunk@700 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-06 19:43:21 +00:00
8fcd5ba7d6 try to parse table HTML in the comment field
git-svn-id: http://comictagger.googlecode.com/svn/trunk@699 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-06 19:42:11 +00:00
759cdc6b40 use the requirements in the setup
git-svn-id: http://comictagger.googlecode.com/svn/trunk@698 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-06 19:40:22 +00:00
1405d9ff0e more process tweaks
git-svn-id: http://comictagger.googlecode.com/svn/trunk@692 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 22:28:50 +00:00
d8fcbbad0a Upload the zip package to pypi index site also
git-svn-id: http://comictagger.googlecode.com/svn/trunk@691 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 22:28:03 +00:00
3eca25db34 changed build checklist
git-svn-id: http://comictagger.googlecode.com/svn/trunk@688 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 21:39:16 +00:00
c8a5a89369 changed download URL to point at google drive site
git-svn-id: http://comictagger.googlecode.com/svn/trunk@687 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 21:38:55 +00:00
ff578ea819 bumped version to 1.1.12
git-svn-id: http://comictagger.googlecode.com/svn/trunk@686 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 21:38:22 +00:00
1c730c25d5 removed auto-upload to google code site
git-svn-id: http://comictagger.googlecode.com/svn/trunk@685 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 21:38:02 +00:00
35b7b39b86 Don't choke when the version string server fails.
git-svn-id: http://comictagger.googlecode.com/svn/trunk@683 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 20:59:35 +00:00
719c711484 Language tweak
git-svn-id: http://comictagger.googlecode.com/svn/trunk@668 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 18:03:08 +00:00
afbbc9d00c git-svn-id: http://comictagger.googlecode.com/svn/trunk@667 6c5673fe-1810-88d6-992b-cd32ca31540c 2014-03-23 17:48:59 +00:00
b8e0a45fc8 bumped version and release notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@665 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 17:31:00 +00:00
b7360dd33e Updated copyright dates
git-svn-id: http://comictagger.googlecode.com/svn/trunk@664 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 17:30:23 +00:00
d9f1956426 handle a crash bug when file starts with --
git-svn-id: http://comictagger.googlecode.com/svn/trunk@663 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 16:56:04 +00:00
b5c7f36410 New pyunrar version to handle rar tools 5.x
git-svn-id: http://comictagger.googlecode.com/svn/trunk@662 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:43:03 +00:00
0b0663d935 Update copyright date
git-svn-id: http://comictagger.googlecode.com/svn/trunk@661 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:42:07 +00:00
eee1f65436 handle corner case of non-numeric issue ending in "."
git-svn-id: http://comictagger.googlecode.com/svn/trunk@660 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:41:38 +00:00
9a8d4149f2 fixed spelling error
git-svn-id: http://comictagger.googlecode.com/svn/trunk@659 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:40:01 +00:00
b02a205668 Make sure all error print outs are unicode
Catch error when zipfile list fails

git-svn-id: http://comictagger.googlecode.com/svn/trunk@658 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:38:36 +00:00
57284dfbed fixed typo in makefile
git-svn-id: http://comictagger.googlecode.com/svn/trunk@657 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:37:19 +00:00
afcbde7fc6 update todo
git-svn-id: http://comictagger.googlecode.com/svn/trunk@651 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:45:15 +00:00
151fac5bf1 updated release notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@650 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:45:06 +00:00
57c1efdab9 makefile TAGGER_BASE can be set in the environment
git-svn-id: http://comictagger.googlecode.com/svn/trunk@649 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:43:22 +00:00
6b272cef87 When searching for a title, convert the string to a list of words separated by "AND"s, and then back to a string
git-svn-id: http://comictagger.googlecode.com/svn/trunk@648 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:40:58 +00:00
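In other words, the query is rebuilt word by word; a tiny illustrative sketch (the helper name and_join is made up):

```python
def and_join(title: str) -> str:
    """Rebuild a search string with AND between words, e.g.
    'amazing spider man' -> 'amazing AND spider AND man'."""
    return " AND ".join(title.split())


print(and_join("amazing spider man"))
```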
1cdc732739 Added a message when not able to open selected folder or file
git-svn-id: http://comictagger.googlecode.com/svn/trunk@647 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:39:47 +00:00
d1b00d162d Allow any size archive to be considered a comic
git-svn-id: http://comictagger.googlecode.com/svn/trunk@646 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:37:13 +00:00
3dd3980bc1 update todo file
git-svn-id: http://comictagger.googlecode.com/svn/trunk@645 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-08-18 18:01:01 +00:00
cbf475eb26 removed filtering out of period (".")
git-svn-id: http://comictagger.googlecode.com/svn/trunk@644 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-08-18 18:00:04 +00:00
ac8b575659 bumped version number
git-svn-id: http://comictagger.googlecode.com/svn/trunk@643 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-08-18 17:56:38 +00:00
ac8ef286a4 Perform the rar test first, since some rars can be falsely identified as zips, somehow...
git-svn-id: http://comictagger.googlecode.com/svn/trunk@641 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-07-23 17:06:35 +00:00
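One way to express that ordering is to sniff the RAR magic bytes before trusting the zip check; a sketch under that assumption (sniff_archive is a hypothetical helper):

```python
import zipfile

RAR_SIGNATURE = b"Rar!\x1a\x07"  # shared prefix of the RAR4 and RAR5 magic bytes


def sniff_archive(path: str) -> str:
    # Check for RAR first: some RARs also pass zipfile.is_zipfile().
    with open(path, "rb") as fh:
        if fh.read(len(RAR_SIGNATURE)) == RAR_SIGNATURE:
            return "rar"
    if zipfile.is_zipfile(path):
        return "zip"
    return "unknown"
```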
f567dc37be Handle case of None value credit tags in XML
git-svn-id: http://comictagger.googlecode.com/svn/trunk@640 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-07-08 23:32:24 +00:00
15c5fc5258 release notes update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@637 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-09 01:31:26 +00:00
cc985b52a5 Do the limited series check/elimination after cover matching
git-svn-id: http://comictagger.googlecode.com/svn/trunk@636 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-08 02:39:06 +00:00
910b0386be Remove tooltip if not expandable
git-svn-id: http://comictagger.googlecode.com/svn/trunk@635 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-08 02:38:06 +00:00
0fece23405 Allow rename w/smart cleanup to have "--"
git-svn-id: http://comictagger.googlecode.com/svn/trunk@634 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 22:30:32 +00:00
eee320e0c7 bumped version number
git-svn-id: http://comictagger.googlecode.com/svn/trunk@632 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 21:07:00 +00:00
accabf8e21 Added keyboard shortcut for form clear
git-svn-id: http://comictagger.googlecode.com/svn/trunk@631 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 21:06:48 +00:00
acc253d35c todo update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@630 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 18:36:30 +00:00
ede0154efe issueCount now gets passed to issueidentifier.
a possible technique for eliminating potential volumes is coded, but commented out for now

git-svn-id: http://comictagger.googlecode.com/svn/trunk@629 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 18:24:57 +00:00
5b805b1428 auto-tag progress window now uses coverimagewidget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@628 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 18:22:14 +00:00
2e6b2a89db Added a raw image data mode for the coverimagewidget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@627 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 18:21:34 +00:00
c028bb4ddc Make sure to catch all non-numeric characters after a # for the issue number
git-svn-id: http://comictagger.googlecode.com/svn/trunk@626 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-04 01:48:42 +00:00
b70beb5684 more file name parser enhancements
git-svn-id: http://comictagger.googlecode.com/svn/trunk@625 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-04 01:22:39 +00:00
128af4521b better filename parsing
git-svn-id: http://comictagger.googlecode.com/svn/trunk@623 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-02 16:31:50 +00:00
43cf7a80c8 remove print
git-svn-id: http://comictagger.googlecode.com/svn/trunk@622 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-01 22:33:05 +00:00
3223ed190c Make sure form is updated when removing top item from list
git-svn-id: http://comictagger.googlecode.com/svn/trunk@621 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-01 22:32:20 +00:00
9e2817c037 deal with CV bug (wrong result set count) when not specifying page=1
git-svn-id: http://comictagger.googlecode.com/svn/trunk@620 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-01 22:31:25 +00:00
6e7bd10fb9 deal with pagination bug on comicvine side reporting wrong result set size when not specifying page=1
git-svn-id: http://comictagger.googlecode.com/svn/trunk@619 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-01 22:30:30 +00:00
c099205779 Reworked the issue string parsing
git-svn-id: http://comictagger.googlecode.com/svn/trunk@618 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-30 18:05:10 +00:00
47d8da0e80 removed extra line
git-svn-id: http://comictagger.googlecode.com/svn/trunk@615 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-22 02:37:13 +00:00
0f7e88e58c bump to 1.1.8-beta
git-svn-id: http://comictagger.googlecode.com/svn/trunk@614 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-22 00:49:20 +00:00
65902a15b1 add-on script for renaming files based on transform list
git-svn-id: http://comictagger.googlecode.com/svn/trunk@613 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-21 06:55:32 +00:00
a68b2babeb some reworking so scripts get passed all options after scriptname
git-svn-id: http://comictagger.googlecode.com/svn/trunk@612 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-21 06:53:44 +00:00
4098802e43 sleep 1 sec before retrying after http 500 error
git-svn-id: http://comictagger.googlecode.com/svn/trunk@611 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-21 06:51:43 +00:00
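A generic sketch of the retry-with-sleep behaviour described, written for modern Python rather than the original 2013 code:

```python
import time
import urllib.error
import urllib.request


def fetch_with_retry(url: str, retries: int = 3) -> bytes:
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as exc:
            if exc.code != 500 or attempt == retries - 1:
                raise
            time.sleep(1)  # give the server a second before retrying
```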
9c14258e9f verify need to check version in GUI
git-svn-id: http://comictagger.googlecode.com/svn/trunk@610 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-17 18:10:08 +00:00
33bdbe8be8 verify need to check version in CLI
git-svn-id: http://comictagger.googlecode.com/svn/trunk@609 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-17 18:09:43 +00:00
a76864c109 be a little smarter in colon replacement in renaming
git-svn-id: http://comictagger.googlecode.com/svn/trunk@608 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-17 18:09:10 +00:00
cb68d07751 Added special handling of HTTP 500 error that Comic Vine seems to give occasionally.
git-svn-id: http://comictagger.googlecode.com/svn/trunk@607 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-17 18:08:39 +00:00
8e9fccdbbc removed line feed from prints
git-svn-id: http://comictagger.googlecode.com/svn/trunk@600 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-13 05:30:53 +00:00
39990fc2b4 Updated todo and release notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@599 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 17:56:15 +00:00
e8c315d834 parse scan info by default
git-svn-id: http://comictagger.googlecode.com/svn/trunk@598 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 17:55:38 +00:00
f8a06a8746 Make sure there is a default image URL if none exists
git-svn-id: http://comictagger.googlecode.com/svn/trunk@597 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 17:53:35 +00:00
9415087da7 removed debug print
git-svn-id: http://comictagger.googlecode.com/svn/trunk@596 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 17:52:43 +00:00
9aee5c32eb Made the description font a little smaller
git-svn-id: http://comictagger.googlecode.com/svn/trunk@595 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 17:52:23 +00:00
fcdb4a3889 cli option to assume issue number 1 if not found/parsed
git-svn-id: http://comictagger.googlecode.com/svn/trunk@594 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 06:11:25 +00:00
534a326258 Remember filelist sorting
git-svn-id: http://comictagger.googlecode.com/svn/trunk@593 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 06:10:50 +00:00
0390ff5919 Added option to parse scan info from filename
git-svn-id: http://comictagger.googlecode.com/svn/trunk@592 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 04:49:08 +00:00
b800ae1751 Added issue description to the match and issue selection dialogs
git-svn-id: http://comictagger.googlecode.com/svn/trunk@591 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 01:56:24 +00:00
a2c17982d3 Fixed the resizing with the splitter
git-svn-id: http://comictagger.googlecode.com/svn/trunk@590 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 01:55:59 +00:00
0347befae6 bumped version number
git-svn-id: http://comictagger.googlecode.com/svn/trunk@589 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 01:54:59 +00:00
af54b79790 Added cover date to issue selection dialog
git-svn-id: http://comictagger.googlecode.com/svn/trunk@588 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-11 01:57:19 +00:00
dd04ae98a0 Remove optimization for eliminating one-shots from consideration (not needed with new CV search method)
git-svn-id: http://comictagger.googlecode.com/svn/trunk@587 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-11 01:32:07 +00:00
31b76fba92 Make sure output data is set in the case of pages that don't need to be resized
git-svn-id: http://comictagger.googlecode.com/svn/trunk@586 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-10 20:46:59 +00:00
9f4a4b0eb0 More version checking stuff
git-svn-id: http://comictagger.googlecode.com/svn/trunk@585 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-06 19:31:00 +00:00
575a23c6bf More version checking stuff
git-svn-id: http://comictagger.googlecode.com/svn/trunk@584 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-06 19:30:01 +00:00
5d84f09359 Check online for new version
Use non-deprecated "read_file" for configparser

git-svn-id: http://comictagger.googlecode.com/svn/trunk@583 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-05 19:48:49 +00:00
3072583482 Normalize issue number for search
git-svn-id: http://comictagger.googlecode.com/svn/trunk@582 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-05 19:43:45 +00:00
8d867cf78a This file will be checked by the app to see if it should update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@581 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-04 19:42:10 +00:00
36c79b5a2a Twitter and facebook buttons
git-svn-id: http://comictagger.googlecode.com/svn/trunk@580 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-04 19:18:55 +00:00
dfdaf731b4 updated release notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@576 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-03 17:37:23 +00:00
67bff8586c Make sure start_year test is with all ints
git-svn-id: http://comictagger.googlecode.com/svn/trunk@575 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-03 00:34:55 +00:00
9e4cbea6e4 Made sure some prints are unicode
git-svn-id: http://comictagger.googlecode.com/svn/trunk@574 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-03 00:33:03 +00:00
d150b2ce54 made Auto-ID use the info already fetched from the 'issues' query for the image and page URLs (rather than use the cache or fetch again)
git-svn-id: http://comictagger.googlecode.com/svn/trunk@573 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 22:37:28 +00:00
a20949cc4d got rid of debug print
git-svn-id: http://comictagger.googlecode.com/svn/trunk@572 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 22:33:13 +00:00
e3fceb20a2 merged all the cover_date parsing into one function in CV talker
git-svn-id: http://comictagger.googlecode.com/svn/trunk@571 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 20:47:18 +00:00
f4e00d9ef3 bumped version
git-svn-id: http://comictagger.googlecode.com/svn/trunk@570 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 19:59:35 +00:00
1980bd5988 Added search across issues by volume id, issue number, and date for much faster matching
git-svn-id: http://comictagger.googlecode.com/svn/trunk@569 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 19:58:23 +00:00
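Presumably this uses the Comic Vine issues endpoint with a filter on volume, issue number, and date; the exact filter syntax below is an assumption for illustration only.

```python
import requests

# Assumed filter format: restrict issues by volume id and issue number in one request.
params = {
    "api_key": "YOUR_API_KEY",
    "format": "json",
    "filter": "volume:1234,issue_number:7",  # hypothetical values
}
headers = {"User-Agent": "MyComicTool/1.0 (example placeholder)"}

resp = requests.get("https://comicvine.gamespot.com/api/issues/",
                    params=params, headers=headers, timeout=30)
resp.raise_for_status()
issues = resp.json().get("results", [])
```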
db54affc74 Handle None cover_date
git-svn-id: http://comictagger.googlecode.com/svn/trunk@568 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 19:57:50 +00:00
0edb9444ef Nice twitter button for code page
git-svn-id: http://comictagger.googlecode.com/svn/trunk@567 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 16:42:49 +00:00
b22c25f53f Remove parsing of title. We're back to how it was before, except now we get 'none' instead of empty string.
git-svn-id: http://comictagger.googlecode.com/svn/trunk@566 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 14:11:00 +00:00
76e6666a79 Tweaks for dealing with unicode issue "number"
Updated release_notes


git-svn-id: http://comictagger.googlecode.com/svn/trunk@563 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-30 16:31:56 +00:00
a804a10e0e use unicode in case of weird things like "1/2" symbol
git-svn-id: http://comictagger.googlecode.com/svn/trunk@562 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-30 06:26:41 +00:00
fe413b12c1 Use issues filtered query to get issue list instead of deprecated volume.issues
git-svn-id: http://comictagger.googlecode.com/svn/trunk@561 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-30 06:25:04 +00:00
e38dc2f063 CV API changes: use cover_date instead of publish_month/year for issues, roles are now a list
bumped version

git-svn-id: http://comictagger.googlecode.com/svn/trunk@560 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-29 23:09:41 +00:00
5e5418090b Added resource types for comicvine requests
git-svn-id: http://comictagger.googlecode.com/svn/trunk@557 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-28 19:04:30 +00:00
56c1f8582a todo update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@554 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 19:35:41 +00:00
00f8c0a280 removed typo
git-svn-id: http://comictagger.googlecode.com/svn/trunk@553 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 19:25:42 +00:00
1d915eb155 make sure issue number comparisons are case-normalized in case of alpha appendage
git-svn-id: http://comictagger.googlecode.com/svn/trunk@552 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 19:21:20 +00:00
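That is, alpha appendages like "15AU" and "15au" should compare equal; a one-line sketch (issue_numbers_match is a hypothetical helper):

```python
def issue_numbers_match(a: str, b: str) -> bool:
    # Normalize case so alpha appendages like "15AU" vs "15au" still match.
    return a.strip().casefold() == b.strip().casefold()


print(issue_numbers_match("15AU", "15au"))  # True
```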
b7b8060ef2 Fixed filename parsing to find "AU" issues
git-svn-id: http://comictagger.googlecode.com/svn/trunk@551 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 19:20:10 +00:00
2d190b076a Bumped version and notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@550 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 18:17:20 +00:00
cd92b1afea cleanup
git-svn-id: http://comictagger.googlecode.com/svn/trunk@549 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 17:58:05 +00:00
4d21a001d6 Fix the way sorting is done by issues
git-svn-id: http://comictagger.googlecode.com/svn/trunk@548 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 17:57:05 +00:00
4af59d2315 Handle changes to the ComicVine API and result sets
git-svn-id: http://comictagger.googlecode.com/svn/trunk@547 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 17:56:30 +00:00
c9c98b6c11 Handle if volume description is None
git-svn-id: http://comictagger.googlecode.com/svn/trunk@546 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 17:55:02 +00:00
1ff43db2ce Add-on for reducing page sizes in comics
git-svn-id: http://comictagger.googlecode.com/svn/trunk@545 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-24 17:45:10 +00:00
822f6b4729 0.1 issue gets special consideration as a "first" issue.
git-svn-id: http://comictagger.googlecode.com/svn/trunk@544 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-11 23:19:50 +00:00
44a8dc6815 Fixed flawed RE assumption when parsing issue number with # in front. Now properly handle issues with decimal point
git-svn-id: http://comictagger.googlecode.com/svn/trunk@543 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-11 23:18:07 +00:00
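A hedged sketch of a pattern that tolerates a leading "#" and a decimal point; not the parser's actual regular expression.

```python
import re

# Accept an optional '#', digits, an optional decimal part, and an optional
# alpha appendage (e.g. '#1', '2.1', '#15AU').
ISSUE_RE = re.compile(r"#?(\d+(?:\.\d+)?[a-zA-Z]*)")

for text in ["#1", "2.1", "#15AU"]:
    match = ISSUE_RE.search(text)
    print(match.group(1))
```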
a35576895c Removed warning about writing CBI to RAR since CBL supports it now. Yay!
git-svn-id: http://comictagger.googlecode.com/svn/trunk@542 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-11 23:16:22 +00:00
631662b30c added configparser to requirements notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@534 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-25 07:10:28 +00:00
cbe3f5a2dc some commented out lines for building on 64-bit snow leopard
git-svn-id: http://comictagger.googlecode.com/svn/trunk@533 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-25 07:01:42 +00:00
73f8bd426b added experimental function to look for scanner page
tweaked image sorting to push some files that begin with "-" to end

git-svn-id: http://comictagger.googlecode.com/svn/trunk@532 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-25 07:00:22 +00:00
0642604480 cropcover now creates a PNG instead of JPEG in case of palettized image
git-svn-id: http://comictagger.googlecode.com/svn/trunk@531 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-25 06:55:44 +00:00
1d95f5076e bumped version, release notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@530 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-25 06:54:52 +00:00
53b0c2e8f9 Use backported "configparser" module instead of stock "ConfigParser" to better handle unicode
git-svn-id: http://comictagger.googlecode.com/svn/trunk@529 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-24 19:08:14 +00:00
f59f5fe981 use env command to launch python
git-svn-id: http://comictagger.googlecode.com/svn/trunk@528 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-24 19:04:43 +00:00
67545d8a13 Fixed issue where month_name creation was failing by not decoding date string from system
git-svn-id: http://comictagger.googlecode.com/svn/trunk@527 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-24 18:56:29 +00:00
ab3e3b40c4 Added a script to thin out fat binaries
git-svn-id: http://comictagger.googlecode.com/svn/trunk@526 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-21 22:40:00 +00:00
188024c2db Make sure the mac bundle Info.plist has the version string set
git-svn-id: http://comictagger.googlecode.com/svn/trunk@524 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-19 22:15:24 +00:00
324b56a623 text cleanup
git-svn-id: http://comictagger.googlecode.com/svn/trunk@523 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-19 18:03:40 +00:00
782d424392 added support for retaining the new CR "day" field in the metadata
git-svn-id: http://comictagger.googlecode.com/svn/trunk@522 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-19 01:50:53 +00:00
cf63bfda9d added pypi project update to the upload target
git-svn-id: http://comictagger.googlecode.com/svn/trunk@521 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-19 01:23:09 +00:00
903d4c647c rearranged XML output to more closely match the order of ComicRack output
git-svn-id: http://comictagger.googlecode.com/svn/trunk@520 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-19 01:22:37 +00:00
407b83fe90 fixed date for release notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@515 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-15 02:37:04 +00:00
27edc80d2b a few makefile tweaks
git-svn-id: http://comictagger.googlecode.com/svn/trunk@514 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-15 01:05:35 +00:00
01f48f8b91 release notes update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@513 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-15 01:02:52 +00:00
527e690170 removed cruft from script printouts
git-svn-id: http://comictagger.googlecode.com/svn/trunk@512 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-15 01:02:35 +00:00
d100572aa4 add import of script into the try-catch
git-svn-id: http://comictagger.googlecode.com/svn/trunk@511 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-15 01:00:59 +00:00
42640c4ad5 reduced size of filename font in infobox
git-svn-id: http://comictagger.googlecode.com/svn/trunk@510 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-15 00:25:37 +00:00
a61972e503 wrapped a try-catch around script execution
git-svn-id: http://comictagger.googlecode.com/svn/trunk@509 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-15 00:25:06 +00:00
464e147223 back to "ctmain"
git-svn-id: http://comictagger.googlecode.com/svn/trunk@508 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-15 00:23:54 +00:00
8759784561 Script fixes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@507 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 21:07:36 +00:00
ee5b4a689e updated readme
git-svn-id: http://comictagger.googlecode.com/svn/trunk@506 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 20:15:18 +00:00
71ccf1eea8 Script updates
git-svn-id: http://comictagger.googlecode.com/svn/trunk@505 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 20:06:42 +00:00
a9ee7c463b Added script folder to manifest
git-svn-id: http://comictagger.googlecode.com/svn/trunk@504 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 19:09:35 +00:00
6f683a71c7 added readme for script folder
git-svn-id: http://comictagger.googlecode.com/svn/trunk@503 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 19:09:13 +00:00
24b192b22c Tweaked the UI box
git-svn-id: http://comictagger.googlecode.com/svn/trunk@502 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 18:42:20 +00:00
b6b1a4737f got rid of debug print
git-svn-id: http://comictagger.googlecode.com/svn/trunk@501 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 18:41:33 +00:00
00202cc865 more scripts
git-svn-id: http://comictagger.googlecode.com/svn/trunk@499 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 06:36:28 +00:00
235524b06d added script options to help
git-svn-id: http://comictagger.googlecode.com/svn/trunk@498 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 06:30:53 +00:00
8a7f822970 restored accidentally lost dirty flag signals
git-svn-id: http://comictagger.googlecode.com/svn/trunk@495 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 05:38:25 +00:00
ff3f048bb4 Filename parsing preserves dashes in series name
git-svn-id: http://comictagger.googlecode.com/svn/trunk@494 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-14 05:37:00 +00:00
abda202f32 removed local setting, and now import tag style from comicarchive
git-svn-id: http://comictagger.googlecode.com/svn/trunk@493 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-13 21:55:56 +00:00
2d4ac84de0 app main entry point is now called "main"
git-svn-id: http://comictagger.googlecode.com/svn/trunk@492 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-13 21:55:00 +00:00
86732e7827 Moved tag style class to comicarchive module
git-svn-id: http://comictagger.googlecode.com/svn/trunk@491 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-13 21:54:07 +00:00
693b5b1978 Moved tagstyle class to comicarchive module
git-svn-id: http://comictagger.googlecode.com/svn/trunk@490 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-13 21:53:15 +00:00
e3d3ecfd31 fix basedir setting for frozen windows
git-svn-id: http://comictagger.googlecode.com/svn/trunk@489 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-13 21:52:02 +00:00
ce6b81ab73 Reworked the encoding in the recursive filelist creation
Changed the "fix encoding" fuction to only load once, and set the local from env

git-svn-id: http://comictagger.googlecode.com/svn/trunk@488 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-13 21:50:47 +00:00
501365b5a3 Removed mention of deprecated folder archives
git-svn-id: http://comictagger.googlecode.com/svn/trunk@487 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-12 19:22:21 +00:00
c6741d4392 Renamed test script to inventory
git-svn-id: http://comictagger.googlecode.com/svn/trunk@486 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-12 18:36:49 +00:00
42feae53dd caught some small errors exposed by calling via script
git-svn-id: http://comictagger.googlecode.com/svn/trunk@485 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-12 18:29:29 +00:00
c65695b8dc updated test script
git-svn-id: http://comictagger.googlecode.com/svn/trunk@484 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-12 04:53:36 +00:00
4da71e262b bumped version
git-svn-id: http://comictagger.googlecode.com/svn/trunk@483 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-12 04:53:02 +00:00
c519fd33d5 fixed a bug in windows filename arg list processing
git-svn-id: http://comictagger.googlecode.com/svn/trunk@482 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-12 04:52:51 +00:00
07ef0211b9 windows mkdir version doesn't do -p
git-svn-id: http://comictagger.googlecode.com/svn/trunk@481 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-12 04:52:04 +00:00
c45b56a5b6 moved some code into utils:
recursive file list
	output encoding config

git-svn-id: http://comictagger.googlecode.com/svn/trunk@479 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-11 17:29:55 +00:00
6f27fc7669 added recursive flag for CLI
git-svn-id: http://comictagger.googlecode.com/svn/trunk@478 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-11 17:29:08 +00:00
4530ac017c disable support for "folder" archives
git-svn-id: http://comictagger.googlecode.com/svn/trunk@477 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-11 17:28:18 +00:00
400fe6efa3 text cleanup
git-svn-id: http://comictagger.googlecode.com/svn/trunk@476 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-11 17:27:54 +00:00
ac7a12d18d Experimental script using lib
git-svn-id: http://comictagger.googlecode.com/svn/trunk@475 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-09 00:15:42 +00:00
c2ff11fab7 make sure the release folder always exists during build process
git-svn-id: http://comictagger.googlecode.com/svn/trunk@474 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-08 22:14:18 +00:00
34019ff338 clean out generated release.nsh file
git-svn-id: http://comictagger.googlecode.com/svn/trunk@473 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-08 22:07:38 +00:00
176bc43888 rar_exe_path is maintained only in the settings now.
git-svn-id: http://comictagger.googlecode.com/svn/trunk@472 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-08 22:05:28 +00:00
2e290c4c74 Used RE to make sure duplicate doesn't get added to path
git-svn-id: http://comictagger.googlecode.com/svn/trunk@471 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-08 22:04:56 +00:00
74a374d46b typo and text fixes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@470 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-08 22:01:58 +00:00
58f5f10c78 abandoned debian build for now.
cleaned up the windows makefile a bit

git-svn-id: http://comictagger.googlecode.com/svn/trunk@469 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-08 05:38:14 +00:00
7d8ed954a9 Changed text regarding PIL requirement
make now uses the python dist package to make the source zip

git-svn-id: http://comictagger.googlecode.com/svn/trunk@468 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-08 01:14:04 +00:00
078b3cef3c more python packaging tweaks
git-svn-id: http://comictagger.googlecode.com/svn/trunk@464 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 23:09:00 +00:00
22ef0250ca python packaging tweaks
git-svn-id: http://comictagger.googlecode.com/svn/trunk@463 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 22:38:27 +00:00
cc53162dcc Added a readme.txt for the source distribution
git-svn-id: http://comictagger.googlecode.com/svn/trunk@462 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 21:34:00 +00:00
fa309cfcef Got mac build working with new structure
git-svn-id: http://comictagger.googlecode.com/svn/trunk@461 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 17:35:25 +00:00
4d57b0cf79 Got deb built using fpm!
git-svn-id: http://comictagger.googlecode.com/svn/trunk@460 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 07:04:23 +00:00
6ea5d28609 More distutil fun
git-svn-id: http://comictagger.googlecode.com/svn/trunk@459 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 05:21:07 +00:00
9d56a2ce9a Got frozen windows build working again
git-svn-id: http://comictagger.googlecode.com/svn/trunk@458 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 04:50:10 +00:00
811759478a Functional dist install on linux
git-svn-id: http://comictagger.googlecode.com/svn/trunk@457 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 04:19:20 +00:00
28e2d93314 Name conflict with launcher script
git-svn-id: http://comictagger.googlecode.com/svn/trunk@456 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 04:00:20 +00:00
93b3117699 More cleanup
git-svn-id: http://comictagger.googlecode.com/svn/trunk@455 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 02:40:29 +00:00
10e6a1019e First cut at a dist-package build
git-svn-id: http://comictagger.googlecode.com/svn/trunk@454 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 02:39:36 +00:00
2024555780 restructure - done, I think
git-svn-id: http://comictagger.googlecode.com/svn/trunk@453 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 01:20:05 +00:00
e15c3fa3e6 restructure - almost there!
git-svn-id: http://comictagger.googlecode.com/svn/trunk@452 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 01:12:49 +00:00
8aa6403f51 Restructure
git-svn-id: http://comictagger.googlecode.com/svn/trunk@451 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-07 01:01:39 +00:00
fb5fca1dc4 restructure
git-svn-id: http://comictagger.googlecode.com/svn/trunk@450 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 22:29:52 +00:00
75d5b1a695 Restructure
git-svn-id: http://comictagger.googlecode.com/svn/trunk@449 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 22:17:38 +00:00
e56d9bddbf restructure
git-svn-id: http://comictagger.googlecode.com/svn/trunk@448 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 22:10:48 +00:00
7d9aa70dc0 restructure
git-svn-id: http://comictagger.googlecode.com/svn/trunk@447 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 22:05:46 +00:00
6d72ed2a69 restructure
git-svn-id: http://comictagger.googlecode.com/svn/trunk@446 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 22:05:24 +00:00
9b584f78a0 restructure
git-svn-id: http://comictagger.googlecode.com/svn/trunk@445 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 22:04:40 +00:00
dfe0e74f9c Refactored code for restructure
git-svn-id: http://comictagger.googlecode.com/svn/trunk@444 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 22:03:53 +00:00
a11c08a2ee Restructure
git-svn-id: http://comictagger.googlecode.com/svn/trunk@443 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 21:59:43 +00:00
9159204883 Added missing file header
git-svn-id: http://comictagger.googlecode.com/svn/trunk@442 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 20:42:57 +00:00
605e27ce99 Deleted cruft file
git-svn-id: http://comictagger.googlecode.com/svn/trunk@441 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 20:35:24 +00:00
2dc08b36ea Added use of google upload tool to makefile
git-svn-id: http://comictagger.googlecode.com/svn/trunk@440 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 19:13:07 +00:00
60dae4f1fb Keep the google project file upload utility in a handy place
git-svn-id: http://comictagger.googlecode.com/svn/trunk@439 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 19:08:32 +00:00
85728d33bb New filename template variables
git-svn-id: http://comictagger.googlecode.com/svn/trunk@435 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 05:59:52 +00:00
2ade08aa89 Bumped version
git-svn-id: http://comictagger.googlecode.com/svn/trunk@434 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 05:59:08 +00:00
50909962d3 Implemented export to zip on command line
git-svn-id: http://comictagger.googlecode.com/svn/trunk@430 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-06 00:14:37 +00:00
cc02023730 Fixed an issue in rar directory reading when the first char in the path is a space.
git-svn-id: http://comictagger.googlecode.com/svn/trunk@429 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 22:48:12 +00:00
5bdc40b9f5 Make sure to change codec for stderr too
git-svn-id: http://comictagger.googlecode.com/svn/trunk@428 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 22:47:45 +00:00
4f3e63db07 Make a lot of print statements go to stderr
git-svn-id: http://comictagger.googlecode.com/svn/trunk@427 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 22:27:35 +00:00
b8893b853f Release notes update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@426 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 06:42:24 +00:00
6da6f38673 Text tweak
git-svn-id: http://comictagger.googlecode.com/svn/trunk@425 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 06:37:21 +00:00
369dcbb5a1 Tweaked the pagebrowser layout
Added arrow icons for some buttons

git-svn-id: http://comictagger.googlecode.com/svn/trunk@424 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 06:37:04 +00:00
ec010f29e8 Center progress dialogs on update to keep from drifting due to growth
git-svn-id: http://comictagger.googlecode.com/svn/trunk@423 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 05:14:26 +00:00
22867bc9e6 Added popup screen image
git-svn-id: http://comictagger.googlecode.com/svn/trunk@422 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 04:50:26 +00:00
dde1913e07 Font tweaks
git-svn-id: http://comictagger.googlecode.com/svn/trunk@421 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 04:49:44 +00:00
5b5842a5f8 tweaked the dialogs' window flags to enable maximize on some, and remove the help button from others
git-svn-id: http://comictagger.googlecode.com/svn/trunk@420 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 03:51:50 +00:00
fbf086886f Made selection list font a little smaller
git-svn-id: http://comictagger.googlecode.com/svn/trunk@419 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 03:51:05 +00:00
c1ff6c4b26 Fixed form resizing bug
git-svn-id: http://comictagger.googlecode.com/svn/trunk@418 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 03:50:03 +00:00
99b110d052 PageListEditor now uses CoverImageWidget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@417 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-05 00:00:18 +00:00
3df498eed4 Page list editor displays 1-based list
git-svn-id: http://comictagger.googlecode.com/svn/trunk@416 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 22:20:04 +00:00
b5ab2a6ac9 Updated page browser to use coverimagewidget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@415 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 22:19:31 +00:00
5c91960f04 Added option to not show controls in widget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@414 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 22:19:08 +00:00
3b52fd3213 Main window now uses the CoverImageWidget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@413 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 21:05:31 +00:00
9366457b88 MatchSelectionWindow now uses CoverImageWidget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@412 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 19:54:04 +00:00
1cb7ef66db Added tool tip about double-clicking
git-svn-id: http://comictagger.googlecode.com/svn/trunk@411 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 19:53:32 +00:00
ee6a05deae UI tweaks
git-svn-id: http://comictagger.googlecode.com/svn/trunk@410 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 19:53:03 +00:00
c978883584 Volume selection widget now uses CoverImageWidget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@409 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 19:11:57 +00:00
9b5508ecba Added URL (single image) mode.
Tweaked resize logic

git-svn-id: http://comictagger.googlecode.com/svn/trunk@408 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 19:11:28 +00:00
8e1c6fae7c Mac OS X acts weird with modality settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@407 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 19:10:53 +00:00
59e662f5a7 Fixed window modality of issue selection window
git-svn-id: http://comictagger.googlecode.com/svn/trunk@406 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 17:25:39 +00:00
6486d97ee3 Added modal image quick popup
git-svn-id: http://comictagger.googlecode.com/svn/trunk@405 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 17:24:48 +00:00
8c088440c5 updated coverimagewidget to manage background loading of alt cover URLs
git-svn-id: http://comictagger.googlecode.com/svn/trunk@404 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 05:15:23 +00:00
320ee1c5d1 Updated todo
git-svn-id: http://comictagger.googlecode.com/svn/trunk@403 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 05:14:29 +00:00
e123720354 Issue selection dialog now uses the coverimagewidget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@402 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 05:13:23 +00:00
d39d4e79ad Added async version of the alt cover URL fetcher
git-svn-id: http://comictagger.googlecode.com/svn/trunk@401 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 05:09:48 +00:00
8d7eeece30 No need to pre-fetch now, since the cover widget manages this itself
git-svn-id: http://comictagger.googlecode.com/svn/trunk@400 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-04 05:08:22 +00:00
3b64e1a3ec Added a new default publisher blacklist item
git-svn-id: http://comictagger.googlecode.com/svn/trunk@399 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-03 18:16:16 +00:00
81ae9bd635 Change the post auto-tag dialog to also show low-confidence single matches
git-svn-id: http://comictagger.googlecode.com/svn/trunk@398 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-03 18:15:48 +00:00
27846772e9 Reworked the post auto-tag selection dialog:
  New display image widgets
  Sorting
  Added issue title

git-svn-id: http://comictagger.googlecode.com/svn/trunk@397 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-03 18:14:16 +00:00
baf697b919 New widget for managing the loading and displaying of archive pages and covers from Comic Vine
git-svn-id: http://comictagger.googlecode.com/svn/trunk@396 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-03 18:13:18 +00:00
59ede8d446 Clean up the strings from the alt cover URL list
git-svn-id: http://comictagger.googlecode.com/svn/trunk@395 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-03 18:12:10 +00:00
8b748a3343 Made the alt cover threshold more stringent
git-svn-id: http://comictagger.googlecode.com/svn/trunk@394 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-03 18:10:59 +00:00
75471aaddc Added caching of the alt cover image URL list
git-svn-id: http://comictagger.googlecode.com/svn/trunk@393 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-02 18:41:06 +00:00
7225f261f1 Tuned the cover score thresholds a bit
Fixed a "one-shot" bug where sometimes there is a zero issue but not a "1"

git-svn-id: http://comictagger.googlecode.com/svn/trunk@392 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-02 18:40:40 +00:00
c466264d43 UI tweaks for auto tag match window
git-svn-id: http://comictagger.googlecode.com/svn/trunk@391 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-02 18:39:39 +00:00
14e801b717 Added support for alternate covers from comicvine
git-svn-id: http://comictagger.googlecode.com/svn/trunk@390 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-02 06:03:58 +00:00
af4b467814 Added support for gif in archive
git-svn-id: http://comictagger.googlecode.com/svn/trunk@389 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-02-02 06:02:25 +00:00
255 changed files with 51571 additions and 14421 deletions

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@ -0,0 +1,37 @@
---
name: Bug report
about: Report a bug
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Attach logs**
`%LOCALAPPDATA%\ComicTagger\logs` on Windows
`~/Library/Logs/ComicTagger` on macOS
`~/.cache/ComicTagger/log` on Linux
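If you want to collect the logs with a script, here is a minimal sketch using only the Python standard library. The helper name is illustrative and not part of ComicTagger; the per-platform paths are the ones listed above:
```py
import os
import sys
from pathlib import Path


def comictagger_log_dir() -> Path:
    """Return the documented ComicTagger log directory for the current platform."""
    if sys.platform == "win32":
        # %LOCALAPPDATA%\ComicTagger\logs
        return Path(os.environ["LOCALAPPDATA"]) / "ComicTagger" / "logs"
    if sys.platform == "darwin":
        # ~/Library/Logs/ComicTagger
        return Path.home() / "Library" / "Logs" / "ComicTagger"
    # ~/.cache/ComicTagger/log on Linux
    return Path.home() / ".cache" / "ComicTagger" / "log"
```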
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. Fedora]
- Version: [e.g. 1.6.0b2]
- Where did you install ComicTagger from? [e.g. releases page]
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/feature_request.md vendored Normal file

@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: feature-request
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

.github/workflows/build.yaml vendored Normal file

@ -0,0 +1,98 @@
name: CI
env:
LC_COLLATE: en_US.UTF-8
on:
pull_request:
push:
branches:
- '**'
tags-ignore:
- '**'
jobs:
lint:
permissions:
checks: write
contents: read
pull-requests: write
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.9]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install build dependencies
run: |
python -m pip install flake8
- uses: reviewdog/action-setup@v1
with:
reviewdog_version: nightly
- run: flake8 | reviewdog -f=flake8 -reporter=github-pr-review -tee -level=error -fail-on-error
env:
REVIEWDOG_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
build-and-test:
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.9, 3.13]
os: [ubuntu-22.04, macos-13, macos-14, windows-latest]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install tox
run: |
python -m pip install --upgrade --upgrade-strategy eager tox
- name: Install macos dependencies
run: |
brew upgrade icu4c pkg-config || brew install icu4c pkg-config
if: runner.os == 'macOS'
- name: Install linux dependencies
run: |
sudo apt-get update && sudo apt-get upgrade && sudo apt-get install pkg-config libicu-dev libqt6gui6 libfuse2 desktop-file-utils
if: runner.os == 'Linux'
- name: Build and install PyPi packages
run: |
export PKG_CONFIG_PATH="/usr/local/opt/icu4c/lib/pkgconfig:/opt/homebrew/opt/icu4c/lib/pkgconfig${PKG_CONFIG_PATH+:$PKG_CONFIG_PATH}";
export PATH="/usr/local/opt/icu4c/bin:/usr/local/opt/icu4c/sbin${PATH+:$PATH}"
python -m tox r -m build
shell: bash
- name: Archive production artifacts
uses: actions/upload-artifact@v4
with:
name: "${{ format('ComicTagger-{0}', matrix.os) }}"
path: |
dist/*.whl
dist/binary/*.zip
dist/binary/*.tar.gz
dist/binary/*.dmg
dist/binary/*.AppImage
if: matrix.python-version == 3.13  # assumed fix: the matrix variable is python-version and 3.12 is not in the matrix
- name: PyTest
run: |
python -m tox p -e py${{ matrix.python-version }}-none,py${{ matrix.python-version }}-gui,py${{ matrix.python-version }}-7z,py${{ matrix.python-version }}-cbr,py${{ matrix.python-version }}-all
shell: bash

.github/workflows/contributions.yaml vendored Normal file

@ -0,0 +1,43 @@
name: Contributions
on:
push:
branches:
- 'develop'
tags-ignore:
- '**'
jobs:
contrib-readme-job:
permissions:
contents: write
runs-on: ubuntu-latest
env:
CI_COMMIT_AUTHOR: github-actions[bot]
CI_COMMIT_EMAIL: <41898282+github-actions[bot]@users.noreply.github.com>
CI_COMMIT_MESSAGE: Update AUTHORS
name: A job to automate contrib in readme
steps:
- name: Contribute List
uses: akhilmhdh/contributors-readme-action@v2.3.6
with:
use_username: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Update AUTHORS
run: |
git config --global log.mailmap true
git log --reverse '--format=%aN <%aE>' | cat -n | sort -uk2 | sort -n | cut -f2- >AUTHORS
- name: Commit and push AUTHORS
run: |
if ! git diff --exit-code; then
git pull
git config --global user.name "${{ env.CI_COMMIT_AUTHOR }}"
git config --global user.email "${{ env.CI_COMMIT_EMAIL }}"
git commit -a -m "${{ env.CI_COMMIT_MESSAGE }}"
git push
fi

.github/workflows/package.yaml vendored Normal file

@ -0,0 +1,76 @@
name: Package
env:
LC_COLLATE: en_US.UTF-8
on:
push:
tags:
- "[0-9]+.[0-9]+.[0-9]+*"
jobs:
package:
permissions:
# IMPORTANT: this permission is mandatory for trusted publishing
id-token: write
contents: write
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.13]
os: [ubuntu-22.04, ubuntu-22.04-arm, macos-13, macos-14, windows-latest]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install tox
run: |
python -m pip install --upgrade --upgrade-strategy eager tox
- name: Install macos dependencies
run: |
brew upgrade icu4c pkg-config || brew install icu4c pkg-config
if: runner.os == 'macOS'
- name: Install linux dependencies
run: |
sudo apt-get update && sudo apt-get upgrade && sudo apt-get install pkg-config libicu-dev libqt6gui6 libfuse2 desktop-file-utils
if: runner.os == 'Linux'
- name: Build, Install and Test PyPi packages
run: |
export PKG_CONFIG_PATH="/usr/local/opt/icu4c/lib/pkgconfig:/opt/homebrew/opt/icu4c/lib/pkgconfig${PKG_CONFIG_PATH+:$PKG_CONFIG_PATH}";
export PATH="/usr/local/opt/icu4c/bin:/usr/local/opt/icu4c/sbin${PATH+:$PATH}"
python -m tox p
- name: Release PyPi package
run: |
python -m tox r -e pypi-upload
shell: bash
if: matrix.os == 'ubuntu-22.04'
- name: Get release name
shell: bash
run: |
git fetch --depth=1 origin +refs/tags/*:refs/tags/* # github is dumb
echo "release_name=$(git tag -l --format "%(refname:strip=2): %(contents:lines=1)" ${{ github.ref_name }})" >> $GITHUB_ENV
- name: Release
uses: softprops/action-gh-release@v2
if: startsWith(github.ref, 'refs/tags/')
with:
name: "${{ env.release_name }}"
prerelease: "${{ contains(github.ref, '-') }}" # alpha-releases should be 1.3.0-alpha.x full releases should be 1.3.0
draft: false
# upload the single application zip file for each OS and include the wheel built on linux
files: |
dist/binary/*.zip
dist/binary/*.tar.gz
dist/binary/*.dmg
dist/binary/*.AppImage
dist/*${{ fromJSON('["never", ""]')[matrix.os == 'ubuntu-22.04'] }}.whl

.gitignore vendored Normal file

@ -0,0 +1,160 @@
# generated by setuptools_scm
ctversion.py
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion
*.iml
## Directory-based project format:
.idea/
### Other editors
.*.swp
nbproject/
.vscode
comictaggerlib/_version.py
*.exe
*.zip
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# for testing
temp/

.mailmap Normal file

@ -0,0 +1,9 @@
Andrew W. Buchanan <buchanan@difference.com>
Davide Romanini <d.romanini@cineca.it> <davide.romanini@gmail.com>
Davide Romanini <d.romanini@cineca.it> <user159033@92-63-141-211.rdns.melbourne.co.uk>
Michael Fitzurka <MichaelFitzurka@users.noreply.github.com> <MichaelFitzurka@github.com>
Timmy Welch <timmy@narnian.us>
beville <beville@users.noreply.github.com> <(no author)@6c5673fe-1810-88d6-992b-cd32ca31540c>
beville <beville@users.noreply.github.com> <beville@6c5673fe-1810-88d6-992b-cd32ca31540c>
beville <beville@users.noreply.github.com> <beville@gmail.com@6c5673fe-1810-88d6-992b-cd32ca31540c>
beville <beville@users.noreply.github.com> <beville@users.noreply.github.com>

.pre-commit-config.yaml Normal file

@ -0,0 +1,46 @@
exclude: ^(scripts|comictaggerlib/graphics/resources.py)
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: debug-statements
- id: name-tests-test
- id: requirements-txt-fixer
- repo: https://github.com/asottile/setup-cfg-fmt
rev: v2.8.0
hooks:
- id: setup-cfg-fmt
- repo: https://github.com/asottile/pyupgrade
rev: v3.19.1
hooks:
- id: pyupgrade
args: [--py39-plus]
- repo: https://github.com/PyCQA/autoflake
rev: v2.3.1
hooks:
- id: autoflake
args: [-i, --remove-all-unused-imports, --ignore-init-module-imports]
- repo: https://github.com/PyCQA/isort
rev: 6.0.1
hooks:
- id: isort
args: [--af,--add-import, 'from __future__ import annotations']
- repo: https://github.com/psf/black
rev: 25.1.0
hooks:
- id: black
- repo: https://github.com/PyCQA/flake8
rev: 7.2.0
hooks:
- id: flake8
additional_dependencies: [flake8-encodings, flake8-builtins, flake8-print, flake8-no-nested-comprehensions]
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.15.0
hooks:
- id: mypy
additional_dependencies: [types-setuptools, types-requests, settngs>=0.10.4, pillow>=9.1.0]
ci:
skip: [mypy]

AUTHORS Normal file

@ -0,0 +1,23 @@
beville <beville@users.noreply.github.com>
Davide Romanini <d.romanini@cineca.it>
fcanc <f.canc@icloud.com>
Alban Seurat <alkpone@alkpone.com>
tlc <tlc@users.noreply.github.com>
Marek Pawlak <francuz14@gmail.com>
Timmy Welch <timmy@narnian.us>
J.P. Cranford <philipcranford4@gmail.com>
thFrgttn <39759781+thFrgttn@users.noreply.github.com>
Andrew W. Buchanan <buchanan@difference.com>
Michael Fitzurka <MichaelFitzurka@users.noreply.github.com>
Richard Haussmann <richard.haussmann@gmail.com>
Mizaki <jinxybob@hotmail.com>
Xavier Jouvenot <x.jouvenot@gmail.com>
github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Ben Longman <deck@steamdeck.lan>
Sven Hesse <drmccoy@drmccoy.de>
pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
kcgthb <kcgthb@users.noreply.github.com>
Kilian Cavalotti <kcgthb@users.noreply.github.com>
David Bugl <david.bugl@gmx.at>
HSN <64664577+N-Hertstein@users.noreply.github.com>
Emmanuel Ferdman <emmanuelferdman@gmail.com>

CONTRIBUTING.md Normal file

@ -0,0 +1,98 @@
# How to contribute
If you're not sure what you can do, need to ask a question, or just want to talk about ComicTagger, head over to the [discussions tab](https://github.com/comictagger/comictagger/discussions/categories/general) and start a discussion.
## Tests
We have tests written using pytest! Some of them even pass! If you are contributing code any tests you can write are appreciated.
A great place to start is extending the tests that are already made.
For example, the file tests/filenames.py has lists of filenames to be parsed, in the format:
```py
pytest.param(
"Star Wars - War of the Bounty Hunters - IG-88 (2021) (Digital) (Kileko-Empire).cbz",
"number ends series, no-issue",
{
"issue": "",
"series": "Star Wars - War of the Bounty Hunters - IG-88",
"volume": "",
"year": "2021",
"remainder": "(Digital) (Kileko-Empire)",
"issue_count": "",
},
marks=pytest.mark.xfail,
)
```
A test consists of 3-4 parts
1. The filename to be parsed
2. The reason it might fail
3. What the result of parsing the filename should be
4. `marks=pytest.mark.xfail` This marks the test as expected to fail
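For contrast, a hypothetical passing case in the same format might look like the sketch below. The filename and the expected field values are illustrative assumptions rather than entries from the real test data, and it omits `marks=pytest.mark.xfail` because the case is expected to pass:
```py
pytest.param(
    "Batman #5 (1940).cbz",  # hypothetical filename, not taken from tests/filenames.py
    "plain series, issue and year",
    {
        "issue": "5",
        "series": "Batman",
        "volume": "",
        "year": "1940",
        "remainder": "",
        "issue_count": "",
    },
    # no xfail mark: this case is expected to parse successfully
)
```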
If you are not comfortable creating a pull request, you can [open an issue](https://github.com/comictagger/comictagger/issues/new/choose) or [start a discussion](https://github.com/comictagger/comictagger/discussions/new).
## Submitting changes
Please open a [GitHub Pull Request](https://github.com/comictagger/comictagger/pull/new/develop) with a clear list of what you've done (read more about [pull requests](http://help.github.com/pull-requests/)). When you send a pull request, we will love you forever if you include tests. We can always use more test coverage. Please run the code tools below and make sure all of your commits are atomic (one feature per commit).
## Contributing Code
Currently only Python 3.9 is supported; however, 3.10 will probably work if you try it.
Those on Linux should install `Pillow` from the system package manager if possible, and if using the GUI, `PyQt6` should also be installed from the system package manager.
Those on macOS will need to ensure that they are using python3 in x86 mode, either by installing an x86-only version of Python or by using the universal installer and running `python3-intel64` instead of `python3`.
1. Clone the repository
```
git clone https://github.com/comictagger/comictagger.git
```
2. It is preferred to use a virtual env for running from source:
```
python3 -m venv venv
```
3. Activate the virtual env:
```
. venv/bin/activate
```
or, if on Windows PowerShell:
```
. venv/Scripts/Activate.ps1
```
4. Install tox:
```bash
pip install tox
```
5. If you are on an M1 Mac you will need to export two environment variables for tests to pass.
```
export tox_python=python3.9-intel64
export tox_env=m1env
```
6. Install ComicTagger:
```
tox run -e venv
```
7. Make your changes
8. Build to ensure that your changes work: this will produce a binary build in the dist folder
```bash
tox run -m build
```
The build runs these formatters and linters automatically:
* setup-cfg-fmt: Formats the setup.cfg file
* autoflake: Removes unused imports
* isort: sorts imports so that you can always find where an import is located
* black: formats all of the code consistently so there are no surprises
* flake8: checks for code quality and style (warns for unused imports and similar issues)
* mypy: checks the types of variables and functions to catch errors
* pytest: runs tests for ComicTagger functionality

LICENSE Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -1,25 +0,0 @@
TAGGER_BASE := $(HOME)/Dropbox/tagger/comictagger
VERSION_STR := $(shell grep version $(TAGGER_BASE)/ctversion.py| cut -d= -f2 | sed 's/\"//g')
all: clean
clean:
rm -f *~ *.pyc *.pyo
rm -f logdict*.log
make -C mac clean
make -C windows clean
zip:
cd release; \
rm -rf *zip comictagger-src-$(VERSION_STR) ; \
svn export https://comictagger.googlecode.com/svn/trunk/ comictagger-src-$(VERSION_STR); \
zip -r comictagger-src-$(VERSION_STR).zip comictagger-src-$(VERSION_STR); \
rm -rf comictagger-src-$(VERSION_STR)
@echo When satisfied with release, do this:
@echo make svn_tag
svn_tag:
svn copy https://comictagger.googlecode.com/svn/trunk \
https://comictagger.googlecode.com/svn/tags/$(VERSION_STR) -m "Release $(VERSION_STR)"

README.md Normal file

@ -0,0 +1,221 @@
[![CI](https://github.com/comictagger/comictagger/actions/workflows/build.yaml/badge.svg?branch=develop&event=push)](https://github.com/comictagger/comictagger/actions/workflows/build.yaml)
[![GitHub release (latest by date)](https://img.shields.io/github/downloads/comictagger/comictagger/latest/total)](https://github.com/comictagger/comictagger/releases/latest)
[![PyPI](https://img.shields.io/pypi/v/comictagger)](https://pypi.org/project/comictagger/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/comictagger)](https://pypistats.org/packages/comictagger)
[![Chocolatey package](https://img.shields.io/chocolatey/dt/comictagger?color=blue&label=chocolatey)](https://community.chocolatey.org/packages/comictagger)
[![WinGet](https://img.shields.io/winget/v/ComicTagger.ComicTagger)](https://github.com/microsoft/winget-pkgs/tree/master/manifests/c/ComicTagger/ComicTagger)
[![PyPI - License](https://img.shields.io/pypi/l/comictagger)](https://opensource.org/licenses/Apache-2.0)
[![GitHub Discussions](https://img.shields.io/github/discussions/comictagger/comictagger)](https://github.com/comictagger/comictagger/discussions)
[![Gitter chat](https://badges.gitter.im/gitterHQ/gitter.png)](https://gitter.im/comictagger/community)
[![Google Group](https://img.shields.io/badge/discuss-on%20groups-%23207de5)](https://groups.google.com/forum/#!forum/comictagger)
[![Twitter](https://img.shields.io/badge/%40comictagger-twitter-lightgrey)](https://twitter.com/comictagger)
[![Facebook](https://img.shields.io/badge/comictagger-facebook-lightgrey)](https://www.facebook.com/ComicTagger-139615369550787/)
# ComicTagger
ComicTagger is a **multi-platform** app for **writing metadata to digital comics**, written in Python and PyQt.
![ComicTagger logo](https://raw.githubusercontent.com/comictagger/comictagger/develop/comictaggerlib/graphics/app.png)
## Features
* Runs on macOS, Microsoft Windows, and Linux systems
* Get comic information from [Comic Vine](https://comicvine.gamespot.com/)
* **Automatic issue matching** using advanced image processing techniques
* **Batch processing** in the GUI for tagging hundreds or more comics at a time
* Support for **ComicRack** and **ComicBookLover** tagging formats
* Native full support for **CBZ** digital comics
* Native read-only support for **CBR** digital comics: full support can be enabled by installing additional [rar tools](https://www.rarlab.com/download.htm)
* Command line interface (CLI) enabling **custom scripting** and **batch operations on large collections**
For details, screen-shots, and more, visit [the Wiki](https://github.com/comictagger/comictagger/wiki)
## Installation
### Binaries
Windows, Linux, and macOS binaries are provided on the [Releases Page](https://github.com/comictagger/comictagger/releases).
Just unzip the archive in any folder and run it; no additional installation steps are required.
### PIP installation
A pip package is provided; you can install it with:
```
$ pip3 install comictagger[GUI]
```
There are optional dependencies. You can install them by specifying one or more of them in brackets, e.g. `comictagger[CBR,GUI]`.
Optional dependencies:
1. `ICU`: Ensures that comic pages are sorted correctly. This should always be installed. *Currently only exists in the latest alpha release.*
1. `CBR`: Provides support for CBR/RAR files.
1. `GUI`: Installs the GUI.
1. `7Z`: Provides support for CB7/7Z files.
1. `all`: Installs all of the above optional dependencies.
### Chocolatey installation (Windows only)
A [Chocolatey package](https://community.chocolatey.org/packages/comictagger), maintained by @Xav83, is provided; you can install it with:
```powershell
choco install comictagger
```
### WinGet installation (Windows only)
A [WinGet package](https://github.com/microsoft/winget-pkgs/tree/master/manifests/c/ComicTagger/ComicTagger), maintained by @Sn1cket, is provided; you can install it with:
```powershell
winget install ComicTagger.ComicTagger
```
### From source
1. Ensure you have python 3.9 installed
2. Clone this repository `git clone https://github.com/comictagger/comictagger.git`
3. `pip3 install .[ICU]` or `pip3 install .[GUI,ICU]`
## Contributors
<!-- readme: beville,davide-romanini,collaborators,contributors -start -->
<table>
<tr>
<td align="center">
<a href="https://github.com/beville">
<img src="https://avatars.githubusercontent.com/u/7294848?v=4" width="100;" alt="beville"/>
<br />
<sub><b>beville</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/davide-romanini">
<img src="https://avatars.githubusercontent.com/u/731199?v=4" width="100;" alt="davide-romanini"/>
<br />
<sub><b>davide-romanini</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/fcanc">
<img src="https://avatars.githubusercontent.com/u/4999486?v=4" width="100;" alt="fcanc"/>
<br />
<sub><b>fcanc</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/lordwelch">
<img src="https://avatars.githubusercontent.com/u/7547075?v=4" width="100;" alt="lordwelch"/>
<br />
<sub><b>lordwelch</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/mizaki">
<img src="https://avatars.githubusercontent.com/u/1141189?v=4" width="100;" alt="mizaki"/>
<br />
<sub><b>mizaki</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/MichaelFitzurka">
<img src="https://avatars.githubusercontent.com/u/27830765?v=4" width="100;" alt="MichaelFitzurka"/>
<br />
<sub><b>MichaelFitzurka</b></sub>
</a>
</td></tr>
<tr>
<td align="center">
<a href="https://github.com/abuchanan920">
<img src="https://avatars.githubusercontent.com/u/368793?v=4" width="100;" alt="abuchanan920"/>
<br />
<sub><b>abuchanan920</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/N-Hertstein">
<img src="https://avatars.githubusercontent.com/u/64664577?v=4" width="100;" alt="N-Hertstein"/>
<br />
<sub><b>N-Hertstein</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/kcgthb">
<img src="https://avatars.githubusercontent.com/u/186807?v=4" width="100;" alt="kcgthb"/>
<br />
<sub><b>kcgthb</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/rhaussmann">
<img src="https://avatars.githubusercontent.com/u/7084007?v=4" width="100;" alt="rhaussmann"/>
<br />
<sub><b>rhaussmann</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/AlbanSeurat">
<img src="https://avatars.githubusercontent.com/u/500180?v=4" width="100;" alt="AlbanSeurat"/>
<br />
<sub><b>AlbanSeurat</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/Sn1cket">
<img src="https://avatars.githubusercontent.com/u/32904645?v=4" width="100;" alt="Sn1cket"/>
<br />
<sub><b>Sn1cket</b></sub>
</a>
</td></tr>
<tr>
<td align="center">
<a href="https://github.com/emmanuel-ferdman">
<img src="https://avatars.githubusercontent.com/u/35470921?v=4" width="100;" alt="emmanuel-ferdman"/>
<br />
<sub><b>emmanuel-ferdman</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/jpcranford">
<img src="https://avatars.githubusercontent.com/u/21347202?v=4" width="100;" alt="jpcranford"/>
<br />
<sub><b>jpcranford</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/PawlakMarek">
<img src="https://avatars.githubusercontent.com/u/26022173?v=4" width="100;" alt="PawlakMarek"/>
<br />
<sub><b>PawlakMarek</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/DrMcCoy">
<img src="https://avatars.githubusercontent.com/u/156130?v=4" width="100;" alt="DrMcCoy"/>
<br />
<sub><b>DrMcCoy</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/Xav83">
<img src="https://avatars.githubusercontent.com/u/6787157?v=4" width="100;" alt="Xav83"/>
<br />
<sub><b>Xav83</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/thFrgttn">
<img src="https://avatars.githubusercontent.com/u/39759781?v=4" width="100;" alt="thFrgttn"/>
<br />
<sub><b>thFrgttn</b></sub>
</a>
</td></tr>
<tr>
<td align="center">
<a href="https://github.com/tlc">
<img src="https://avatars.githubusercontent.com/u/19436?v=4" width="100;" alt="tlc"/>
<br />
<sub><b>tlc</b></sub>
</a>
</td></tr>
</table>
<!-- readme: beville,davide-romanini,collaborators,contributors -end -->


@ -1,18 +0,0 @@
The unrar.dll library is freeware. This means:
1. All copyrights to RAR and the unrar.dll are exclusively
owned by the author - Alexander Roshal.
2. The unrar.dll library may be used in any software to handle RAR
archives without limitations free of charge.
3. THE RAR ARCHIVER AND THE UNRAR.DLL LIBRARY ARE DISTRIBUTED "AS IS".
NO WARRANTY OF ANY KIND IS EXPRESSED OR IMPLIED. YOU USE AT
YOUR OWN RISK. THE AUTHOR WILL NOT BE LIABLE FOR DATA LOSS,
DAMAGES, LOSS OF PROFITS OR ANY OTHER KIND OF LOSS WHILE USING
OR MISUSING THIS SOFTWARE.
Thank you for your interest in RAR and unrar.dll.
Alexander L. Roshal

Binary file not shown.


@ -1,140 +0,0 @@
#ifndef _UNRAR_DLL_
#define _UNRAR_DLL_
#define ERAR_END_ARCHIVE 10
#define ERAR_NO_MEMORY 11
#define ERAR_BAD_DATA 12
#define ERAR_BAD_ARCHIVE 13
#define ERAR_UNKNOWN_FORMAT 14
#define ERAR_EOPEN 15
#define ERAR_ECREATE 16
#define ERAR_ECLOSE 17
#define ERAR_EREAD 18
#define ERAR_EWRITE 19
#define ERAR_SMALL_BUF 20
#define ERAR_UNKNOWN 21
#define ERAR_MISSING_PASSWORD 22
#define RAR_OM_LIST 0
#define RAR_OM_EXTRACT 1
#define RAR_OM_LIST_INCSPLIT 2
#define RAR_SKIP 0
#define RAR_TEST 1
#define RAR_EXTRACT 2
#define RAR_VOL_ASK 0
#define RAR_VOL_NOTIFY 1
#define RAR_DLL_VERSION 4
#ifdef _UNIX
#define CALLBACK
#define PASCAL
#define LONG long
#define HANDLE void *
#define LPARAM long
#define UINT unsigned int
#endif
struct RARHeaderData
{
char ArcName[260];
char FileName[260];
unsigned int Flags;
unsigned int PackSize;
unsigned int UnpSize;
unsigned int HostOS;
unsigned int FileCRC;
unsigned int FileTime;
unsigned int UnpVer;
unsigned int Method;
unsigned int FileAttr;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
};
struct RARHeaderDataEx
{
char ArcName[1024];
wchar_t ArcNameW[1024];
char FileName[1024];
wchar_t FileNameW[1024];
unsigned int Flags;
unsigned int PackSize;
unsigned int PackSizeHigh;
unsigned int UnpSize;
unsigned int UnpSizeHigh;
unsigned int HostOS;
unsigned int FileCRC;
unsigned int FileTime;
unsigned int UnpVer;
unsigned int Method;
unsigned int FileAttr;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
unsigned int Reserved[1024];
};
struct RAROpenArchiveData
{
char *ArcName;
unsigned int OpenMode;
unsigned int OpenResult;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
};
struct RAROpenArchiveDataEx
{
char *ArcName;
wchar_t *ArcNameW;
unsigned int OpenMode;
unsigned int OpenResult;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
unsigned int Flags;
unsigned int Reserved[32];
};
enum UNRARCALLBACK_MESSAGES {
UCM_CHANGEVOLUME,UCM_PROCESSDATA,UCM_NEEDPASSWORD
};
typedef int (CALLBACK *UNRARCALLBACK)(UINT msg,LPARAM UserData,LPARAM P1,LPARAM P2);
typedef int (PASCAL *CHANGEVOLPROC)(char *ArcName,int Mode);
typedef int (PASCAL *PROCESSDATAPROC)(unsigned char *Addr,int Size);
#ifdef __cplusplus
extern "C" {
#endif
HANDLE PASCAL RAROpenArchive(struct RAROpenArchiveData *ArchiveData);
HANDLE PASCAL RAROpenArchiveEx(struct RAROpenArchiveDataEx *ArchiveData);
int PASCAL RARCloseArchive(HANDLE hArcData);
int PASCAL RARReadHeader(HANDLE hArcData,struct RARHeaderData *HeaderData);
int PASCAL RARReadHeaderEx(HANDLE hArcData,struct RARHeaderDataEx *HeaderData);
int PASCAL RARProcessFile(HANDLE hArcData,int Operation,char *DestPath,char *DestName);
int PASCAL RARProcessFileW(HANDLE hArcData,int Operation,wchar_t *DestPath,wchar_t *DestName);
void PASCAL RARSetCallback(HANDLE hArcData,UNRARCALLBACK Callback,LPARAM UserData);
void PASCAL RARSetChangeVolProc(HANDLE hArcData,CHANGEVOLPROC ChangeVolProc);
void PASCAL RARSetProcessDataProc(HANDLE hArcData,PROCESSDATAPROC ProcessDataProc);
void PASCAL RARSetPassword(HANDLE hArcData,char *Password);
int PASCAL RARGetDllVersion();
#ifdef __cplusplus
}
#endif
#endif

Binary file not shown.


@ -1,606 +0,0 @@
UnRAR.dll Manual
~~~~~~~~~~~~~~~~
UnRAR.dll is a 32-bit Windows dynamic-link library which provides
file extraction from RAR archives.
Exported functions
====================================================================
HANDLE PASCAL RAROpenArchive(struct RAROpenArchiveData *ArchiveData)
====================================================================
Description
~~~~~~~~~~~
Open RAR archive and allocate memory structures
Parameters
~~~~~~~~~~
ArchiveData Points to RAROpenArchiveData structure
struct RAROpenArchiveData
{
char *ArcName;
UINT OpenMode;
UINT OpenResult;
char *CmtBuf;
UINT CmtBufSize;
UINT CmtSize;
UINT CmtState;
};
Structure fields:
ArcName
Input parameter which should point to zero terminated string
containing the archive name.
OpenMode
Input parameter.
Possible values
RAR_OM_LIST
Open archive for reading file headers only.
RAR_OM_EXTRACT
Open archive for testing and extracting files.
RAR_OM_LIST_INCSPLIT
Open archive for reading file headers only. If you open an archive
in such mode, RARReadHeader[Ex] will return all file headers,
including those with "file continued from previous volume" flag.
In case of RAR_OM_LIST such headers are automatically skipped.
So if you process RAR volumes in RAR_OM_LIST_INCSPLIT mode, you will
get several file header records for same file if file is split between
volumes. For such files only the last file header record will contain
the correct file CRC and if you wish to get the correct packed size,
you need to sum up packed sizes of all parts.
OpenResult
Output parameter.
Possible values
0 Success
ERAR_NO_MEMORY Not enough memory to initialize data structures
ERAR_BAD_DATA Archive header broken
ERAR_BAD_ARCHIVE File is not valid RAR archive
ERAR_UNKNOWN_FORMAT Unknown encryption used for archive headers
ERAR_EOPEN File open error
CmtBuf
Input parameter which should point to the buffer for archive
comments. Maximum comment size is limited to 64Kb. Comment text is
zero terminated. If the comment text is larger than the buffer
size, the comment text will be truncated. If CmtBuf is set to
NULL, comments will not be read.
CmtBufSize
Input parameter which should contain size of buffer for archive
comments.
CmtSize
Output parameter containing size of comments actually read into the
buffer, cannot exceed CmtBufSize.
CmtState
Output parameter.
Possible values
0 comments not present
1 Comments read completely
ERAR_NO_MEMORY Not enough memory to extract comments
ERAR_BAD_DATA Broken comment
ERAR_UNKNOWN_FORMAT Unknown comment format
ERAR_SMALL_BUF Buffer too small, comments not completely read
Return values
~~~~~~~~~~~~~
Archive handle or NULL in case of error
========================================================================
HANDLE PASCAL RAROpenArchiveEx(struct RAROpenArchiveDataEx *ArchiveData)
========================================================================
Description
~~~~~~~~~~~
Similar to RAROpenArchive, but uses RAROpenArchiveDataEx structure
allowing to specify Unicode archive name and returning information
about archive flags.
Parameters
~~~~~~~~~~
ArchiveData Points to RAROpenArchiveDataEx structure
struct RAROpenArchiveDataEx
{
char *ArcName;
wchar_t *ArcNameW;
unsigned int OpenMode;
unsigned int OpenResult;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
unsigned int Flags;
unsigned int Reserved[32];
};
Structure fields:
ArcNameW
Input parameter which should point to zero terminated Unicode string
containing the archive name or NULL if Unicode name is not specified.
Flags
Output parameter. Combination of bit flags.
Possible values
0x0001 - Volume attribute (archive volume)
0x0002 - Archive comment present
0x0004 - Archive lock attribute
0x0008 - Solid attribute (solid archive)
0x0010 - New volume naming scheme ('volname.partN.rar')
0x0020 - Authenticity information present
0x0040 - Recovery record present
0x0080 - Block headers are encrypted
0x0100 - First volume (set only by RAR 3.0 and later)
Reserved[32]
Reserved for future use. Must be zero.
Information on other structure fields and function return values
is available above, in RAROpenArchive function description.
====================================================================
int PASCAL RARCloseArchive(HANDLE hArcData)
====================================================================
Description
~~~~~~~~~~~
Close RAR archive and release allocated memory. It must be called when
archive processing is finished, even if the archive processing was stopped
due to an error.
Parameters
~~~~~~~~~~
hArcData
This parameter should contain the archive handle obtained from the
RAROpenArchive function call.
Return values
~~~~~~~~~~~~~
0 Success
ERAR_ECLOSE Archive close error
====================================================================
int PASCAL RARReadHeader(HANDLE hArcData,
struct RARHeaderData *HeaderData)
====================================================================
Description
~~~~~~~~~~~
Read header of file in archive.
Parameters
~~~~~~~~~~
hArcData
This parameter should contain the archive handle obtained from the
RAROpenArchive function call.
HeaderData
It should point to RARHeaderData structure:
struct RARHeaderData
{
char ArcName[260];
char FileName[260];
UINT Flags;
UINT PackSize;
UINT UnpSize;
UINT HostOS;
UINT FileCRC;
UINT FileTime;
UINT UnpVer;
UINT Method;
UINT FileAttr;
char *CmtBuf;
UINT CmtBufSize;
UINT CmtSize;
UINT CmtState;
};
Structure fields:
ArcName
Output parameter which contains a zero terminated string of the
current archive name. May be used to determine the current volume
name.
FileName
Output parameter which contains a zero terminated string of the
file name in OEM (DOS) encoding.
Flags
Output parameter which contains file flags:
0x01 - file continued from previous volume
0x02 - file continued on next volume
0x04 - file encrypted with password
0x08 - file comment present
0x10 - compression of previous files is used (solid flag)
bits 7 6 5
0 0 0 - dictionary size 64 Kb
0 0 1 - dictionary size 128 Kb
0 1 0 - dictionary size 256 Kb
0 1 1 - dictionary size 512 Kb
1 0 0 - dictionary size 1024 Kb
1 0 1 - dictionary size 2048 KB
1 1 0 - dictionary size 4096 KB
1 1 1 - file is directory
Other bits are reserved.
PackSize
Output parameter means packed file size or size of the
file part if file was split between volumes.
UnpSize
Output parameter - unpacked file size.
HostOS
Output parameter - operating system used for archiving:
0 - MS DOS;
1 - OS/2.
2 - Win32
3 - Unix
FileCRC
Output parameter which contains unpacked file CRC. In case of file parts
split between volumes only the last part contains the correct CRC
and it is accessible only in RAR_OM_LIST_INCSPLIT listing mode.
FileTime
Output parameter - contains date and time in standard MS DOS format.
UnpVer
Output parameter - RAR version needed to extract file.
It is encoded as 10 * Major version + minor version.
Method
Output parameter - packing method.
FileAttr
Output parameter - file attributes.
CmtBuf
File comments support is not implemented in the new DLL version yet.
Now CmtState is always 0.
/*
* Input parameter which should point to the buffer for file
* comments. Maximum comment size is limited to 64Kb. Comment text is
* a zero terminated string in OEM encoding. If the comment text is
* larger than the buffer size, the comment text will be truncated.
* If CmtBuf is set to NULL, comments will not be read.
*/
CmtBufSize
Input parameter which should contain size of buffer for archive
comments.
CmtSize
Output parameter containing size of comments actually read into the
buffer, should not exceed CmtBufSize.
CmtState
Output parameter.
Possible values
0 Absent comments
1 Comments read completely
ERAR_NO_MEMORY Not enough memory to extract comments
ERAR_BAD_DATA Broken comment
ERAR_UNKNOWN_FORMAT Unknown comment format
ERAR_SMALL_BUF Buffer too small, comments not completely read
Return values
~~~~~~~~~~~~~
0 Success
ERAR_END_ARCHIVE End of archive
ERAR_BAD_DATA File header broken
====================================================================
int PASCAL RARReadHeaderEx(HANDLE hArcData,
struct RARHeaderDataEx *HeaderData)
====================================================================
Description
~~~~~~~~~~~
Similar to RARReadHeader, but uses RARHeaderDataEx structure,
containing information about Unicode file names and 64 bit file sizes.
struct RARHeaderDataEx
{
char ArcName[1024];
wchar_t ArcNameW[1024];
char FileName[1024];
wchar_t FileNameW[1024];
unsigned int Flags;
unsigned int PackSize;
unsigned int PackSizeHigh;
unsigned int UnpSize;
unsigned int UnpSizeHigh;
unsigned int HostOS;
unsigned int FileCRC;
unsigned int FileTime;
unsigned int UnpVer;
unsigned int Method;
unsigned int FileAttr;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
unsigned int Reserved[1024];
};
====================================================================
int PASCAL RARProcessFile(HANDLE hArcData,
int Operation,
char *DestPath,
char *DestName)
====================================================================
Description
~~~~~~~~~~~
Performs action and moves the current position in the archive to
the next file. Extract or test the current file from the archive
opened in RAR_OM_EXTRACT mode. If the mode RAR_OM_LIST is set,
then a call to this function will simply skip the archive position
to the next file.
Parameters
~~~~~~~~~~
hArcData
This parameter should contain the archive handle obtained from the
RAROpenArchive function call.
Operation
File operation.
Possible values
RAR_SKIP Move to the next file in the archive. If the
archive is solid and RAR_OM_EXTRACT mode was set
when the archive was opened, the current file will
be processed - the operation will be performed
slower than a simple seek.
RAR_TEST Test the current file and move to the next file in
the archive. If the archive was opened with
RAR_OM_LIST mode, the operation is equal to
RAR_SKIP.
RAR_EXTRACT Extract the current file and move to the next file.
If the archive was opened with RAR_OM_LIST mode,
the operation is equal to RAR_SKIP.
DestPath
This parameter should point to a zero terminated string containing the
destination directory to which to extract files to. If DestPath is equal
to NULL, it means extract to the current directory. This parameter has
meaning only if DestName is NULL.
DestName
This parameter should point to a string containing the full path and name
to assign to extracted file or it can be NULL to use the default name.
If DestName is defined (not NULL), it overrides both the original file
name saved in the archive and path specified in DestPath setting.
Both DestPath and DestName must be in OEM encoding. If necessary,
use CharToOem to convert text to OEM before passing to this function.
Return values
~~~~~~~~~~~~~
0 Success
ERAR_BAD_DATA File CRC error
ERAR_BAD_ARCHIVE Volume is not valid RAR archive
ERAR_UNKNOWN_FORMAT Unknown archive format
ERAR_EOPEN Volume open error
ERAR_ECREATE File create error
ERAR_ECLOSE File close error
ERAR_EREAD Read error
ERAR_EWRITE Write error
Note: if you wish to cancel extraction, return -1 when processing
UCM_PROCESSDATA callback message.
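To make the listing flow described above concrete, here is a minimal Python `ctypes` sketch that walks an archive's headers using the functions and constants documented in this manual. The shared-library filename, the `cp437` code page used to decode the OEM `FileName`, and the `list_rar` helper name are assumptions for illustration; this is not part of UnRAR.dll itself:
```py
from __future__ import annotations

import ctypes
import sys

# Constants from unrar.h / this manual
ERAR_END_ARCHIVE = 10
RAR_OM_LIST = 0
RAR_SKIP = 0


class RAROpenArchiveData(ctypes.Structure):
    # Mirrors struct RAROpenArchiveData from unrar.h
    _fields_ = [
        ("ArcName", ctypes.c_char_p),
        ("OpenMode", ctypes.c_uint),
        ("OpenResult", ctypes.c_uint),
        ("CmtBuf", ctypes.c_char_p),
        ("CmtBufSize", ctypes.c_uint),
        ("CmtSize", ctypes.c_uint),
        ("CmtState", ctypes.c_uint),
    ]


class RARHeaderData(ctypes.Structure):
    # Mirrors struct RARHeaderData from unrar.h
    _fields_ = [
        ("ArcName", ctypes.c_char * 260),
        ("FileName", ctypes.c_char * 260),
        ("Flags", ctypes.c_uint),
        ("PackSize", ctypes.c_uint),
        ("UnpSize", ctypes.c_uint),
        ("HostOS", ctypes.c_uint),
        ("FileCRC", ctypes.c_uint),
        ("FileTime", ctypes.c_uint),
        ("UnpVer", ctypes.c_uint),
        ("Method", ctypes.c_uint),
        ("FileAttr", ctypes.c_uint),
        ("CmtBuf", ctypes.c_char_p),
        ("CmtBufSize", ctypes.c_uint),
        ("CmtSize", ctypes.c_uint),
        ("CmtState", ctypes.c_uint),
    ]


def list_rar(archive: str, library: str = "unrar.dll") -> list[str]:
    """Return the file names stored in a RAR archive by walking its headers."""
    unrar = ctypes.WinDLL(library) if sys.platform == "win32" else ctypes.CDLL(library)
    unrar.RAROpenArchive.restype = ctypes.c_void_p  # HANDLE

    data = RAROpenArchiveData(ArcName=archive.encode(), OpenMode=RAR_OM_LIST)
    handle = unrar.RAROpenArchive(ctypes.byref(data))
    if not handle or data.OpenResult != 0:
        raise OSError(f"RAROpenArchive failed with code {data.OpenResult}")

    names: list[str] = []
    header = RARHeaderData()
    try:
        while True:
            result = unrar.RARReadHeader(ctypes.c_void_p(handle), ctypes.byref(header))
            if result == ERAR_END_ARCHIVE:
                break
            if result != 0:
                raise OSError(f"RARReadHeader failed with code {result}")
            names.append(header.FileName.decode("cp437", errors="replace"))
            # RAR_SKIP moves to the next entry without extracting anything
            unrar.RARProcessFile(ctypes.c_void_p(handle), RAR_SKIP, None, None)
    finally:
        unrar.RARCloseArchive(ctypes.c_void_p(handle))
    return names
```
Extraction would follow the same pattern, but with RAR_OM_EXTRACT as the open mode and RAR_EXTRACT (or RAR_TEST) passed to RARProcessFile, as described above.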
====================================================================
int PASCAL RARProcessFileW(HANDLE hArcData,
int Operation,
wchar_t *DestPath,
wchar_t *DestName)
====================================================================
Description
~~~~~~~~~~~
Unicode version of RARProcessFile. It uses Unicode DestPath
and DestName parameters, other parameters and return values
are the same as in RARProcessFile.
====================================================================
void PASCAL RARSetCallback(HANDLE hArcData,
int PASCAL (*CallbackProc)(UINT msg,LPARAM UserData,LPARAM P1,LPARAM P2),
LPARAM UserData);
====================================================================
Description
~~~~~~~~~~~
Set a user-defined callback function to process Unrar events.
Parameters
~~~~~~~~~~
hArcData
This parameter should contain the archive handle obtained from the
RAROpenArchive function call.
CallbackProc
It should point to a user-defined callback function.
The function will be passed four parameters:
msg Type of event. Described below.
UserData User defined value passed to RARSetCallback.
P1 and P2 Event dependent parameters. Described below.
Possible events
UCM_CHANGEVOLUME Process volume change.
P1 Points to the zero terminated name
of the next volume.
P2 The function call mode:
RAR_VOL_ASK Required volume is absent. The function should
prompt user and return a positive value
to retry or return -1 value to terminate
operation. The function may also specify a new
volume name, placing it to the address specified
by P1 parameter.
RAR_VOL_NOTIFY Required volume is successfully opened.
This is a notification call and volume name
modification is not allowed. The function should
return a positive value to continue or -1
to terminate operation.
UCM_PROCESSDATA Process unpacked data. It may be used to read
a file while it is being extracted or tested
without actual extracting file to disk.
Return a positive value to continue process
or -1 to cancel the archive operation
P1 Address pointing to the unpacked data.
Function may refer to the data but must not
change it.
P2 Size of the unpacked data. It is guaranteed
only that the size will not exceed the maximum
dictionary size (4 Mb in RAR 3.0).
UCM_NEEDPASSWORD DLL needs a password to process archive.
This message must be processed if you wish
to be able to handle archives with encrypted
file names. It can be also used as replacement
of RARSetPassword function even for usual
encrypted files with non-encrypted names.
P1 Address pointing to the buffer for a password.
You need to copy a password here.
P2 Size of the password buffer.
UserData
User data passed to callback function.
Other functions of UnRAR.dll should not be called from the callback
function.
Return values
~~~~~~~~~~~~~
None
====================================================================
void PASCAL RARSetChangeVolProc(HANDLE hArcData,
int PASCAL (*ChangeVolProc)(char *ArcName,int Mode));
====================================================================
Obsoleted, use RARSetCallback instead.
====================================================================
void PASCAL RARSetProcessDataProc(HANDLE hArcData,
int PASCAL (*ProcessDataProc)(unsigned char *Addr,int Size))
====================================================================
Obsoleted, use RARSetCallback instead.
====================================================================
void PASCAL RARSetPassword(HANDLE hArcData,
char *Password);
====================================================================
Description
~~~~~~~~~~~
Set a password to decrypt files.
Parameters
~~~~~~~~~~
hArcData
This parameter should contain the archive handle obtained from the
RAROpenArchive function call.
Password
It should point to a string containing a zero terminated password.
Return values
~~~~~~~~~~~~~
None
====================================================================
void PASCAL RARGetDllVersion();
====================================================================
Description
~~~~~~~~~~~
Returns API version.
Parameters
~~~~~~~~~~
None.
Return values
~~~~~~~~~~~~~
Returns an integer denoting the UnRAR.dll API version, which is also
defined in unrar.h as RAR_DLL_VERSION. The API version number is incremented
only when there are noticeable changes in the UnRAR.dll API. Do not confuse it
with the version of UnRAR.dll stored in the DLL resources, which is incremented
with every DLL rebuild.
If RARGetDllVersion() returns a value lower than the one your application
was designed for, it may indicate that the DLL is too old and will fail
to provide all the functions your application needs.
This function is absent in old versions of UnRAR.dll, so it is safer
to use LoadLibrary and GetProcAddress to access it.
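For illustration only (not part of the original documentation), a minimal
Python ctypes sketch of such a guarded version check; ctypes resolves exports
through GetProcAddress, so a missing RARGetDllVersion simply raises
AttributeError. The DLL path and required version are assumptions:

    import ctypes

    RAR_DLL_VERSION = 3  # the API version this hypothetical application expects

    unrar = ctypes.WinDLL("unrar.dll")
    try:
        get_version = unrar.RARGetDllVersion  # AttributeError if the export is absent
        get_version.restype = ctypes.c_int
        dll_api_version = get_version()
    except AttributeError:
        dll_api_version = 0  # very old unrar.dll without RARGetDllVersion

    if dll_api_version < RAR_DLL_VERSION:
        raise RuntimeError("unrar.dll is too old for this application")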


@ -1,80 +0,0 @@
List of unrar.dll API changes. Performance and reliability improvements are
not included in this list, but this library and the RAR/UnRAR tools share
the same source code, so the latest version of unrar.dll usually contains the
same decompression algorithm changes as the latest UnRAR version.
============================================================================
-- 18 January 2008
All LONG parameters of the CallbackProc function were changed
to the LPARAM type for 64-bit compatibility.
-- 12 December 2007
Added the new RAR_OM_LIST_INCSPLIT open mode for the RAROpenArchive function.
-- 14 August 2007
Added NoCrypt\unrar_nocrypt.dll, a build without decryption code for
applications where the presence of encryption or decryption code is not
allowed because of legal restrictions.
-- 14 December 2006
Added the ERAR_MISSING_PASSWORD error type. This error is returned
if an empty password is specified for an encrypted file.
-- 12 June 2003
Added the RARProcessFileW function, the Unicode version of RARProcessFile.
-- 9 August 2002
Added the RAROpenArchiveEx function, which allows specifying a Unicode
archive name and reading archive flags.
-- 24 January 2002
Added the RARReadHeaderEx function, which allows reading Unicode file names
and 64-bit file sizes.
-- 23 January 2002
Added the ERAR_UNKNOWN error type (used for all errors which
do not have a dedicated ERAR code yet) and the UCM_NEEDPASSWORD callback
message.
Unrar.dll now automatically opens all subsequent volumes not only when
extracting, but also in RAR_OM_LIST mode.
-- 27 November 2001
RARSetChangeVolProc and RARSetProcessDataProc were replaced by
a single callback function installed with RARSetCallback.
Unlike the old-style callbacks, the new function accepts a user-defined
parameter. Unrar.dll still supports RARSetChangeVolProc and
RARSetProcessDataProc for compatibility, but new applications
should use RARSetCallback.
File comment support is not implemented in the new DLL version yet,
so CmtState is always 0.
-- 13 August 2001
Added the RARGetDllVersion function, so you can distinguish the old unrar.dll,
which used C-style callback functions, from the new one with PASCAL callbacks.
-- 10 May 2001
Callback functions in RARSetChangeVolProc and RARSetProcessDataProc
now use the PASCAL calling convention.


@ -1 +0,0 @@
This is the x64 version of unrar.dll.

Binary file not shown.

Binary file not shown.


@ -1,177 +0,0 @@
# Copyright (c) 2003-2005 Jimmy Retzlaff, 2008 Konstantin Yegupov
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""
pyUnRAR2 is a ctypes based wrapper around the free UnRAR.dll.
It is a modified version of Jimmy Retzlaff's pyUnRAR - simpler,
more stable and foolproof.
Notice that it has an INCOMPATIBLE interface.
It enables reading and unpacking of archives created with the
RAR/WinRAR archivers. There is a low-level interface which is very
similar to the C interface provided by UnRAR. There is also a
higher level interface which makes some common operations easier.
"""
__version__ = '0.99.2'
try:
WindowsError
in_windows = True
except NameError:
in_windows = False
if in_windows:
from windows import RarFileImplementation
else:
from unix import RarFileImplementation
import fnmatch, time, weakref
class RarInfo(object):
"""Represents a file header in an archive. Don't instantiate directly.
Use only to obtain information about file.
YOU CANNOT EXTRACT FILE CONTENTS USING THIS OBJECT.
USE METHODS OF RarFile CLASS INSTEAD.
Properties:
index - index of file within the archive
filename - name of the file in the archive including path (if any)
datetime - file date/time as a struct_time suitable for time.strftime
isdir - True if the file is a directory
size - size in bytes of the uncompressed file
comment - comment associated with the file
Note - this is not currently intended to be a Python file-like object.
"""
def __init__(self, rarfile, data):
self.rarfile = weakref.proxy(rarfile)
self.index = data['index']
self.filename = data['filename']
self.isdir = data['isdir']
self.size = data['size']
self.datetime = data['datetime']
self.comment = data['comment']
def __str__(self):
try :
arcName = self.rarfile.archiveName
except ReferenceError:
arcName = "[ARCHIVE_NO_LONGER_LOADED]"
return '<RarInfo "%s" in "%s">' % (self.filename, arcName)
class RarFile(RarFileImplementation):
def __init__(self, archiveName, password=None):
"""Instantiate the archive.
archiveName is the name of the RAR file.
password is used to decrypt the files in the archive.
Properties:
comment - comment associated with the archive
>>> print RarFile('test.rar').comment
This is a test.
"""
self.archiveName = archiveName
RarFileImplementation.init(self, password)
def __del__(self):
self.destruct()
def infoiter(self):
"""Iterate over all the files in the archive, generating RarInfos.
>>> import os
>>> for fileInArchive in RarFile('test.rar').infoiter():
... print os.path.split(fileInArchive.filename)[-1],
... print fileInArchive.isdir,
... print fileInArchive.size,
... print fileInArchive.comment,
... print tuple(fileInArchive.datetime)[0:5],
... print time.strftime('%a, %d %b %Y %H:%M', fileInArchive.datetime)
test True 0 None (2003, 6, 30, 1, 59) Mon, 30 Jun 2003 01:59
test.txt False 20 None (2003, 6, 30, 2, 1) Mon, 30 Jun 2003 02:01
this.py False 1030 None (2002, 2, 8, 16, 47) Fri, 08 Feb 2002 16:47
"""
for params in RarFileImplementation.infoiter(self):
yield RarInfo(self, params)
def infolist(self):
"""Return a list of RarInfos, descripting the contents of the archive."""
return list(self.infoiter())
def read_files(self, condition='*'):
"""Read specific files from archive into memory.
If "condition" is a list of numbers, then return files which have those positions in infolist.
If "condition" is a string, then it is treated as a wildcard for names of files to extract.
If "condition" is a function, it is treated as a callback function, which accepts a RarInfo object
and returns boolean True (extract) or False (skip).
If "condition" is omitted, all files are returned.
Returns list of tuples (RarInfo info, str contents)
"""
checker = condition2checker(condition)
return RarFileImplementation.read_files(self, checker)
def extract(self, condition='*', path='.', withSubpath=True, overwrite=True):
"""Extract specific files from archive to disk.
If "condition" is a list of numbers, then extract files which have those positions in infolist.
If "condition" is a string, then it is treated as a wildcard for names of files to extract.
If "condition" is a function, it is treated as a callback function, which accepts a RarInfo object
and returns either boolean True (extract) or boolean False (skip).
DEPRECATED: If "condition" callback returns string (only supported for Windows) -
that string will be used as a new name to save the file under.
If "condition" is omitted, all files are extracted.
"path" is a directory to extract to
"withSubpath" flag denotes whether files are extracted with their full path in the archive.
"overwrite" flag denotes whether extracted files will overwrite old ones. Defaults to true.
Returns list of RarInfos for extracted files."""
checker = condition2checker(condition)
return RarFileImplementation.extract(self, checker, path, withSubpath, overwrite)
def condition2checker(condition):
"""Converts different condition types to callback"""
if type(condition) in [str, unicode]:
def smatcher(info):
return fnmatch.fnmatch(info.filename, condition)
return smatcher
elif type(condition) in [list, tuple] and type(condition[0]) in [int, long]:
def imatcher(info):
return info.index in condition
return imatcher
elif callable(condition):
return condition
else:
raise TypeError
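# Illustrative usage sketch (not part of the original module); the archive and
# file names here are hypothetical:
#
#     rf = RarFile('example.rar')
#     rf.extract('*.txt', path='out')            # wildcard on file names
#     rf.extract([0, 2])                         # positions within infolist()
#     rf.extract(lambda info: info.size < 4096)  # callback receiving a RarInfo
#
# Each form is passed through condition2checker(), which returns a callable
# taking a RarInfo and returning True (process the file) or False (skip it).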


@ -1,30 +0,0 @@
# Copyright (c) 2003-2005 Jimmy Retzlaff, 2008 Konstantin Yegupov
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Low level interface - see UnRARDLL\UNRARDLL.TXT
class ArchiveHeaderBroken(Exception): pass
class InvalidRARArchive(Exception): pass
class FileOpenError(Exception): pass
class IncorrectRARPassword(Exception): pass
class InvalidRARArchiveUsage(Exception): pass


@ -1,139 +0,0 @@
import os, sys
import UnRAR2
from UnRAR2.rar_exceptions import *
def cleanup(dir='test'):
for path, dirs, files in os.walk(dir):
for fn in files:
os.remove(os.path.join(path, fn))
for dir in dirs:
os.removedirs(os.path.join(path, dir))
# reuse RarArchive object, en
cleanup()
rarc = UnRAR2.RarFile('test.rar')
rarc.infolist()
for info in rarc.infoiter():
saveinfo = info
assert (str(info)=="""<RarInfo "test" in "test.rar">""")
break
rarc.extract()
assert os.path.exists('test'+os.sep+'test.txt')
assert os.path.exists('test'+os.sep+'this.py')
del rarc
assert (str(saveinfo)=="""<RarInfo "test" in "[ARCHIVE_NO_LONGER_LOADED]">""")
cleanup()
# extract all the files in test.rar
cleanup()
UnRAR2.RarFile('test.rar').extract()
assert os.path.exists('test'+os.sep+'test.txt')
assert os.path.exists('test'+os.sep+'this.py')
cleanup()
# extract all the files in test.rar matching the wildcard *.txt
cleanup()
UnRAR2.RarFile('test.rar').extract('*.txt')
assert os.path.exists('test'+os.sep+'test.txt')
assert not os.path.exists('test'+os.sep+'this.py')
cleanup()
# check the name and size of each file, extracting small ones
cleanup()
archive = UnRAR2.RarFile('test.rar')
assert archive.comment == 'This is a test.'
archive.extract(lambda rarinfo: rarinfo.size <= 1024)
for rarinfo in archive.infoiter():
if rarinfo.size <= 1024 and not rarinfo.isdir:
assert rarinfo.size == os.stat(rarinfo.filename).st_size
assert file('test'+os.sep+'test.txt', 'rt').read() == 'This is only a test.'
assert not os.path.exists('test'+os.sep+'this.py')
cleanup()
# extract this.py, overriding its destination
cleanup('test2')
archive = UnRAR2.RarFile('test.rar')
archive.extract('*.py', 'test2', False)
assert os.path.exists('test2'+os.sep+'this.py')
cleanup('test2')
# extract test.txt to memory
cleanup()
archive = UnRAR2.RarFile('test.rar')
entries = UnRAR2.RarFile('test.rar').read_files('*test.txt')
assert len(entries)==1
assert entries[0][0].filename.endswith('test.txt')
assert entries[0][1]=='This is only a test.'
# extract all the files in test.rar with overwriting
cleanup()
fo = open('test'+os.sep+'test.txt',"wt")
fo.write("blah")
fo.close()
UnRAR2.RarFile('test.rar').extract('*.txt')
assert open('test'+os.sep+'test.txt',"rt").read()!="blah"
cleanup()
# extract all the files in test.rar without overwriting
cleanup()
fo = open('test'+os.sep+'test.txt',"wt")
fo.write("blahblah")
fo.close()
UnRAR2.RarFile('test.rar').extract('*.txt', overwrite = False)
assert open('test'+os.sep+'test.txt',"rt").read()=="blahblah"
cleanup()
# list big file in an archive
list(UnRAR2.RarFile('test_nulls.rar').infoiter())
# extract files from an archive with protected files
cleanup()
UnRAR2.RarFile('test_protected_files.rar', password="protected").extract()
assert os.path.exists('test'+os.sep+'top_secret_xxx_file.txt')
cleanup()
errored = False
try:
UnRAR2.RarFile('test_protected_files.rar', password="proteqted").extract()
except IncorrectRARPassword:
errored = True
assert not os.path.exists('test'+os.sep+'top_secret_xxx_file.txt')
assert errored
cleanup()
# extract files from an archive with protected headers
cleanup()
UnRAR2.RarFile('test_protected_headers.rar', password="secret").extract()
assert os.path.exists('test'+os.sep+'top_secret_xxx_file.txt')
cleanup()
errored = False
try:
UnRAR2.RarFile('test_protected_headers.rar', password="seqret").extract()
except IncorrectRARPassword:
errored = True
assert not os.path.exists('test'+os.sep+'top_secret_xxx_file.txt')
assert errored
cleanup()
# make sure docstring examples are working
import doctest
doctest.testmod(UnRAR2)
# update documentation
import pydoc
pydoc.writedoc(UnRAR2)
# cleanup
try:
os.remove('__init__.pyc')
except:
pass


@ -1,175 +0,0 @@
# Copyright (c) 2003-2005 Jimmy Retzlaff, 2008 Konstantin Yegupov
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Unix version uses unrar command line executable
import subprocess
import gc
import os, os.path
import time, re
from rar_exceptions import *
class UnpackerNotInstalled(Exception): pass
rar_executable_cached = None
def call_unrar(params):
"Calls rar/unrar command line executable, returns stdout pipe"
global rar_executable_cached
if rar_executable_cached is None:
for command in ('unrar', 'rar'):
try:
subprocess.Popen([command], stdout=subprocess.PIPE)
rar_executable_cached = command
break
except OSError:
pass
if rar_executable_cached is None:
raise UnpackerNotInstalled("No suitable RAR unpacker installed")
assert type(params) == list, "params must be list"
args = [rar_executable_cached] + params
try:
gc.disable() # See http://bugs.python.org/issue1336
return subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
finally:
gc.enable()
class RarFileImplementation(object):
def init(self, password=None):
self.password = password
stdoutdata, stderrdata = self.call('v', []).communicate()
for line in stderrdata.splitlines():
if line.strip().startswith("Cannot open"):
raise FileOpenError
if line.find("CRC failed")>=0:
raise IncorrectRARPassword
accum = []
source = iter(stdoutdata.splitlines())
line = ''
while not (line.startswith('Comment:') or line.startswith('Pathname/Comment')):
if line.strip().endswith('is not RAR archive'):
raise InvalidRARArchive
line = source.next()
while not line.startswith('Pathname/Comment'):
accum.append(line.rstrip('\n'))
line = source.next()
if len(accum):
accum[0] = accum[0][9:]
self.comment = '\n'.join(accum[:-1])
else:
self.comment = None
def escaped_password(self):
return '-' if self.password == None else self.password
def call(self, cmd, options=[], files=[]):
options2 = options + ['p'+self.escaped_password()]
soptions = ['-'+x for x in options2]
return call_unrar([cmd]+soptions+['--',self.archiveName]+files)
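# Illustrative note (not part of the original module): for an archive named
# "example.rar" with no password, self.call('v', ['c-']) builds roughly
#   unrar v -c- -p- -- example.rar
# i.e. <command> <-options> -p<password or "-"> -- <archive> [files...]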
def infoiter(self):
stdoutdata, stderrdata = self.call('v', ['c-']).communicate()
for line in stderrdata.splitlines():
if line.strip().startswith("Cannot open"):
raise FileOpenError
accum = []
source = iter(stdoutdata.splitlines())
line = ''
while not line.startswith('--------------'):
if line.strip().endswith('is not RAR archive'):
raise InvalidRARArchive
if line.find("CRC failed")>=0:
raise IncorrectRARPassword
line = source.next()
line = source.next()
i = 0
re_spaces = re.compile(r"\s+")
while not line.startswith('--------------'):
accum.append(line)
if len(accum)==2:
data = {}
data['index'] = i
data['filename'] = accum[0].strip()
info = re_spaces.split(accum[1].strip())
data['size'] = int(info[0])
attr = info[5]
data['isdir'] = 'd' in attr.lower()
data['datetime'] = time.strptime(info[3]+" "+info[4], '%d-%m-%y %H:%M')
data['comment'] = None
yield data
accum = []
i += 1
line = source.next()
def read_files(self, checker):
res = []
for info in self.infoiter():
checkres = checker(info)
if checkres==True and not info.isdir:
pipe = self.call('p', ['inul'], [info.filename]).stdout
res.append((info, pipe.read()))
return res
def extract(self, checker, path, withSubpath, overwrite):
res = []
command = 'x'
if not withSubpath:
command = 'e'
options = []
if overwrite:
options.append('o+')
else:
options.append('o-')
if not path.endswith(os.sep):
path += os.sep
names = []
for info in self.infoiter():
checkres = checker(info)
if type(checkres) in [str, unicode]:
raise NotImplementedError("Condition callbacks returning strings are deprecated and only supported in Windows")
if checkres==True and not info.isdir:
names.append(info.filename)
res.append(info)
names.append(path)
proc = self.call(command, options, names)
stdoutdata, stderrdata = proc.communicate()
if stderrdata.find("CRC failed")>=0:
raise IncorrectRARPassword
return res
def destruct(self):
pass


@ -1,309 +0,0 @@
# Copyright (c) 2003-2005 Jimmy Retzlaff, 2008 Konstantin Yegupov
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Low level interface - see UnRARDLL\UNRARDLL.TXT
from __future__ import generators
import ctypes, ctypes.wintypes
import os, os.path, sys
import Queue
import time
from rar_exceptions import *
ERAR_END_ARCHIVE = 10
ERAR_NO_MEMORY = 11
ERAR_BAD_DATA = 12
ERAR_BAD_ARCHIVE = 13
ERAR_UNKNOWN_FORMAT = 14
ERAR_EOPEN = 15
ERAR_ECREATE = 16
ERAR_ECLOSE = 17
ERAR_EREAD = 18
ERAR_EWRITE = 19
ERAR_SMALL_BUF = 20
ERAR_UNKNOWN = 21
RAR_OM_LIST = 0
RAR_OM_EXTRACT = 1
RAR_SKIP = 0
RAR_TEST = 1
RAR_EXTRACT = 2
RAR_VOL_ASK = 0
RAR_VOL_NOTIFY = 1
RAR_DLL_VERSION = 3
# enum UNRARCALLBACK_MESSAGES
UCM_CHANGEVOLUME = 0
UCM_PROCESSDATA = 1
UCM_NEEDPASSWORD = 2
architecture_bits = ctypes.sizeof(ctypes.c_voidp)*8
dll_name = "unrar.dll"
if architecture_bits == 64:
dll_name = "x64\\unrar64.dll"
try:
unrar = ctypes.WinDLL(os.path.join(os.path.split(__file__)[0], 'UnRARDLL', dll_name))
except WindowsError:
unrar = ctypes.WinDLL(dll_name)
class RAROpenArchiveDataEx(ctypes.Structure):
def __init__(self, ArcName=None, ArcNameW=u'', OpenMode=RAR_OM_LIST):
self.CmtBuf = ctypes.c_buffer(64*1024)
ctypes.Structure.__init__(self, ArcName=ArcName, ArcNameW=ArcNameW, OpenMode=OpenMode, _CmtBuf=ctypes.addressof(self.CmtBuf), CmtBufSize=ctypes.sizeof(self.CmtBuf))
_fields_ = [
('ArcName', ctypes.c_char_p),
('ArcNameW', ctypes.c_wchar_p),
('OpenMode', ctypes.c_uint),
('OpenResult', ctypes.c_uint),
('_CmtBuf', ctypes.c_voidp),
('CmtBufSize', ctypes.c_uint),
('CmtSize', ctypes.c_uint),
('CmtState', ctypes.c_uint),
('Flags', ctypes.c_uint),
('Reserved', ctypes.c_uint*32),
]
class RARHeaderDataEx(ctypes.Structure):
def __init__(self):
self.CmtBuf = ctypes.c_buffer(64*1024)
ctypes.Structure.__init__(self, _CmtBuf=ctypes.addressof(self.CmtBuf), CmtBufSize=ctypes.sizeof(self.CmtBuf))
_fields_ = [
('ArcName', ctypes.c_char*1024),
('ArcNameW', ctypes.c_wchar*1024),
('FileName', ctypes.c_char*1024),
('FileNameW', ctypes.c_wchar*1024),
('Flags', ctypes.c_uint),
('PackSize', ctypes.c_uint),
('PackSizeHigh', ctypes.c_uint),
('UnpSize', ctypes.c_uint),
('UnpSizeHigh', ctypes.c_uint),
('HostOS', ctypes.c_uint),
('FileCRC', ctypes.c_uint),
('FileTime', ctypes.c_uint),
('UnpVer', ctypes.c_uint),
('Method', ctypes.c_uint),
('FileAttr', ctypes.c_uint),
('_CmtBuf', ctypes.c_voidp),
('CmtBufSize', ctypes.c_uint),
('CmtSize', ctypes.c_uint),
('CmtState', ctypes.c_uint),
('Reserved', ctypes.c_uint*1024),
]
def DosDateTimeToTimeTuple(dosDateTime):
"""Convert an MS-DOS format date time to a Python time tuple.
"""
dosDate = dosDateTime >> 16
dosTime = dosDateTime & 0xffff
day = dosDate & 0x1f
month = (dosDate >> 5) & 0xf
year = 1980 + (dosDate >> 9)
second = 2*(dosTime & 0x1f)
minute = (dosTime >> 5) & 0x3f
hour = dosTime >> 11
return time.localtime(time.mktime((year, month, day, hour, minute, second, 0, 1, -1)))
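# Worked example (illustrative, not part of the original module): for the DOS
# date/time value 0x32FE1077,
#   dosDate = 0x32FE -> day 30, month 7, year 1980 + 25 = 2005
#   dosTime = 0x1077 -> hour 2, minute 3, second 2 * 23 = 46
# so DosDateTimeToTimeTuple(0x32FE1077)[:6] == (2005, 7, 30, 2, 3, 46).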
def _wrap(restype, function, argtypes):
result = function
result.argtypes = argtypes
result.restype = restype
return result
RARGetDllVersion = _wrap(ctypes.c_int, unrar.RARGetDllVersion, [])
RAROpenArchiveEx = _wrap(ctypes.wintypes.HANDLE, unrar.RAROpenArchiveEx, [ctypes.POINTER(RAROpenArchiveDataEx)])
RARReadHeaderEx = _wrap(ctypes.c_int, unrar.RARReadHeaderEx, [ctypes.wintypes.HANDLE, ctypes.POINTER(RARHeaderDataEx)])
_RARSetPassword = _wrap(ctypes.c_int, unrar.RARSetPassword, [ctypes.wintypes.HANDLE, ctypes.c_char_p])
def RARSetPassword(*args, **kwargs):
_RARSetPassword(*args, **kwargs)
RARProcessFile = _wrap(ctypes.c_int, unrar.RARProcessFile, [ctypes.wintypes.HANDLE, ctypes.c_int, ctypes.c_char_p, ctypes.c_char_p])
RARCloseArchive = _wrap(ctypes.c_int, unrar.RARCloseArchive, [ctypes.wintypes.HANDLE])
UNRARCALLBACK = ctypes.WINFUNCTYPE(ctypes.c_int, ctypes.c_uint, ctypes.c_long, ctypes.c_long, ctypes.c_long)
RARSetCallback = _wrap(ctypes.c_int, unrar.RARSetCallback, [ctypes.wintypes.HANDLE, UNRARCALLBACK, ctypes.c_long])
RARExceptions = {
ERAR_NO_MEMORY : MemoryError,
ERAR_BAD_DATA : ArchiveHeaderBroken,
ERAR_BAD_ARCHIVE : InvalidRARArchive,
ERAR_EOPEN : FileOpenError,
}
class PassiveReader:
"""Used for reading files to memory"""
def __init__(self, usercallback = None):
self.buf = []
self.ucb = usercallback
def _callback(self, msg, UserData, P1, P2):
if msg == UCM_PROCESSDATA:
data = (ctypes.c_char*P2).from_address(P1).raw
if self.ucb!=None:
self.ucb(data)
else:
self.buf.append(data)
return 1
def get_result(self):
return ''.join(self.buf)
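# Illustrative sketch (not part of the original module) of how PassiveReader is
# wired up; read_files() below does the same. `handle` is a hypothetical
# archive handle returned by RAROpenArchiveEx:
#
#     reader = PassiveReader()
#     cb = UNRARCALLBACK(reader._callback)
#     RARSetCallback(handle, cb, 1)
#     RARProcessFile(handle, RAR_TEST, None, None)  # decodes without writing to disk
#     data = reader.get_result()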
class RarInfoIterator(object):
def __init__(self, arc):
self.arc = arc
self.index = 0
self.headerData = RARHeaderDataEx()
self.res = RARReadHeaderEx(self.arc._handle, ctypes.byref(self.headerData))
if self.res==ERAR_BAD_DATA:
raise IncorrectRARPassword
self.arc.lockStatus = "locked"
self.arc.needskip = False
def __iter__(self):
return self
def next(self):
if self.index>0:
if self.arc.needskip:
RARProcessFile(self.arc._handle, RAR_SKIP, None, None)
self.res = RARReadHeaderEx(self.arc._handle, ctypes.byref(self.headerData))
if self.res:
raise StopIteration
self.arc.needskip = True
data = {}
data['index'] = self.index
data['filename'] = self.headerData.FileName
data['datetime'] = DosDateTimeToTimeTuple(self.headerData.FileTime)
data['isdir'] = ((self.headerData.Flags & 0xE0) == 0xE0)
data['size'] = self.headerData.UnpSize + (self.headerData.UnpSizeHigh << 32)
if self.headerData.CmtState == 1:
data['comment'] = self.headerData.CmtBuf.value
else:
data['comment'] = None
self.index += 1
return data
def __del__(self):
self.arc.lockStatus = "finished"
def generate_password_provider(password):
def password_provider_callback(msg, UserData, P1, P2):
if msg == UCM_NEEDPASSWORD and password!=None:
(ctypes.c_char*P2).from_address(P1).value = password
return 1
return password_provider_callback
class RarFileImplementation(object):
def init(self, password=None):
self.password = password
archiveData = RAROpenArchiveDataEx(ArcNameW=self.archiveName, OpenMode=RAR_OM_EXTRACT)
self._handle = RAROpenArchiveEx(ctypes.byref(archiveData))
self.c_callback = UNRARCALLBACK(generate_password_provider(self.password))
RARSetCallback(self._handle, self.c_callback, 1)
if archiveData.OpenResult != 0:
raise RARExceptions[archiveData.OpenResult]
if archiveData.CmtState == 1:
self.comment = archiveData.CmtBuf.value
else:
self.comment = None
if password:
RARSetPassword(self._handle, password)
self.lockStatus = "ready"
def destruct(self):
if self._handle and RARCloseArchive:
RARCloseArchive(self._handle)
def make_sure_ready(self):
if self.lockStatus == "locked":
raise InvalidRARArchiveUsage("cannot execute infoiter() without finishing previous one")
if self.lockStatus == "finished":
self.destruct()
self.init(self.password)
def infoiter(self):
self.make_sure_ready()
return RarInfoIterator(self)
def read_files(self, checker):
res = []
for info in self.infoiter():
if checker(info) and not info.isdir:
reader = PassiveReader()
c_callback = UNRARCALLBACK(reader._callback)
RARSetCallback(self._handle, c_callback, 1)
tmpres = RARProcessFile(self._handle, RAR_TEST, None, None)
if tmpres==ERAR_BAD_DATA:
raise IncorrectRARPassword
self.needskip = False
res.append((info, reader.get_result()))
return res
def extract(self, checker, path, withSubpath, overwrite):
res = []
for info in self.infoiter():
checkres = checker(info)
if checkres!=False and not info.isdir:
if checkres==True:
fn = info.filename
if not withSubpath:
fn = os.path.split(fn)[-1]
target = os.path.join(path, fn)
else:
raise DeprecationWarning, "Condition callbacks returning strings are deprecated and only supported in Windows"
target = checkres
if overwrite or (not os.path.exists(target)):
tmpres = RARProcessFile(self._handle, RAR_EXTRACT, None, target)
if tmpres==ERAR_BAD_DATA:
raise IncorrectRARPassword
self.needskip = False
res.append(info)
return res


@ -1,208 +0,0 @@
"""
A PyQt4 dialog to select from automated issue matches
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import sys
import os
from PyQt4 import QtCore, QtGui, uic
from PyQt4.QtCore import QUrl, pyqtSignal, QByteArray
from imagefetcher import ImageFetcher
from settings import ComicTaggerSettings
from options import MetaDataStyle
class AutoTagMatchWindow(QtGui.QDialog):
volume_id = 0
def __init__(self, parent, match_set_list, style, fetch_func):
super(AutoTagMatchWindow, self).__init__(parent)
uic.loadUi(os.path.join(ComicTaggerSettings.baseDir(), 'autotagmatchwindow.ui' ), self)
self.skipButton = QtGui.QPushButton(self.tr("Skip"))
self.buttonBox.addButton(self.skipButton, QtGui.QDialogButtonBox.ActionRole)
self.buttonBox.button(QtGui.QDialogButtonBox.Ok).setText("Accept and Next")
self.match_set_list = match_set_list
self.style = style
self.fetch_func = fetch_func
self.current_match_set_idx = 0
self.twList.currentItemChanged.connect(self.currentItemChanged)
self.twList.cellDoubleClicked.connect(self.cellDoubleClicked)
self.skipButton.clicked.connect(self.skipToNext)
self.updateData()
def updateData( self):
self.current_match_set = self.match_set_list[ self.current_match_set_idx ]
if self.current_match_set_idx + 1 == len( self.match_set_list ):
self.skipButton.setDisabled(True)
self.setCoverImage()
self.populateTable()
self.twList.resizeColumnsToContents()
self.current_row = 0
self.twList.selectRow( 0 )
path = self.current_match_set.ca.path
self.setWindowTitle( u"Select correct match ({0} of {1}): {2}".format(
self.current_match_set_idx+1,
len( self.match_set_list ),
os.path.split(path)[1] ))
def populateTable( self ):
while self.twList.rowCount() > 0:
self.twList.removeRow(0)
self.twList.setSortingEnabled(False)
row = 0
for match in self.current_match_set.matches:
self.twList.insertRow(row)
item_text = match['series']
item = QtGui.QTableWidgetItem(item_text)
item.setData( QtCore.Qt.ToolTipRole, item_text )
item.setFlags(QtCore.Qt.ItemIsSelectable| QtCore.Qt.ItemIsEnabled)
self.twList.setItem(row, 0, item)
if match['publisher'] is not None:
item_text = u"{0}".format(match['publisher'])
else:
item_text = u"Unknown"
item = QtGui.QTableWidgetItem(item_text)
item.setData( QtCore.Qt.ToolTipRole, item_text )
item.setFlags(QtCore.Qt.ItemIsSelectable| QtCore.Qt.ItemIsEnabled)
self.twList.setItem(row, 1, item)
item_text = ""
if match['month'] is not None:
item_text = u"{0}/".format(match['month'])
if match['year'] is not None:
item_text += u"{0}".format(match['year'])
else:
item_text += u"????"
item = QtGui.QTableWidgetItem(item_text)
item.setData( QtCore.Qt.ToolTipRole, item_text )
item.setFlags(QtCore.Qt.ItemIsSelectable| QtCore.Qt.ItemIsEnabled)
self.twList.setItem(row, 2, item)
row += 1
def cellDoubleClicked( self, r, c ):
self.accept()
def currentItemChanged( self, curr, prev ):
if curr is None:
return
if prev is not None and prev.row() == curr.row():
return
self.current_row = curr.row()
# list selection was changed, update the issue cover
self.labelThumbnail.setPixmap(QtGui.QPixmap(os.path.join(ComicTaggerSettings.baseDir(), 'graphics/nocover.png' )))
self.cover_fetcher = ImageFetcher( )
self.cover_fetcher.fetchComplete.connect(self.coverFetchComplete)
self.cover_fetcher.fetch( self.current_match_set.matches[self.current_row]['img_url'] )
# called when the image is done loading
def coverFetchComplete( self, image_data, issue_id ):
img = QtGui.QImage()
img.loadFromData( image_data )
self.labelThumbnail.setPixmap(QtGui.QPixmap(img))
def setCoverImage( self ):
ca = self.current_match_set.ca
cover_idx = ca.readMetadata(self.style).getCoverPageIndexList()[0]
image_data = ca.getPage( cover_idx )
self.labelCover.setScaledContents(True)
if image_data is not None:
img = QtGui.QImage()
img.loadFromData( image_data )
self.labelCover.setPixmap(QtGui.QPixmap(img))
else:
self.labelCover.setPixmap(QtGui.QPixmap(os.path.join(ComicTaggerSettings.baseDir(), 'graphics/nocover.png' )))
def accept(self):
self.saveMatch()
self.current_match_set_idx += 1
if self.current_match_set_idx == len( self.match_set_list ):
# no more items
QtGui.QDialog.accept(self)
else:
self.updateData()
def skipToNext( self ):
self.current_match_set_idx += 1
if self.current_match_set_idx == len( self.match_set_list ):
# no more items
QtGui.QDialog.reject(self)
else:
self.updateData()
def reject(self):
reply = QtGui.QMessageBox.question(self,
self.tr("Cancel Matching"),
self.tr("Are you sure you wish to cancel the matching process?"),
QtGui.QMessageBox.Yes, QtGui.QMessageBox.No )
if reply == QtGui.QMessageBox.No:
return
QtGui.QDialog.reject(self)
def saveMatch( self ):
match = self.current_match_set.matches[self.current_row]
ca = self.current_match_set.ca
md = ca.readMetadata( self.style )
if md.isEmpty:
md = ca.metadataFromFilename()
# now get the particular issue data
cv_md = self.fetch_func( match )
if cv_md is None:
QtGui.QMessageBox.critical(self, self.tr("Network Issue"), self.tr("Could not connect to ComicVine to get issue details!"))
return
QtGui.QApplication.setOverrideCursor(QtGui.QCursor(QtCore.Qt.WaitCursor))
md.overlay( cv_md )
success = ca.writeMetadata( md, self.style )
ca.loadCache( [ MetaDataStyle.CBI, MetaDataStyle.CIX ] )
QtGui.QApplication.restoreOverrideCursor()
if not success:
QtGui.QMessageBox.warning(self, self.tr("Write Error"), self.tr("Saving the tags to the archive seemed to fail!"))


@ -1,161 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
<class>dialogMatchSelect</class>
<widget class="QDialog" name="dialogMatchSelect">
<property name="geometry">
<rect>
<x>0</x>
<y>0</y>
<width>831</width>
<height>506</height>
</rect>
</property>
<property name="windowTitle">
<string>Select Match</string>
</property>
<layout class="QGridLayout" name="gridLayout">
<item row="0" column="1">
<layout class="QVBoxLayout" name="verticalLayout">
<item>
<layout class="QHBoxLayout" name="horizontalLayout">
<item>
<widget class="QLabel" name="labelCover">
<property name="minimumSize">
<size>
<width>200</width>
<height>0</height>
</size>
</property>
<property name="maximumSize">
<size>
<width>200</width>
<height>300</height>
</size>
</property>
<property name="text">
<string>TextLabel</string>
</property>
</widget>
</item>
<item>
<widget class="QTableWidget" name="twList">
<property name="font">
<font>
<pointsize>9</pointsize>
</font>
</property>
<property name="selectionMode">
<enum>QAbstractItemView::SingleSelection</enum>
</property>
<property name="selectionBehavior">
<enum>QAbstractItemView::SelectRows</enum>
</property>
<property name="rowCount">
<number>0</number>
</property>
<property name="columnCount">
<number>3</number>
</property>
<attribute name="horizontalHeaderStretchLastSection">
<bool>true</bool>
</attribute>
<attribute name="verticalHeaderVisible">
<bool>false</bool>
</attribute>
<column>
<property name="text">
<string>Series</string>
</property>
</column>
<column>
<property name="text">
<string>Publisher</string>
</property>
</column>
<column>
<property name="text">
<string>Date</string>
</property>
</column>
</widget>
</item>
<item>
<widget class="QLabel" name="labelThumbnail">
<property name="minimumSize">
<size>
<width>200</width>
<height>0</height>
</size>
</property>
<property name="maximumSize">
<size>
<width>200</width>
<height>300</height>
</size>
</property>
<property name="frameShape">
<enum>QFrame::Panel</enum>
</property>
<property name="frameShadow">
<enum>QFrame::Sunken</enum>
</property>
<property name="text">
<string/>
</property>
<property name="scaledContents">
<bool>true</bool>
</property>
</widget>
</item>
</layout>
</item>
<item>
<widget class="QDialogButtonBox" name="buttonBox">
<property name="orientation">
<enum>Qt::Horizontal</enum>
</property>
<property name="standardButtons">
<set>QDialogButtonBox::Cancel|QDialogButtonBox::Ok</set>
</property>
</widget>
</item>
</layout>
</item>
</layout>
</widget>
<resources/>
<connections>
<connection>
<sender>buttonBox</sender>
<signal>accepted()</signal>
<receiver>dialogMatchSelect</receiver>
<slot>accept()</slot>
<hints>
<hint type="sourcelabel">
<x>248</x>
<y>254</y>
</hint>
<hint type="destinationlabel">
<x>157</x>
<y>274</y>
</hint>
</hints>
</connection>
<connection>
<sender>buttonBox</sender>
<signal>rejected()</signal>
<receiver>dialogMatchSelect</receiver>
<slot>reject()</slot>
<hints>
<hint type="sourcelabel">
<x>316</x>
<y>260</y>
</hint>
<hint type="destinationlabel">
<x>286</x>
<y>274</y>
</hint>
</hints>
</connection>
</connections>
</ui>


@ -1,67 +0,0 @@
"""
A PyQt4 dialog to show ID log and progress
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import sys
from PyQt4 import QtCore, QtGui, uic
import os
from settings import ComicTaggerSettings
class AutoTagProgressWindow(QtGui.QDialog):
def __init__(self, parent):
super(AutoTagProgressWindow, self).__init__(parent)
uic.loadUi(os.path.join(ComicTaggerSettings.baseDir(), 'autotagprogresswindow.ui' ), self)
self.lblTest.setPixmap(QtGui.QPixmap(os.path.join(ComicTaggerSettings.baseDir(), 'graphics/nocover.png' )))
self.lblArchive.setPixmap(QtGui.QPixmap(os.path.join(ComicTaggerSettings.baseDir(), 'graphics/nocover.png' )))
self.isdone = False
# we can't specify relative font sizes in the UI designer, so
# make font for scroll window a smidge smaller
f = self.textEdit.font()
if f.pointSize() > 10:
f.setPointSize( f.pointSize() - 2 )
self.textEdit.setFont( f )
def setArchiveImage( self, img_data):
self.setCoverImage( img_data, self.lblArchive )
def setTestImage( self, img_data):
self.setCoverImage( img_data, self.lblTest )
def setCoverImage( self, img_data , label):
if img_data is not None:
img = QtGui.QImage()
img.loadFromData( img_data )
label.setPixmap(QtGui.QPixmap(img))
label.setScaledContents(True)
else:
label.setPixmap(QtGui.QPixmap(os.path.join(ComicTaggerSettings.baseDir(), 'graphics/nocover.png' )))
label.setScaledContents(True)
QtCore.QCoreApplication.processEvents()
QtCore.QCoreApplication.processEvents()
def reject(self):
QtGui.QDialog.reject(self)
self.isdone = True


@ -1,101 +0,0 @@
"""
A PyQt4 dialog to confirm and set options for auto-tag
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from PyQt4 import QtCore, QtGui, uic
from settings import ComicTaggerSettings
from settingswindow import SettingsWindow
from filerenamer import FileRenamer
import os
import utils
class AutoTagStartWindow(QtGui.QDialog):
def __init__( self, parent, settings, msg ):
super(AutoTagStartWindow, self).__init__(parent)
uic.loadUi(os.path.join(ComicTaggerSettings.baseDir(), 'autotagstartwindow.ui' ), self)
self.label.setText( msg )
self.settings = settings
self.cbxSaveOnLowConfidence.setCheckState( QtCore.Qt.Unchecked )
self.cbxDontUseYear.setCheckState( QtCore.Qt.Unchecked )
self.cbxAssumeIssueOne.setCheckState( QtCore.Qt.Unchecked )
self.cbxIgnoreLeadingDigitsInFilename.setCheckState( QtCore.Qt.Unchecked )
self.cbxRemoveAfterSuccess.setCheckState( QtCore.Qt.Unchecked )
self.cbxSpecifySearchString.setCheckState( QtCore.Qt.Unchecked )
self.leNameLengthMatchTolerance.setText( str(self.settings.id_length_delta_thresh) )
self.leSearchString.setEnabled( False )
nlmtTip = (
""" <html>The <b>Name Length Match Tolerance</b> is for eliminating automatic
search matches that are too long compared to your series name search. The higher
it is, the more likely to have a good match, but each search will take longer and
use more bandwidth. Too low, and only the very closest lexical matches will be
explored.</html>""" )
self.leNameLengthMatchTolerance.setToolTip(nlmtTip)
ssTip = (
"""<html>
The <b>series search string</b> specifies the search string to be used for all selected archives.
Use this only when trying to match archives with hard-to-parse filenames. All archives selected
should be from the same series.
</html>"""
)
self.leSearchString.setToolTip(ssTip)
self.cbxSpecifySearchString.setToolTip(ssTip)
validator = QtGui.QIntValidator(0, 99, self)
self.leNameLengthMatchTolerance.setValidator(validator)
self.cbxSpecifySearchString.stateChanged.connect(self.searchStringToggle)
self.autoSaveOnLow = False
self.dontUseYear = False
self.assumeIssueOne = False
self.ignoreLeadingDigitsInFilename = False
self.removeAfterSuccess = False
self.searchString = None
self.nameLengthMatchTolerance = self.settings.id_length_delta_thresh
def searchStringToggle(self):
enable = self.cbxSpecifySearchString.isChecked()
self.leSearchString.setEnabled( enable )
def accept( self ):
QtGui.QDialog.accept(self)
self.autoSaveOnLow = self.cbxSaveOnLowConfidence.isChecked()
self.dontUseYear = self.cbxDontUseYear.isChecked()
self.assumeIssueOne = self.cbxAssumeIssueOne.isChecked()
self.ignoreLeadingDigitsInFilename = self.cbxIgnoreLeadingDigitsInFilename.isChecked()
self.removeAfterSuccess = self.cbxRemoveAfterSuccess.isChecked()
self.nameLengthMatchTolerance = int(self.leNameLengthMatchTolerance.text())
if self.cbxSpecifySearchString.isChecked():
self.searchString = unicode(self.leSearchString.text())
if len(self.searchString) == 0:
self.searchString = None


@ -1,242 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
<class>dialogExport</class>
<widget class="QDialog" name="dialogExport">
<property name="windowModality">
<enum>Qt::NonModal</enum>
</property>
<property name="geometry">
<rect>
<x>0</x>
<y>0</y>
<width>607</width>
<height>319</height>
</rect>
</property>
<property name="sizePolicy">
<sizepolicy hsizetype="Preferred" vsizetype="MinimumExpanding">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="windowTitle">
<string>Auto-Tag</string>
</property>
<property name="modal">
<bool>false</bool>
</property>
<layout class="QGridLayout" name="gridLayout_3">
<item row="0" column="0">
<layout class="QVBoxLayout" name="verticalLayout">
<item>
<widget class="QLabel" name="label">
<property name="sizePolicy">
<sizepolicy hsizetype="Preferred" vsizetype="Preferred">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="text">
<string/>
</property>
<property name="wordWrap">
<bool>true</bool>
</property>
</widget>
</item>
<item>
<layout class="QFormLayout" name="formLayout">
<property name="sizeConstraint">
<enum>QLayout::SetFixedSize</enum>
</property>
<property name="fieldGrowthPolicy">
<enum>QFormLayout::AllNonFixedFieldsGrow</enum>
</property>
<item row="0" column="0" colspan="2">
<widget class="QCheckBox" name="cbxSaveOnLowConfidence">
<property name="sizePolicy">
<sizepolicy hsizetype="Minimum" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="text">
<string>Save on low confidence match</string>
</property>
</widget>
</item>
<item row="1" column="0" colspan="2">
<widget class="QCheckBox" name="cbxDontUseYear">
<property name="sizePolicy">
<sizepolicy hsizetype="Minimum" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="text">
<string>Don't use publication year in identification process</string>
</property>
</widget>
</item>
<item row="2" column="0" colspan="2">
<widget class="QCheckBox" name="cbxAssumeIssueOne">
<property name="sizePolicy">
<sizepolicy hsizetype="Minimum" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="text">
<string>If no issue number, assume &quot;1&quot;</string>
</property>
</widget>
</item>
<item row="3" column="0" colspan="2">
<widget class="QCheckBox" name="cbxIgnoreLeadingDigitsInFilename">
<property name="sizePolicy">
<sizepolicy hsizetype="Minimum" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="text">
<string>Ignore leading (sequence) numbers in filename</string>
</property>
</widget>
</item>
<item row="4" column="0" colspan="2">
<widget class="QCheckBox" name="cbxRemoveAfterSuccess">
<property name="sizePolicy">
<sizepolicy hsizetype="Minimum" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="text">
<string>Remove archives from list after successful tagging</string>
</property>
</widget>
</item>
<item row="5" column="0" colspan="2">
<widget class="QCheckBox" name="cbxSpecifySearchString">
<property name="sizePolicy">
<sizepolicy hsizetype="Minimum" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="text">
<string>Specify series search string for all selected archives</string>
</property>
</widget>
</item>
<item row="6" column="0">
<widget class="QLabel" name="label_2">
<property name="sizePolicy">
<sizepolicy hsizetype="Preferred" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="minimumSize">
<size>
<width>40</width>
<height>0</height>
</size>
</property>
<property name="text">
<string/>
</property>
</widget>
</item>
<item row="6" column="1">
<widget class="QLineEdit" name="leSearchString">
<property name="sizePolicy">
<sizepolicy hsizetype="Expanding" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
</widget>
</item>
<item row="8" column="1">
<widget class="QLineEdit" name="leNameLengthMatchTolerance">
<property name="sizePolicy">
<sizepolicy hsizetype="Expanding" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="maximumSize">
<size>
<width>50</width>
<height>16777215</height>
</size>
</property>
</widget>
</item>
<item row="7" column="0" colspan="2">
<widget class="QLabel" name="label_3">
<property name="sizePolicy">
<sizepolicy hsizetype="Preferred" vsizetype="Fixed">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<property name="text">
<string>Adjust Name Length Match Tolerance:</string>
</property>
</widget>
</item>
</layout>
</item>
<item>
<widget class="QDialogButtonBox" name="buttonBox">
<property name="orientation">
<enum>Qt::Horizontal</enum>
</property>
<property name="standardButtons">
<set>QDialogButtonBox::Cancel|QDialogButtonBox::Ok</set>
</property>
</widget>
</item>
</layout>
</item>
</layout>
</widget>
<resources/>
<connections>
<connection>
<sender>buttonBox</sender>
<signal>accepted()</signal>
<receiver>dialogExport</receiver>
<slot>accept()</slot>
<hints>
<hint type="sourcelabel">
<x>346</x>
<y>187</y>
</hint>
<hint type="destinationlabel">
<x>277</x>
<y>104</y>
</hint>
</hints>
</connection>
<connection>
<sender>buttonBox</sender>
<signal>rejected()</signal>
<receiver>dialogExport</receiver>
<slot>reject()</slot>
<hints>
<hint type="sourcelabel">
<x>346</x>
<y>187</y>
</hint>
<hint type="destinationlabel">
<x>277</x>
<y>104</y>
</hint>
</hints>
</connection>
</connections>
</ui>


@ -0,0 +1,11 @@
[Desktop Entry]
Encoding=UTF-8
Name=ComicTagger
GenericName=Comic Metadata Editor
Comment=A cross-platform GUI/CLI app for writing metadata to comic archives
Exec=comictagger %F
Icon=/usr/local/share/comictagger/app.png
Terminal=false
Type=Application
MimeType=text/plain;
Categories=Application;


@ -0,0 +1,241 @@
# -*- mode: python ; coding: utf-8 -*-
import platform
from comictaggerlib import ctversion
enable_console = False
block_cipher = None
a = Analysis(
["../comictaggerlib/__main__.py"],
pathex=[],
binaries=[],
datas=[],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
exe_binaries = []
exe_zipfiles = []
exe_datas = []
exe_exclude_binaries = True
coll_binaries = a.binaries
coll_zipfiles = a.zipfiles
coll_datas = a.datas
if platform.system() in ["Windows"]:
enable_console = True
exe_binaries = a.binaries
exe_zipfiles = a.zipfiles
exe_datas = a.datas
exe_exclude_binaries = False
coll_binaries = []
coll_zipfiles = []
coll_datas = []
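# Illustrative note (not in the original spec): on Windows the binaries, zipfiles
# and data are folded into the single EXE below (one-file build); on other
# platforms the EXE holds only the scripts and everything else is gathered by
# COLLECT into a directory (which BUNDLE then wraps into ComicTagger.app on macOS).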
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
exe_binaries,
exe_zipfiles,
exe_datas,
[],
exclude_binaries=exe_exclude_binaries,
name="comictagger",
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=enable_console,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
icon="windows/app.ico",
)
if platform.system() not in ["Windows"]:
coll = COLLECT(
exe,
coll_binaries,
coll_zipfiles,
coll_datas,
strip=False,
upx=True,
upx_exclude=[],
name="comictagger",
)
app = BUNDLE(
coll,
name="ComicTagger.app",
icon="mac/app.icns",
info_plist={
"NSHighResolutionCapable": "True",
"NSPrincipalClass": "NSApplication",
"NSRequiresAquaSystemAppearance": "False",
"CFBundleDisplayName": "ComicTagger",
"CFBundleShortVersionString": ctversion.version,
"CFBundleVersion": ctversion.version,
"CFBundleDocumentTypes": [
{
"CFBundleTypeRole": "Editor",
"LSHandlerRank": "Default",
"LSItemContentTypes": [
"public.folder",
],
"CFBundleTypeName": "Folder",
},
{
"CFBundleTypeExtensions": [
"cbz",
],
"LSTypeIsPackage": False,
"NSPersistentStoreTypeKey": "Binary",
"CFBundleTypeIconSystemGenerated": True,
"CFBundleTypeName": "ZIP Comic Archive",
"LSItemContentTypes": [
"public.zip-comic-archive",
"com.simplecomic.cbz-archive",
"com.macitbetter.cbz-archive",
"public.cbz-archive",
"cx.c3.cbz-archive",
"com.yacreader.yacreader.cbz",
"com.milke.cbz-archive",
"com.bitcartel.comicbooklover.cbz",
"public.archive.cbz",
"public.zip-archive",
],
"CFBundleTypeRole": "Editor",
"LSHandlerRank": "Default",
},
{
"CFBundleTypeExtensions": [
"cb7",
],
"LSTypeIsPackage": False,
"NSPersistentStoreTypeKey": "Binary",
"CFBundleTypeIconSystemGenerated": True,
"CFBundleTypeName": "7-Zip Comic Archive",
"LSItemContentTypes": [
"org.7-zip.7-zip-archive",
"com.simplecomic.cb7-archive",
"public.cb7-archive",
"com.macitbetter.cb7-archive",
"cx.c3.cb7-archive",
"org.7-zip.7-zip-comic-archive",
],
"CFBundleTypeRole": "Editor",
"LSHandlerRank": "Default",
},
{
"CFBundleTypeExtensions": [
"cbr",
],
"LSTypeIsPackage": False,
"NSPersistentStoreTypeKey": "Binary",
"CFBundleTypeIconSystemGenerated": True,
"CFBundleTypeName": "RAR Comic Archive",
"LSItemContentTypes": [
"com.rarlab.rar-archive",
"com.rarlab.rar-comic-archive",
"com.simplecomic.cbr-archive",
"com.macitbetter.cbr-archive",
"public.cbr-archive",
"cx.c3.cbr-archive",
"com.bitcartel.comicbooklover.cbr",
"com.milke.cbr-archive",
"public.archive.cbr",
"com.yacreader.yacreader.cbr",
],
"CFBundleTypeRole": "Editor",
"LSHandlerRank": "Default",
},
],
"UTImportedTypeDeclarations": [
{
"UTTypeIdentifier": "com.rarlab.rar-archive",
"UTTypeDescription": "RAR Archive",
"UTTypeConformsTo": [
"public.data",
"public.archive",
],
"UTTypeTagSpecification": {
"public.mime-type": [
"application/x-rar",
"application/x-rar-compressed",
],
"public.filename-extension": [
"rar",
],
},
},
{
"UTTypeConformsTo": [
"public.data",
"public.archive",
"com.rarlab.rar-archive",
],
"UTTypeIdentifier": "com.rarlab.rar-comic-archive",
"UTTypeDescription": "RAR Comic Archive",
"UTTypeTagSpecification": {
"public.mime-type": [
"application/vnd.comicbook-rar",
"application/x-cbr",
],
"public.filename-extension": [
"cbr",
],
},
},
{
"UTTypeConformsTo": [
"public.data",
"public.archive",
"public.zip-archive",
],
"UTTypeIdentifier": "public.zip-comic-archive",
"UTTypeDescription": "ZIP Comic Archive",
"UTTypeTagSpecification": {
"public.filename-extension": [
"cbz",
],
},
},
{
"UTTypeConformsTo": [
"public.data",
"public.archive",
"org.7-zip.7-zip-archive",
],
"UTTypeIdentifier": "org.7-zip.7-zip-comic-archive",
"UTTypeDescription": "7-Zip Comic Archive",
"UTTypeTagSpecification": {
"public.mime-type": [
"application/vnd.comicbook+7-zip",
"application/x-cb7-compressed",
],
"public.filename-extension": [
"cb7",
],
},
},
],
},
bundle_identifier="com.comictagger",
)

build-tools/dmgbuild.conf Normal file

@ -0,0 +1,19 @@
import pathlib
app = "ComicTagger"
app_name = f"{app}.app"
path = f"dist/{app_name}"
# dmgbuild settings
format = 'ULMO'
files = (str(path),)
symlinks = {'Applications': '/Applications'}
icon = pathlib.Path().cwd() / 'build-tools' / 'mac' / 'volume.icns'
icon_locations = {
app_name: (100, 100),
'Applications': (300, 100)
}


@ -0,0 +1,26 @@
from __future__ import annotations
import os
import pathlib
import settngs
import comictaggerlib.main
def generate() -> str:
app = comictaggerlib.main.App()
app.load_plugins(app.initial_arg_parser.parse_known_args()[0])
app.register_settings(True)
imports, types = settngs.generate_dict(app.manager.definitions)
imports2, types2 = settngs.generate_ns(app.manager.definitions)
i = imports.splitlines()
i.extend(set(imports2.splitlines()) - set(i))
os.linesep
return (os.linesep * 2).join((os.linesep.join(i), types2, types))
if __name__ == "__main__":
src = generate()
pathlib.Path("./comictaggerlib/ctsettings/settngs_namespace.py").write_text(src)
print(src, end="")


@ -0,0 +1,38 @@
from __future__ import annotations
import argparse
import os
import pathlib
import platform
try:
import niquests as requests
except ImportError:
import requests
arch = platform.machine()
parser = argparse.ArgumentParser()
parser.add_argument("APPIMAGETOOL", default=f"build/appimagetool-{arch}.AppImage", type=pathlib.Path, nargs="?")
opts = parser.parse_args()
opts.APPIMAGETOOL = opts.APPIMAGETOOL.absolute()
def urlretrieve(url: str, dest: pathlib.Path) -> None:
resp = requests.get(url)
if resp.status_code == 200:
dest.parent.mkdir(parents=True, exist_ok=True)
dest.write_bytes(resp.content)
if opts.APPIMAGETOOL.exists():
raise SystemExit(0)
urlretrieve(
f"https://github.com/AppImage/appimagetool/releases/latest/download/appimagetool-{arch}.AppImage",
opts.APPIMAGETOOL,
)
os.chmod(opts.APPIMAGETOOL, 0o0700)
if not opts.APPIMAGETOOL.exists():
raise SystemExit(1)


@ -1,26 +1,27 @@
PYINSTALLER_CMD := python $(HOME)/pyinstaller-2.0/pyinstaller.py
TAGGER_BASE := $(HOME)/Dropbox/tagger/comictagger
PYINSTALLER_CMD := pyinstaller
TAGGER_BASE ?= ../
TAGGER_SRC := $(TAGGER_BASE)/comictaggerlib
APP_NAME := ComicTagger
VERSION_STR := $(shell grep version $(TAGGER_BASE)/ctversion.py| cut -d= -f2 | sed 's/\"//g')
VERSION_STR := $(shell cd .. && python setup.py --version)
MAC_BASE := $(TAGGER_BASE)/mac
DIST_DIR := $(MAC_BASE)/dist
STAGING := $(MAC_BASE)/$(APP_NAME)
APP_BUNDLE := $(DIST_DIR)/$(APP_NAME).app
VOLUME_NAME := $(APP_NAME)-$(VERSION_STR)
VOLUME_NAME := "$(APP_NAME)-$(VERSION_STR)"
DMG_FILE := $(VOLUME_NAME).dmg
all: clean dist diskimage
dist:
$(PYINSTALLER_CMD) $(TAGGER_BASE)/comictagger.py -o $(MAC_BASE) -w -n $(APP_NAME) -s
cp $(TAGGER_BASE)/*.ui $(APP_BUNDLE)/Contents/MacOS
cp -a $(TAGGER_BASE)/graphics $(APP_BUNDLE)/Contents/MacOS
$(PYINSTALLER_CMD) $(TAGGER_BASE)/comictagger.py -w -n $(APP_NAME) -s
cp -a $(TAGGER_SRC)/ui $(APP_BUNDLE)/Contents/MacOS
cp -a $(TAGGER_SRC)/graphics $(APP_BUNDLE)/Contents/MacOS
cp $(MAC_BASE)/app.icns $(APP_BUNDLE)/Contents/Resources/icon-windowed.icns
# fix the version string in the Info.plist
sed -i -e 's/0\.0\.0/$(VERSION_STR)/' $(MAC_BASE)/dist/ComicTagger.app/Contents/Info.plist
clean:
rm -rf $(DIST_DIR) $(MAC_BASE)/build
rm -f $(MAC_BASE)/*.spec
@ -29,7 +30,7 @@ clean:
rm -f raw*.dmg
echo $(VERSION_STR)
diskimage:
#Set up disk image staging folder
# Set up disk image staging folder
rm -rf $(STAGING)
mkdir $(STAGING)
cp $(TAGGER_BASE)/release_notes.txt $(STAGING)
@ -38,27 +39,27 @@ diskimage:
cp $(MAC_BASE)/volume.icns $(STAGING)/.VolumeIcon.icns
SetFile -c icnC $(STAGING)/.VolumeIcon.icns
##generate raw disk image
# generate raw disk image
rm -f $(DMG_FILE)
hdiutil create -srcfolder $(STAGING) -volname $(VOLUME_NAME) -format UDRW -ov raw-$(DMG_FILE)
hdiutil create -srcfolder $(STAGING) -volname $(VOLUME_NAME) -format UDRW -ov raw-$(DMG_FILE)
#remove working files and folders
# remove working files and folders
rm -rf $(STAGING)
# we now have a raw DMG file.
# remount it so we can set the volume icon properly
mkdir -p $(STAGING)
hdiutil attach raw-$(DMG_FILE) -mountpoint $(STAGING)
SetFile -a C $(STAGING)
hdiutil detach $(STAGING)
rm -rf $(STAGING)
# convert the raw image
rm -f $(DMG_FILE)
hdiutil convert raw-$(DMG_FILE) -format UDZO -o $(DMG_FILE)
rm -f raw-$(DMG_FILE)
#move finished product to release folder
# move finished product to release folder
mkdir -p $(TAGGER_BASE)/release
mv $(DMG_FILE) $(TAGGER_BASE)/release

21
build-tools/mac/make_thin.sh Executable file
View File

@ -0,0 +1,21 @@
rm -rf thin
BINFOLDER=$1
LIST=`cd $BINFOLDER; ls Qt* *.so *.dylib Python 2>/dev/null`
for FILE in $LIST
do
ISFAT=`lipo -info $BINFOLDER/$FILE|grep -v Non-fat`
if [ "$ISFAT" != "" ]
then
echo "Fat Binary: $FILE"
mkdir -p thin
lipo -thin i386 -output thin/$FILE $BINFOLDER/$FILE
fi
done
if [ -d thin ]
then
mv thin/* $BINFOLDER
else
echo No files to lipo
fi
rm -rf thin

View File

@ -0,0 +1,267 @@
from __future__ import annotations
import argparse
import base64
import json
import os
import sys
from http import HTTPStatus
from pathlib import Path
from typing import NoReturn
from urllib.parse import urlparse
import keyring
import requests
from id import IdentityError, detect_credential
_GITHUB_STEP_SUMMARY = Path(os.getenv("GITHUB_STEP_SUMMARY", "fail.txt"))
# The top-level error message that gets rendered.
# This message wraps one of the other templates/messages defined below.
_ERROR_SUMMARY_MESSAGE = """
Trusted publishing exchange failure:
{message}
You're seeing this because the action wasn't given the inputs needed to
perform password-based or token-based authentication. If you intended to
perform one of those authentication methods instead of trusted
publishing, then you should double-check your secret configuration and variable
names.
Read more about trusted publishers at https://docs.pypi.org/trusted-publishers/
Read more about how this action uses trusted publishers at
https://github.com/marketplace/actions/pypi-publish#trusted-publishing
"""
# Rendered if OIDC identity token retrieval fails for any reason.
_TOKEN_RETRIEVAL_FAILED_MESSAGE = """
OpenID Connect token retrieval failed: {identity_error}
This generally indicates a workflow configuration error, such as insufficient
permissions. Make sure that your workflow has `id-token: write` configured
at the job level, e.g.:
```yaml
permissions:
id-token: write
```
Learn more at https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#adding-permissions-settings.
""" # noqa: S105; not a password
# Specialization of the token retrieval failure case, when we know that
# the failure cause is use within a third-party PR.
_TOKEN_RETRIEVAL_FAILED_FORK_PR_MESSAGE = """
OpenID Connect token retrieval failed: {identity_error}
The workflow context indicates that this action was called from a
pull request on a fork. GitHub doesn't give these workflows OIDC permissions,
even if `id-token: write` is explicitly configured.
To fix this, change your publishing workflow to use an event that
forks of your repository cannot trigger (such as tag or release
creation, or a manually triggered workflow dispatch).
""" # noqa: S105; not a password
# Rendered if the package index refuses the given OIDC token.
_SERVER_REFUSED_TOKEN_EXCHANGE_MESSAGE = """
Token request failed: the server refused the request for the following reasons:
{reasons}
This generally indicates a trusted publisher configuration error, but could
also indicate an internal error on GitHub or PyPI's part.
{rendered_claims}
""" # noqa: S105; not a password
_RENDERED_CLAIMS = """
The claims rendered below are **for debugging purposes only**. You should **not**
use them to configure a trusted publisher unless they already match your expectations.
If a claim is not present in the claim set, then it is rendered as `MISSING`.
* `sub`: `{sub}`
* `repository`: `{repository}`
* `repository_owner`: `{repository_owner}`
* `repository_owner_id`: `{repository_owner_id}`
* `job_workflow_ref`: `{job_workflow_ref}`
* `ref`: `{ref}`
See https://docs.pypi.org/trusted-publishers/troubleshooting/ for more help.
"""
# Rendered if the package index's token response isn't valid JSON.
_SERVER_TOKEN_RESPONSE_MALFORMED_JSON = """
Token request failed: the index produced an unexpected
{status_code} response.
This strongly suggests a server configuration or downtime issue; wait
a few minutes and try again.
You can monitor PyPI's status here: https://status.python.org/
""" # noqa: S105; not a password
# Rendered if the package index's token response isn't a valid API token payload.
_SERVER_TOKEN_RESPONSE_MALFORMED_MESSAGE = """
Token response error: the index gave us an invalid response.
This strongly suggests a server configuration or downtime issue; wait
a few minutes and try again.
""" # noqa: S105; not a password
def die(msg: str) -> NoReturn:
with _GITHUB_STEP_SUMMARY.open("a", encoding="utf-8") as io:
print(_ERROR_SUMMARY_MESSAGE.format(message=msg), file=io)
# HACK: GitHub Actions' annotations don't work across multiple lines naively;
# translating `\n` into `%0A` (i.e., HTML percent-encoding) is known to work.
# See: https://github.com/actions/toolkit/issues/193
msg = msg.replace("\n", "%0A")
print(f"::error::Trusted publishing exchange failure: {msg}", file=sys.stderr)
sys.exit(1)
def debug(msg: str) -> None:
print(f"::debug::{msg.title()}", file=sys.stderr)
def assert_successful_audience_call(resp: requests.Response, domain: str) -> None:
if resp.ok:
return
if resp.status_code == HTTPStatus.FORBIDDEN:
# This index supports OIDC, but forbids the client from using
# it (either because it's disabled, ratelimited, etc.)
die(
f"audience retrieval failed: repository at {domain} has trusted publishing disabled",
)
elif resp.status_code == HTTPStatus.NOT_FOUND:
# This index does not support OIDC.
die(
f"audience retrieval failed: repository at {domain} does not indicate trusted publishing support",
)
else:
status = HTTPStatus(resp.status_code)
# Unknown: the index may or may not support OIDC, but didn't respond with
# something we expect. This can happen if the index is broken, in maintenance mode,
# misconfigured, etc.
die(
f"audience retrieval failed: repository at {domain} responded with unexpected {resp.status_code}: {status.phrase}",
)
def render_claims(token: str) -> str:
_, payload, _ = token.split(".", 2)
# urlsafe_b64decode needs padding; JWT payloads don't contain any.
payload += "=" * (4 - (len(payload) % 4))
claims = json.loads(base64.urlsafe_b64decode(payload))
def _get(name: str) -> str:
return claims.get(name, "MISSING")
return _RENDERED_CLAIMS.format(
sub=_get("sub"),
repository=_get("repository"),
repository_owner=_get("repository_owner"),
repository_owner_id=_get("repository_owner_id"),
job_workflow_ref=_get("job_workflow_ref"),
ref=_get("ref"),
)
def event_is_third_party_pr() -> bool:
# Non-`pull_request` events cannot be from third-party PRs.
if os.getenv("GITHUB_EVENT_NAME") != "pull_request":
return False
event_path = os.getenv("GITHUB_EVENT_PATH")
if not event_path:
# No GITHUB_EVENT_PATH indicates a weird GitHub or runner bug.
debug("unexpected: no GITHUB_EVENT_PATH to check")
return False
try:
event = json.loads(Path(event_path).read_bytes())
except json.JSONDecodeError:
debug("unexpected: GITHUB_EVENT_PATH does not contain valid JSON")
return False
try:
return event["pull_request"]["head"]["repo"]["fork"]
except KeyError:
return False
parser = argparse.ArgumentParser()
parser.add_argument("repository_url", default="https://upload.pypi.org/legacy/", type=urlparse, nargs="?")
opts = parser.parse_args()
repository_domain = opts.repository_url.netloc
token_exchange_url = f"https://{repository_domain}/_/oidc/mint-token"
# Indices are expected to support `https://{domain}/_/oidc/audience`,
# which tells OIDC exchange clients which audience to use.
audience_url = f"https://{repository_domain}/_/oidc/audience"
audience_resp = requests.get(audience_url, timeout=5)
assert_successful_audience_call(audience_resp, repository_domain)
oidc_audience = audience_resp.json()["audience"]
debug(f"selected trusted publishing exchange endpoint: {token_exchange_url}")
try:
oidc_token = detect_credential(audience=oidc_audience)
except IdentityError as identity_error:
cause_msg_tmpl = (
_TOKEN_RETRIEVAL_FAILED_FORK_PR_MESSAGE if event_is_third_party_pr() else _TOKEN_RETRIEVAL_FAILED_MESSAGE
)
for_cause_msg = cause_msg_tmpl.format(identity_error=identity_error)
die(for_cause_msg)
if not oidc_token:
die("Unabled to detect credentials. Is this runnnig in CI?")
# Now we can do the actual token exchange.
mint_token_resp = requests.post(
token_exchange_url,
json={"token": oidc_token},
timeout=5,
)
try:
mint_token_payload = mint_token_resp.json()
except requests.JSONDecodeError:
# Token exchange failure normally produces a JSON error response, but
# we might have hit a server error instead.
die(
_SERVER_TOKEN_RESPONSE_MALFORMED_JSON.format(
status_code=mint_token_resp.status_code,
),
)
# On failure, the JSON response includes the list of errors that
# occurred during minting.
if not mint_token_resp.ok:
reasons = "\n".join(f'* `{error["code"]}`: {error["description"]}' for error in mint_token_payload["errors"])
rendered_claims = render_claims(oidc_token)
die(
_SERVER_REFUSED_TOKEN_EXCHANGE_MESSAGE.format(
reasons=reasons,
rendered_claims=rendered_claims,
),
)
pypi_token = mint_token_payload.get("token")
if pypi_token is None:
die(_SERVER_TOKEN_RESPONSE_MALFORMED_MESSAGE)
# Mask the newly minted PyPI token, so that we don't accidentally leak it in logs.
print(f"::add-mask::{pypi_token}", file=sys.stderr)
keyring.set_password(opts.repository_url.geturl(), "__token__", pypi_token)
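`render_claims()` above exploits the fact that a JWT is three base64url segments joined by dots and that the payload segment arrives without the `=` padding `base64.urlsafe_b64decode` expects. A self-contained illustration of that padding-and-decode step, using a fabricated payload rather than a real OIDC token:

```python
# Sketch of the JWT payload decoding used by render_claims (fabricated example payload).
import base64
import json

claims = {"sub": "repo:example/example:ref:refs/tags/v1.0", "repository": "example/example"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"header.{payload}.signature"  # structure only; not a signed token

_, payload_part, _ = token.split(".", 2)
payload_part += "=" * (-len(payload_part) % 4)  # restore the stripped padding
decoded = json.loads(base64.urlsafe_b64decode(payload_part))
print(decoded["sub"])  # repo:example/example:ref:refs/tags/v1.0
```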

View File

(binary image file changed; 62 KiB before and after)

View File

@ -0,0 +1,85 @@
from __future__ import annotations
import os
import pathlib
import platform
import sys
import tarfile
import zipfile
from comictaggerlib.ctversion import __version__
def addToZip(zf: zipfile.ZipFile, path: str, zippath: str) -> None:
if os.path.isfile(path):
zf.write(path, zippath)
elif os.path.isdir(path):
if zippath:
zf.write(path, zippath)
for nm in sorted(os.listdir(path)):
addToZip(zf, os.path.join(path, nm), os.path.join(zippath, nm))
def Zip(zip_file: pathlib.Path, path: pathlib.Path) -> None:
zip_file.unlink(missing_ok=True)
with zipfile.ZipFile(f"{zip_file}.zip", "w", compression=zipfile.ZIP_DEFLATED, compresslevel=8) as zf:
zippath = os.path.basename(path)
if not zippath:
zippath = os.path.basename(os.path.dirname(path))
if zippath in ("", os.curdir, os.pardir):
zippath = ""
addToZip(zf, str(path), zippath)
def addToTar(tf: tarfile.TarFile, path: str, zippath: str) -> None:
if os.path.isfile(path):
tf.add(path, zippath)
elif os.path.isdir(path):
if zippath:
tf.add(path, zippath, recursive=False)
for nm in sorted(os.listdir(path)):
addToTar(tf, os.path.join(path, nm), os.path.join(zippath, nm))
def Tar(tar_file: pathlib.Path, path: pathlib.Path) -> None:
tar_file.unlink(missing_ok=True)
with tarfile.open(f"{tar_file}.tar.gz", "w:gz") as tf:
zippath = os.path.basename(path)
if not zippath:
zippath = os.path.basename(os.path.dirname(path))
if zippath in ("", os.curdir, os.pardir):
zippath = ""
addToTar(tf, str(path), zippath)
if __name__ == "__main__":
app = "ComicTagger"
exe = app.casefold()
final_name = f"{app}-{__version__}-{platform.system()}-{platform.machine()}"
if platform.system() == "Windows":
exe = f"{exe}.exe"
elif platform.system() == "Darwin":
exe = f"{app}.app"
ver = platform.mac_ver()
final_name = f"{app}-{__version__}-macOS-{ver[0]}-{ver[2]}"
path = pathlib.Path(f"dist/{exe}")
binary_path = pathlib.Path("dist/binary")
binary_path.mkdir(parents=True, exist_ok=True)
archive_destination = binary_path / final_name
if platform.system() == "Darwin":
from dmgbuild.__main__ import main as dmg_main
sys.argv = [
"zip_artifacts",
"-s",
str(pathlib.Path(__file__).parent / "dmgbuild.conf"),
f"{app} {__version__}",
f"{archive_destination}.dmg",
]
dmg_main()
elif platform.system() == "Windows":
Zip(archive_destination, path)
else:
Tar(archive_destination, path)
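The helpers above make the basename of the source directory the root folder inside the archive. A standalone sketch of the same layout using only the standard library (paths are placeholders, not the real build output):

```python
# Standalone sketch of the "basename becomes the archive root" layout used above.
import os
import pathlib
import zipfile

src = pathlib.Path("dist/comictagger")            # e.g. folder produced by PyInstaller
dest = pathlib.Path("dist/binary/ComicTagger-example")  # destination, extension added below

with zipfile.ZipFile(f"{dest}.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    root = src.name  # entries end up under "comictagger/"
    for dirpath, _dirs, files in os.walk(src):
        for name in sorted(files):
            full = pathlib.Path(dirpath) / name
            zf.write(full, os.path.join(root, os.path.relpath(full, src)))
```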

View File

@ -1,99 +0,0 @@
"""
Class to manage modifying metadata specifically for CBL/CBI
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import utils
class CBLTransformer:
def __init__( self, metadata, settings ):
self.metadata = metadata
self.settings = settings
def apply( self ):
# helper funcs
def append_to_tags_if_unique( item ):
if item.lower() not in (tag.lower() for tag in self.metadata.tags):
self.metadata.tags.append( item )
def add_string_list_to_tags( str_list ):
if str_list is not None and str_list != "":
items = [ s.strip() for s in str_list.split(',') ]
for item in items:
append_to_tags_if_unique( item )
if self.settings.assume_lone_credit_is_primary:
# helper
def setLonePrimary( role_list ):
lone_credit = None
count = 0
for c in self.metadata.credits:
if c['role'].lower() in role_list:
count += 1
lone_credit = c
if count > 1:
lone_credit = None
break
if lone_credit is not None:
lone_credit['primary'] = True
return lone_credit, count
#need to loop three times, once for 'writer', 'artist', and then 'penciler' if no artist
setLonePrimary( ['writer'] )
c, count = setLonePrimary( ['artist'] )
if c is None and count == 0:
c, count = setLonePrimary( ['penciler', 'penciller'] )
if c is not None:
c['primary'] = False
self.metadata.addCredit( c['person'], 'Artist', True )
if self.settings.copy_characters_to_tags:
add_string_list_to_tags( self.metadata.characters )
if self.settings.copy_teams_to_tags:
add_string_list_to_tags( self.metadata.teams )
if self.settings.copy_locations_to_tags:
add_string_list_to_tags( self.metadata.locations )
if self.settings.copy_notes_to_comments:
if self.metadata.notes is not None:
if self.metadata.comments is None:
self.metadata.comments = ""
else:
self.metadata.comments += "\n\n"
if self.metadata.notes not in self.metadata.comments:
self.metadata.comments += self.metadata.notes
if self.settings.copy_weblink_to_comments:
if self.metadata.webLink is not None:
if self.metadata.comments is None:
self.metadata.comments = ""
else:
self.metadata.comments += "\n\n"
if self.metadata.webLink not in self.metadata.comments:
self.metadata.comments += self.metadata.webLink
return self.metadata

260
comet.py
View File

@ -1,260 +0,0 @@
"""
A python class to encapsulate CoMet data
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from datetime import datetime
import zipfile
from pprint import pprint
import xml.etree.ElementTree as ET
from genericmetadata import GenericMetadata
import utils
class CoMet:
writer_synonyms = ['writer', 'plotter', 'scripter']
penciller_synonyms = [ 'artist', 'penciller', 'penciler', 'breakdowns' ]
inker_synonyms = [ 'inker', 'artist', 'finishes' ]
colorist_synonyms = [ 'colorist', 'colourist', 'colorer', 'colourer' ]
letterer_synonyms = [ 'letterer']
cover_synonyms = [ 'cover', 'covers', 'coverartist', 'cover artist' ]
editor_synonyms = [ 'editor']
def metadataFromString( self, string ):
tree = ET.ElementTree(ET.fromstring( string ))
return self.convertXMLToMetadata( tree )
def stringFromMetadata( self, metadata ):
header = '<?xml version="1.0" encoding="UTF-8"?>\n'
tree = self.convertMetadataToXML( self, metadata )
return header + ET.tostring(tree.getroot())
def indent( self, elem, level=0 ):
# for making the XML output readable
i = "\n" + level*" "
if len(elem):
if not elem.text or not elem.text.strip():
elem.text = i + " "
if not elem.tail or not elem.tail.strip():
elem.tail = i
for elem in elem:
self.indent( elem, level+1 )
if not elem.tail or not elem.tail.strip():
elem.tail = i
else:
if level and (not elem.tail or not elem.tail.strip()):
elem.tail = i
def convertMetadataToXML( self, filename, metadata ):
#shorthand for the metadata
md = metadata
# build a tree structure
root = ET.Element("comet")
root.attrib['xmlns:comet'] = "http://www.denvog.com/comet/"
root.attrib['xmlns:xsi'] = "http://www.w3.org/2001/XMLSchema-instance"
root.attrib['xsi:schemaLocation'] = "http://www.denvog.com http://www.denvog.com/comet/comet.xsd"
#helper func
def assign( comet_entry, md_entry):
if md_entry is not None:
ET.SubElement(root, comet_entry).text = u"{0}".format(md_entry)
# title is manditory
if md.title is None:
md.title = ""
assign( 'title', md.title )
assign( 'series', md.series )
assign( 'issue', md.issue ) #must be int??
assign( 'volume', md.volume )
assign( 'description', md.comments )
assign( 'publisher', md.publisher )
assign( 'pages', md.pageCount )
assign( 'format', md.format )
assign( 'language', md.language )
assign( 'rating', md.maturityRating )
assign( 'price', md.price )
assign( 'isVersionOf', md.isVersionOf )
assign( 'rights', md.rights )
assign( 'identifier', md.identifier )
assign( 'lastMark', md.lastMark )
assign( 'genre', md.genre ) # TODO repeatable
if md.characters is not None:
char_list = [ c.strip() for c in md.characters.split(',') ]
for c in char_list:
assign( 'character', c )
if md.manga is not None and md.manga == "YesAndRightToLeft":
assign( 'readingDirection', "rtl")
date_str = ""
if md.year is not None:
date_str = str(md.year).zfill(4)
if md.month is not None:
date_str += "-" + str(md.month).zfill(2)
assign( 'date', date_str )
assign( 'coverImage', md.coverImage )
# need to specially process the credits, since they are structured differently than CIX
credit_writer_list = list()
credit_penciller_list = list()
credit_inker_list = list()
credit_colorist_list = list()
credit_letterer_list = list()
credit_cover_list = list()
credit_editor_list = list()
# loop thru credits, and build a list for each role that CoMet supports
for credit in metadata.credits:
if credit['role'].lower() in set( self.writer_synonyms ):
ET.SubElement(root, 'writer').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.penciller_synonyms ):
ET.SubElement(root, 'penciller').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.inker_synonyms ):
ET.SubElement(root, 'inker').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.colorist_synonyms ):
ET.SubElement(root, 'colorist').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.letterer_synonyms ):
ET.SubElement(root, 'letterer').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.cover_synonyms ):
ET.SubElement(root, 'coverDesigner').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.editor_synonyms ):
ET.SubElement(root, 'editor').text = u"{0}".format(credit['person'])
# self pretty-print
self.indent(root)
# wrap it in an ElementTree instance, and save as XML
tree = ET.ElementTree(root)
return tree
def convertXMLToMetadata( self, tree ):
root = tree.getroot()
if root.tag != 'comet':
raise 1
return None
metadata = GenericMetadata()
md = metadata
# Helper function
def xlate( tag ):
node = root.find( tag )
if node is not None:
return node.text
else:
return None
md.series = xlate( 'series' )
md.title = xlate( 'title' )
md.issue = xlate( 'issue' )
md.volume = xlate( 'volume' )
md.comments = xlate( 'description' )
md.publisher = xlate( 'publisher' )
md.language = xlate( 'language' )
md.format = xlate( 'format' )
md.pageCount = xlate( 'pages' )
md.maturityRating = xlate( 'rating' )
md.price = xlate( 'price' )
md.isVersionOf = xlate( 'isVersionOf' )
md.rights = xlate( 'rights' )
md.identifier = xlate( 'identifier' )
md.lastMark = xlate( 'lastMark' )
md.genre = xlate( 'genre' ) # TODO - repeatable field
date = xlate( 'date' )
if date is not None:
parts = date.split('-')
if len( parts) > 0:
md.year = parts[0]
if len( parts) > 1:
md.month = parts[1]
md.coverImage = xlate( 'coverImage' )
readingDirection = xlate( 'readingDirection' )
if readingDirection is not None and readingDirection == "rtl":
md.manga = "YesAndRightToLeft"
# loop for character tags
char_list = []
for n in root:
if n.tag == 'character':
char_list.append(n.text.strip())
md.characters = utils.listToString( char_list )
# Now extract the credit info
for n in root:
if ( n.tag == 'writer' or
n.tag == 'penciller' or
n.tag == 'inker' or
n.tag == 'colorist' or
n.tag == 'letterer' or
n.tag == 'editor'
):
metadata.addCredit( n.text.strip(), n.tag.title() )
if n.tag == 'coverDesigner':
metadata.addCredit( n.text.strip(), "Cover" )
metadata.isEmpty = False
return metadata
#verify that the string actually contains CoMet data in XML format
def validateString( self, string ):
try:
tree = ET.ElementTree(ET.fromstring( string ))
root = tree.getroot()
if root.tag != 'comet':
raise Exception
except:
return False
return True
def writeToExternalFile( self, filename, metadata ):
tree = self.convertMetadataToXML( self, metadata )
#ET.dump(tree)
tree.write(filename, encoding='utf-8')
def readFromExternalFile( self, filename ):
tree = ET.parse( filename )
return self.convertXMLToMetadata( tree )

3
comicapi/__init__.py Normal file
View File

@ -0,0 +1,3 @@
from __future__ import annotations
__author__ = "dromanin"

View File

@ -0,0 +1,7 @@
from __future__ import annotations
import os
def get_hook_dirs() -> list[str]:
return [os.path.dirname(__file__)]

View File

@ -0,0 +1,10 @@
from __future__ import annotations
from PyInstaller.utils.hooks import collect_data_files, collect_entry_point
datas, hiddenimports = collect_entry_point("comicapi.archiver")
mdatas, mhiddenimports = collect_entry_point("comicapi.tags")
hiddenimports += mhiddenimports
datas += mdatas
datas += collect_data_files("comicapi.data")
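These PyInstaller hooks bundle whatever is registered under the `comicapi.archiver` and `comicapi.tags` entry-point groups, so frozen builds see the same plugins as a regular install. A small sketch of listing those groups at runtime (requires Python 3.10+ for the selectable entry-points API):

```python
# Sketch: list the plugin entry points the hook above collects (groups taken from the hook).
from importlib.metadata import entry_points  # selectable API: Python 3.10+

for group in ("comicapi.archiver", "comicapi.tags"):
    for ep in entry_points(group=group):
        print(group, ep.name, "->", ep.value)
```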

468
comicapi/_url.py Normal file
View File

@ -0,0 +1,468 @@
# mypy: disable-error-code="no-redef"
from __future__ import annotations
try:
from urllib3.exceptions import HTTPError, LocationParseError, LocationValueError
from urllib3.util import Url, parse_url
except ImportError:
import re
import typing
class HTTPError(Exception):
"""Base exception used by this module."""
class LocationValueError(ValueError, HTTPError):
"""Raised when there is something wrong with a given URL input."""
class LocationParseError(LocationValueError):
"""Raised when get_host or similar fails to parse the URL input."""
def __init__(self, location: str) -> None:
message = f"Failed to parse: {location}"
super().__init__(message)
self.location = location
def to_str(x: str | bytes, encoding: str | None = None, errors: str | None = None) -> str:
if isinstance(x, str):
return x
elif not isinstance(x, bytes):
raise TypeError(f"not expecting type {type(x).__name__}")
if encoding or errors:
return x.decode(encoding or "utf-8", errors=errors or "strict")
return x.decode()
# We only want to normalize urls with an HTTP(S) scheme.
# urllib3 infers URLs without a scheme (None) to be http.
_NORMALIZABLE_SCHEMES = ("http", "https", None)
# Almost all of these patterns were derived from the
# 'rfc3986' module: https://github.com/python-hyper/rfc3986
_PERCENT_RE = re.compile(r"%[a-fA-F0-9]{2}")
_SCHEME_RE = re.compile(r"^(?:[a-zA-Z][a-zA-Z0-9+-]*:|/)")
_URI_RE = re.compile(
r"^(?:([a-zA-Z][a-zA-Z0-9+.-]*):)?" r"(?://([^\\/?#]*))?" r"([^?#]*)" r"(?:\?([^#]*))?" r"(?:#(.*))?$",
re.UNICODE | re.DOTALL,
)
_IPV4_PAT = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}"
_HEX_PAT = "[0-9A-Fa-f]{1,4}"
_LS32_PAT = "(?:{hex}:{hex}|{ipv4})".format(hex=_HEX_PAT, ipv4=_IPV4_PAT)
_subs = {"hex": _HEX_PAT, "ls32": _LS32_PAT}
_variations = [
# 6( h16 ":" ) ls32
"(?:%(hex)s:){6}%(ls32)s",
# "::" 5( h16 ":" ) ls32
"::(?:%(hex)s:){5}%(ls32)s",
# [ h16 ] "::" 4( h16 ":" ) ls32
"(?:%(hex)s)?::(?:%(hex)s:){4}%(ls32)s",
# [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32
"(?:(?:%(hex)s:)?%(hex)s)?::(?:%(hex)s:){3}%(ls32)s",
# [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32
"(?:(?:%(hex)s:){0,2}%(hex)s)?::(?:%(hex)s:){2}%(ls32)s",
# [ *3( h16 ":" ) h16 ] "::" h16 ":" ls32
"(?:(?:%(hex)s:){0,3}%(hex)s)?::%(hex)s:%(ls32)s",
# [ *4( h16 ":" ) h16 ] "::" ls32
"(?:(?:%(hex)s:){0,4}%(hex)s)?::%(ls32)s",
# [ *5( h16 ":" ) h16 ] "::" h16
"(?:(?:%(hex)s:){0,5}%(hex)s)?::%(hex)s",
# [ *6( h16 ":" ) h16 ] "::"
"(?:(?:%(hex)s:){0,6}%(hex)s)?::",
]
_UNRESERVED_PAT = r"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._\-~"
_IPV6_PAT = "(?:" + "|".join([x % _subs for x in _variations]) + ")"
_ZONE_ID_PAT = "(?:%25|%)(?:[" + _UNRESERVED_PAT + "]|%[a-fA-F0-9]{2})+"
_IPV6_ADDRZ_PAT = r"\[" + _IPV6_PAT + r"(?:" + _ZONE_ID_PAT + r")?\]"
_REG_NAME_PAT = r"(?:[^\[\]%:/?#]|%[a-fA-F0-9]{2})*"
_TARGET_RE = re.compile(r"^(/[^?#]*)(?:\?([^#]*))?(?:#.*)?$")
_IPV4_RE = re.compile("^" + _IPV4_PAT + "$")
_IPV6_RE = re.compile("^" + _IPV6_PAT + "$")
_IPV6_ADDRZ_RE = re.compile("^" + _IPV6_ADDRZ_PAT + "$")
_BRACELESS_IPV6_ADDRZ_RE = re.compile("^" + _IPV6_ADDRZ_PAT[2:-2] + "$")
_ZONE_ID_RE = re.compile("(" + _ZONE_ID_PAT + r")\]$")
_HOST_PORT_PAT = ("^(%s|%s|%s)(?::0*?(|0|[1-9][0-9]{0,4}))?$") % (
_REG_NAME_PAT,
_IPV4_PAT,
_IPV6_ADDRZ_PAT,
)
_HOST_PORT_RE = re.compile(_HOST_PORT_PAT, re.UNICODE | re.DOTALL)
_UNRESERVED_CHARS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._-~")
_SUB_DELIM_CHARS = set("!$&'()*+,;=")
_USERINFO_CHARS = _UNRESERVED_CHARS | _SUB_DELIM_CHARS | {":"}
_PATH_CHARS = _USERINFO_CHARS | {"@", "/"}
_QUERY_CHARS = _FRAGMENT_CHARS = _PATH_CHARS | {"?"}
class Url(
typing.NamedTuple(
"Url",
[
("scheme", typing.Optional[str]),
("auth", typing.Optional[str]),
("host", typing.Optional[str]),
("port", typing.Optional[int]),
("path", typing.Optional[str]),
("query", typing.Optional[str]),
("fragment", typing.Optional[str]),
],
)
):
"""
Data structure for representing an HTTP URL. Used as a return value for
:func:`parse_url`. Both the scheme and host are normalized as they are
both case-insensitive according to RFC 3986.
"""
def __new__( # type: ignore[no-untyped-def]
cls,
scheme: str | None = None,
auth: str | None = None,
host: str | None = None,
port: int | None = None,
path: str | None = None,
query: str | None = None,
fragment: str | None = None,
):
if path and not path.startswith("/"):
path = "/" + path
if scheme is not None:
scheme = scheme.lower()
return super().__new__(cls, scheme, auth, host, port, path, query, fragment)
@property
def hostname(self) -> str | None:
"""For backwards-compatibility with urlparse. We're nice like that."""
return self.host
@property
def request_uri(self) -> str:
"""Absolute path including the query string."""
uri = self.path or "/"
if self.query is not None:
uri += "?" + self.query
return uri
@property
def authority(self) -> str | None:
"""
Authority component as defined in RFC 3986 3.2.
This includes userinfo (auth), host and port.
i.e.
userinfo@host:port
"""
userinfo = self.auth
netloc = self.netloc
if netloc is None or userinfo is None:
return netloc
else:
return f"{userinfo}@{netloc}"
@property
def netloc(self) -> str | None:
"""
Network location including host and port.
If you need the equivalent of urllib.parse's ``netloc``,
use the ``authority`` property instead.
"""
if self.host is None:
return None
if self.port:
return f"{self.host}:{self.port}"
return self.host
@property
def url(self) -> str:
"""
Convert self into a url
This function should more or less round-trip with :func:`.parse_url`. The
returned url may not be exactly the same as the url inputted to
:func:`.parse_url`, but it should be equivalent by the RFC (e.g., urls
with a blank port will have : removed).
Example:
.. code-block:: python
import urllib3
U = urllib3.util.parse_url("https://google.com/mail/")
print(U.url)
# "https://google.com/mail/"
print( urllib3.util.Url("https", "username:password",
"host.com", 80, "/path", "query", "fragment"
).url
)
# "https://username:password@host.com:80/path?query#fragment"
"""
scheme, auth, host, port, path, query, fragment = self
url = ""
# We use "is not None" we want things to happen with empty strings (or 0 port)
if scheme is not None:
url += scheme + "://"
if auth is not None:
url += auth + "@"
if host is not None:
url += host
if port is not None:
url += ":" + str(port)
if path is not None:
url += path
if query is not None:
url += "?" + query
if fragment is not None:
url += "#" + fragment
return url
def __str__(self) -> str:
return self.url
@typing.overload
def _encode_invalid_chars(component: str, allowed_chars: typing.Container[str]) -> str: # Abstract
...
@typing.overload
def _encode_invalid_chars(component: None, allowed_chars: typing.Container[str]) -> None: # Abstract
...
def _encode_invalid_chars(component: str | None, allowed_chars: typing.Container[str]) -> str | None:
"""Percent-encodes a URI component without reapplying
onto an already percent-encoded component.
"""
if component is None:
return component
component = to_str(component)
# Normalize existing percent-encoded bytes.
# Try to see if the component we're encoding is already percent-encoded
# so we can skip all '%' characters but still encode all others.
component, percent_encodings = _PERCENT_RE.subn(lambda match: match.group(0).upper(), component)
uri_bytes = component.encode("utf-8", "surrogatepass")
is_percent_encoded = percent_encodings == uri_bytes.count(b"%")
encoded_component = bytearray()
for i in range(0, len(uri_bytes)):
# Will return a single character bytestring
byte = uri_bytes[i : i + 1]
byte_ord = ord(byte)
if (is_percent_encoded and byte == b"%") or (byte_ord < 128 and byte.decode() in allowed_chars):
encoded_component += byte
continue
encoded_component.extend(b"%" + (hex(byte_ord)[2:].encode().zfill(2).upper()))
return encoded_component.decode()
def _remove_path_dot_segments(path: str) -> str:
# See http://tools.ietf.org/html/rfc3986#section-5.2.4 for pseudo-code
segments = path.split("/") # Turn the path into a list of segments
output = [] # Initialize the variable to use to store output
for segment in segments:
# '.' is the current directory, so ignore it, it is superfluous
if segment == ".":
continue
# Anything other than '..', should be appended to the output
if segment != "..":
output.append(segment)
# In this case segment == '..', if we can, we should pop the last
# element
elif output:
output.pop()
# If the path starts with '/' and the output is empty or the first string
# is non-empty
if path.startswith("/") and (not output or output[0]):
output.insert(0, "")
# If the path starts with '/.' or '/..' ensure we add one more empty
# string to add a trailing '/'
if path.endswith(("/.", "/..")):
output.append("")
return "/".join(output)
@typing.overload
def _normalize_host(host: None, scheme: str | None) -> None: ...
@typing.overload
def _normalize_host(host: str, scheme: str | None) -> str: ...
def _normalize_host(host: str | None, scheme: str | None) -> str | None:
if host:
if scheme in _NORMALIZABLE_SCHEMES:
is_ipv6 = _IPV6_ADDRZ_RE.match(host)
if is_ipv6:
# IPv6 hosts of the form 'a::b%zone' are encoded in a URL as
# such per RFC 6874: 'a::b%25zone'. Unquote the ZoneID
# separator as necessary to return a valid RFC 4007 scoped IP.
match = _ZONE_ID_RE.search(host)
if match:
start, end = match.span(1)
zone_id = host[start:end]
if zone_id.startswith("%25") and zone_id != "%25":
zone_id = zone_id[3:]
else:
zone_id = zone_id[1:]
zone_id = _encode_invalid_chars(zone_id, _UNRESERVED_CHARS)
return f"{host[:start].lower()}%{zone_id}{host[end:]}"
else:
return host.lower()
elif not _IPV4_RE.match(host):
return to_str(
b".".join([_idna_encode(label) for label in host.split(".")]),
"ascii",
)
return host
def _idna_encode(name: str) -> bytes:
if not name.isascii():
try:
import idna
except ImportError:
raise LocationParseError("Unable to parse URL without the 'idna' module") from None
try:
return idna.encode(name.lower(), strict=True, std3_rules=True)
except idna.IDNAError:
raise LocationParseError(f"Name '{name}' is not a valid IDNA label") from None
return name.lower().encode("ascii")
def _encode_target(target: str) -> str:
"""Percent-encodes a request target so that there are no invalid characters
Pre-condition for this function is that 'target' must start with '/'.
If that is the case then _TARGET_RE will always produce a match.
"""
match = _TARGET_RE.match(target)
if not match: # Defensive:
raise LocationParseError(f"{target!r} is not a valid request URI")
path, query = match.groups()
encoded_target = _encode_invalid_chars(path, _PATH_CHARS)
if query is not None:
query = _encode_invalid_chars(query, _QUERY_CHARS)
encoded_target += "?" + query
return encoded_target
def parse_url(url: str) -> Url:
"""
Given a url, return a parsed :class:`.Url` namedtuple. Best-effort is
performed to parse incomplete urls. Fields not provided will be None.
This parser is RFC 3986 and RFC 6874 compliant.
The parser logic and helper functions are based heavily on
work done in the ``rfc3986`` module.
:param str url: URL to parse into a :class:`.Url` namedtuple.
Partly backwards-compatible with :mod:`urllib.parse`.
Example:
.. code-block:: python
import urllib3
print( urllib3.util.parse_url('http://google.com/mail/'))
# Url(scheme='http', host='google.com', port=None, path='/mail/', ...)
print( urllib3.util.parse_url('google.com:80'))
# Url(scheme=None, host='google.com', port=80, path=None, ...)
print( urllib3.util.parse_url('/foo?bar'))
# Url(scheme=None, host=None, port=None, path='/foo', query='bar', ...)
"""
if not url:
# Empty
return Url()
source_url = url
if not _SCHEME_RE.search(url):
url = "//" + url
scheme: str | None
authority: str | None
auth: str | None
host: str | None
port: str | None
port_int: int | None
path: str | None
query: str | None
fragment: str | None
try:
scheme, authority, path, query, fragment = _URI_RE.match(url).groups() # type: ignore[union-attr]
normalize_uri = scheme is None or scheme.lower() in _NORMALIZABLE_SCHEMES
if scheme:
scheme = scheme.lower()
if authority:
auth, _, host_port = authority.rpartition("@")
auth = auth or None
host, port = _HOST_PORT_RE.match(host_port).groups() # type: ignore[union-attr]
if auth and normalize_uri:
auth = _encode_invalid_chars(auth, _USERINFO_CHARS)
if port == "":
port = None
else:
auth, host, port = None, None, None
if port is not None:
port_int = int(port)
if not (0 <= port_int <= 65535):
raise LocationParseError(url)
else:
port_int = None
host = _normalize_host(host, scheme)
if normalize_uri and path:
path = _remove_path_dot_segments(path)
path = _encode_invalid_chars(path, _PATH_CHARS)
if normalize_uri and query:
query = _encode_invalid_chars(query, _QUERY_CHARS)
if normalize_uri and fragment:
fragment = _encode_invalid_chars(fragment, _FRAGMENT_CHARS)
except (ValueError, AttributeError) as e:
raise LocationParseError(source_url) from e
# For the sake of backwards compatibility we put empty
# string values for path if there are any defined values
# beyond the path in the URL.
# TODO: Remove this when we break backwards compatibility.
if not path:
if query is not None or fragment is not None:
path = ""
else:
path = None
return Url(
scheme=scheme,
auth=auth,
host=host,
port=port_int,
path=path,
query=query,
fragment=fragment,
)
__all__ = ("Url", "parse_url", "HTTPError", "LocationParseError", "LocationValueError")

View File

@ -0,0 +1,13 @@
from __future__ import annotations
from comicapi.archivers.archiver import Archiver
from comicapi.archivers.folder import FolderArchiver
from comicapi.archivers.zip import ZipArchiver
class UnknownArchiver(Archiver):
def name(self) -> str:
return "Unknown"
__all__ = ["Archiver", "UnknownArchiver", "FolderArchiver", "ZipArchiver"]

View File

@ -0,0 +1,146 @@
from __future__ import annotations
import pathlib
from collections.abc import Collection
from typing import Protocol, runtime_checkable
@runtime_checkable
class Archiver(Protocol):
"""Archiver Protocol"""
"""The path to the archive"""
path: pathlib.Path
"""
The name of the executable used for this archiver. This should be the base name of the executable.
For example if 'rar.exe' is needed this should be "rar".
If an executable is not used this should be the empty string.
"""
exe: str = ""
"""
Whether or not this archiver is enabled.
If external imports are required and are not available this should be false. See rar.py and sevenzip.py.
"""
enabled: bool = True
"""
If self.path is a single file that can be hashed.
For example directories cannot be hashed.
"""
hashable: bool = True
supported_extensions: Collection[str] = set()
def __init__(self) -> None:
self.path = pathlib.Path()
def get_comment(self) -> str:
"""
Returns the comment from the current archive as a string.
Should always return a string. If comments are not supported in the archive the empty string should be returned.
"""
return ""
def set_comment(self, comment: str) -> bool:
"""
Returns True if the comment was successfully set on the current archive.
Should always return a boolean. If comments are not supported in the archive False should be returned.
"""
return False
def supports_comment(self) -> bool:
"""
Returns True if the current archive supports comments.
Should always return a boolean. If comments are not supported in the archive False should be returned.
"""
return False
def read_file(self, archive_file: str) -> bytes:
"""
Reads the named file from the current archive.
archive_file should always come from the output of get_filename_list.
Should always return a bytes object. Exceptions should be of the type OSError.
"""
raise NotImplementedError
def remove_file(self, archive_file: str) -> bool:
"""
Removes the named file from the current archive.
archive_file should always come from the output of get_filename_list.
Should always return a boolean. Failures should return False.
Rebuilding the archive without the named file is a standard way to remove a file.
"""
return False
def write_file(self, archive_file: str, data: bytes) -> bool:
"""
Writes the named file to the current archive.
Should always return a boolean. Failures should return False.
"""
return False
def get_filename_list(self) -> list[str]:
"""
Returns a list of filenames in the current archive.
Should always return a list of string. Failures should return an empty list.
"""
return []
def supports_files(self) -> bool:
"""
Returns True if the current archive supports arbitrary non-picture files.
Should always return a boolean.
If arbitrary non-picture files are not supported in the archive False should be returned.
"""
return False
def copy_from_archive(self, other_archive: Archiver) -> bool:
"""
Copies the contents of another archive to the current archive.
Should always return a boolean. Failures should return False.
"""
return False
def is_writable(self) -> bool:
"""
Returns True if the current archive is writable.
Should always return a boolean. Failures should return False.
"""
return False
def extension(self) -> str:
"""
Returns the extension that this archiver should use, e.g. ".cbz".
Should always return a string. Failures should return the empty string.
"""
return ""
def name(self) -> str:
"""
Returns the name of this archiver for display purposes, e.g. "CBZ".
Should always return a string. Failures should return the empty string.
"""
return ""
@classmethod
def is_valid(cls, path: pathlib.Path) -> bool:
"""
Returns True if the given path can be opened by this archiver.
Should always return a boolean. Failures should return False.
"""
return False
@classmethod
def open(cls, path: pathlib.Path) -> Archiver:
"""
Opens the given archive.
Should always return an Archiver.
Should never raise an exception; no file operations should take place in this method,
as is_valid will always be called before open.
"""
archiver = cls()
archiver.path = path
return archiver
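Since `Archiver` is a `runtime_checkable` Protocol whose methods all have safe defaults, the concrete archivers below subclass it and override only what they support. A minimal, hypothetical in-memory archiver is sketched here purely for illustration; it is not part of comicapi:

```python
# Hypothetical minimal archiver built on the Protocol above (not part of comicapi).
from __future__ import annotations

import pathlib

from comicapi.archivers import Archiver


class MemoryArchiver(Archiver):
    enabled = True
    hashable = False
    supported_extensions = frozenset()

    def __init__(self) -> None:
        super().__init__()
        self._files: dict[str, bytes] = {}

    def read_file(self, archive_file: str) -> bytes:
        try:
            return self._files[archive_file]
        except KeyError as e:
            raise OSError(f"{archive_file} not in archive") from e  # protocol asks for OSError

    def write_file(self, archive_file: str, data: bytes) -> bool:
        self._files[archive_file] = data
        return True

    def get_filename_list(self) -> list[str]:
        return list(self._files)

    def is_writable(self) -> bool:
        return True

    def name(self) -> str:
        return "Memory"


arch = MemoryArchiver.open(pathlib.Path("unused"))  # open() comes from the Protocol defaults
arch.write_file("page01.jpg", b"\xff\xd8")
assert isinstance(arch, Archiver)   # runtime_checkable Protocol check
print(arch.get_filename_list())     # ['page01.jpg']
```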

View File

@ -0,0 +1,115 @@
from __future__ import annotations
import logging
import os
import pathlib
from comicapi.archivers import Archiver
logger = logging.getLogger(__name__)
class FolderArchiver(Archiver):
"""Folder implementation"""
hashable = False
def __init__(self) -> None:
super().__init__()
self.comment_file_name = "ComicTaggerFolderComment.txt"
self._filename_list: list[str] = []
def get_comment(self) -> str:
try:
return (self.path / self.comment_file_name).read_text()
except OSError:
return ""
def set_comment(self, comment: str) -> bool:
self._filename_list = []
if comment:
return self.write_file(self.comment_file_name, comment.encode("utf-8"))
(self.path / self.comment_file_name).unlink(missing_ok=True)
return True
def supports_comment(self) -> bool:
return True
def read_file(self, archive_file: str) -> bytes:
try:
data = (self.path / archive_file).read_bytes()
except OSError as e:
logger.error("Error reading folder archive [%s]: %s :: %s", e, self.path, archive_file)
raise
return data
def remove_file(self, archive_file: str) -> bool:
self._filename_list = []
try:
(self.path / archive_file).unlink(missing_ok=True)
except OSError as e:
logger.error("Error removing file for folder archive [%s]: %s :: %s", e, self.path, archive_file)
return False
else:
return True
def write_file(self, archive_file: str, data: bytes) -> bool:
self._filename_list = []
try:
file_path = self.path / archive_file
file_path.parent.mkdir(exist_ok=True, parents=True)
with open(self.path / archive_file, mode="wb") as f:
f.write(data)
except OSError as e:
logger.error("Error writing folder archive [%s]: %s :: %s", e, self.path, archive_file)
return False
else:
return True
def get_filename_list(self) -> list[str]:
if self._filename_list:
return self._filename_list
filenames = []
try:
for root, _dirs, files in os.walk(self.path):
for f in files:
filenames.append(os.path.relpath(os.path.join(root, f), self.path).replace(os.path.sep, "/"))
self._filename_list = filenames
return filenames
except OSError as e:
logger.error("Error listing files in folder archive [%s]: %s", e, self.path)
return []
def supports_files(self) -> bool:
return True
def copy_from_archive(self, other_archive: Archiver) -> bool:
"""Replace the current zip with one copied from another archive"""
self._filename_list = []
try:
for filename in other_archive.get_filename_list():
data = other_archive.read_file(filename)
if data is not None:
self.write_file(filename, data)
# preserve the old comment
comment = other_archive.get_comment()
if comment is not None:
if not self.set_comment(comment):
return False
except Exception:
logger.exception("Error while copying archive from %s to %s", other_archive.path, self.path)
return False
else:
return True
def is_writable(self) -> bool:
return True
def name(self) -> str:
return "Folder"
@classmethod
def is_valid(cls, path: pathlib.Path) -> bool:
return path.is_dir()
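A quick, hypothetical round-trip with `FolderArchiver`, using a temporary directory as the "archive" (paths and contents are placeholders):

```python
# Hypothetical round-trip using FolderArchiver (temporary directory as the archive).
import pathlib
import tempfile

from comicapi.archivers import FolderArchiver

with tempfile.TemporaryDirectory() as tmp:
    folder = pathlib.Path(tmp)
    assert FolderArchiver.is_valid(folder)

    arch = FolderArchiver.open(folder)
    arch.write_file("page01.jpg", b"\xff\xd8\xff")
    arch.set_comment("example comment")

    print(arch.get_filename_list())  # e.g. ['page01.jpg', 'ComicTaggerFolderComment.txt'] (order may vary)
    print(arch.get_comment())        # example comment
```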

347
comicapi/archivers/rar.py Normal file
View File

@ -0,0 +1,347 @@
from __future__ import annotations
import functools
import logging
import os
import pathlib
import platform
import shutil
import subprocess
import tempfile
from comicapi.archivers import Archiver
try:
import rarfile
rar_support = True
except ImportError:
rar_support = False
logger = logging.getLogger(__name__)
if not rar_support:
logger.error("rar unavailable")
# windows only, keeps the cmd.exe from popping up
STARTUPINFO = None
if platform.system() == "Windows":
STARTUPINFO = subprocess.STARTUPINFO() # type: ignore
STARTUPINFO.dwFlags |= subprocess.STARTF_USESHOWWINDOW # type: ignore
class RarArchiver(Archiver):
"""RAR implementation"""
enabled = rar_support
exe = "rar"
supported_extensions = frozenset({".cbr", ".rar"})
_rar: rarfile.RarFile | None = None
_rar_setup: rarfile.ToolSetup | None = None
_writeable: bool | None = None
def __init__(self) -> None:
super().__init__()
self._filename_list: list[str] = []
def get_comment(self) -> str:
rarc = self.get_rar_obj()
return (rarc.comment if rarc else "") or ""
def set_comment(self, comment: str) -> bool:
self._reset()
if rar_support and self.exe:
try:
# write comment to temp file
with tempfile.TemporaryDirectory() as tmp_dir:
tmp_file = pathlib.Path(tmp_dir) / "rar_comment.txt"
tmp_file.write_text(comment, encoding="utf-8")
working_dir = os.path.dirname(os.path.abspath(self.path))
# use external program to write comment to Rar archive
proc_args = [
self.exe,
"c",
f"-w{working_dir}",
"-c-",
f"-z{tmp_file}",
str(self.path),
]
result = subprocess.run(
proc_args,
startupinfo=STARTUPINFO,
stdin=subprocess.DEVNULL,
capture_output=True,
encoding="utf-8",
cwd=tmp_dir,
)
if result.returncode != 0:
logger.error(
"Error writing comment to rar archive [exitcode: %d]: %s :: %s",
result.returncode,
self.path,
result.stderr,
)
return False
except OSError as e:
logger.exception("Error writing comment to rar archive [%s]: %s", e, self.path)
return False
return True
return False
def supports_comment(self) -> bool:
return True
def read_file(self, archive_file: str) -> bytes:
rarc = self.get_rar_obj()
if rarc is None:
return b""
tries = 0
while tries < 7:
try:
tries = tries + 1
data: bytes = rarc.open(archive_file).read()
entries = [(rarc.getinfo(archive_file), data)]
if entries[0][0].file_size != len(entries[0][1]):
logger.info(
"Error reading rar archive [file is not expected size: %d vs %d] %s :: %s :: tries #%d",
entries[0][0].file_size,
len(entries[0][1]),
self.path,
archive_file,
tries,
)
continue
except OSError as e:
logger.error("Error reading rar archive [%s]: %s :: %s :: tries #%d", e, self.path, archive_file, tries)
except Exception as e:
logger.error(
"Unexpected exception reading rar archive [%s]: %s :: %s :: tries #%d",
e,
self.path,
archive_file,
tries,
)
break
else:
# Success. Entries is a list of tuples: (rarinfo, filedata)
if len(entries) == 1:
return entries[0][1]
raise OSError
raise OSError
def remove_file(self, archive_file: str) -> bool:
self._reset()
if self.exe:
working_dir = os.path.dirname(os.path.abspath(self.path))
# use external program to remove file from Rar archive
result = subprocess.run(
[self.exe, "d", f"-w{working_dir}", "-c-", self.path, archive_file],
startupinfo=STARTUPINFO,
stdin=subprocess.DEVNULL,
capture_output=True,
encoding="utf-8",
cwd=self.path.absolute().parent,
)
if result.returncode != 0:
logger.error(
"Error removing file from rar archive [exitcode: %d]: %s :: %s",
result.returncode,
self.path,
archive_file,
)
return False
return True
return False
def write_file(self, archive_file: str, data: bytes) -> bool:
self._reset()
if self.exe:
archive_path = pathlib.PurePosixPath(archive_file)
archive_name = archive_path.name
archive_parent = str(archive_path.parent).lstrip("./")
working_dir = os.path.dirname(os.path.abspath(self.path))
# use external program to write file to Rar archive
result = subprocess.run(
[
self.exe,
"a",
f"-w{working_dir}",
f"-si{archive_name}",
f"-ap{archive_parent}",
"-c-",
"-ep",
self.path,
],
input=data,
startupinfo=STARTUPINFO,
capture_output=True,
cwd=self.path.absolute().parent,
)
if result.returncode != 0:
logger.error(
"Error writing rar archive [exitcode: %d]: %s :: %s :: %s",
result.returncode,
self.path,
archive_file,
result.stderr,
)
return False
return True
return False
def get_filename_list(self) -> list[str]:
if self._filename_list:
return self._filename_list
rarc = self.get_rar_obj()
tries = 0
if rar_support and rarc:
while tries < 7:
try:
tries = tries + 1
namelist = []
for item in rarc.infolist():
if item.file_size != 0:
namelist.append(item.filename)
except OSError as e:
logger.error("Error listing files in rar archive [%s]: %s :: attempt #%d", e, self.path, tries)
else:
self._filename_list = namelist
return namelist
return []
def supports_files(self) -> bool:
return True
def copy_from_archive(self, other_archive: Archiver) -> bool:
"""Replace the current archive with one copied from another archive"""
self._reset()
try:
with tempfile.TemporaryDirectory() as tmp_dir:
tmp_path = pathlib.Path(tmp_dir)
rar_cwd = tmp_path / "rar"
rar_cwd.mkdir(exist_ok=True)
rar_path = (tmp_path / self.path.name).with_suffix(".rar")
working_dir = os.path.dirname(os.path.abspath(self.path))
for filename in other_archive.get_filename_list():
(rar_cwd / filename).parent.mkdir(exist_ok=True, parents=True)
data = other_archive.read_file(filename)
if data is not None:
with open(rar_cwd / filename, mode="w+b") as tmp_file:
tmp_file.write(data)
result = subprocess.run(
[self.exe, "a", f"-w{working_dir}", "-r", "-c-", str(rar_path.absolute()), "."],
cwd=rar_cwd.absolute(),
startupinfo=STARTUPINFO,
stdin=subprocess.DEVNULL,
capture_output=True,
encoding="utf-8",
)
if result.returncode != 0:
logger.error(
"Error while copying to rar archive [exitcode: %d]: %s: %s",
result.returncode,
self.path,
result.stderr,
)
return False
self.path.unlink(missing_ok=True)
shutil.move(rar_path, self.path)
except Exception as e:
logger.exception("Error while copying to rar archive [%s]: from %s to %s", e, other_archive.path, self.path)
return False
else:
return True
@classmethod
@functools.cache
def _log_not_writeable(cls, exe: str) -> None:
logger.warning("Unable to find a useable copy of %r, will not be able to write rar files", exe)
def is_writable(self) -> bool:
return bool(self._writeable and bool(self.exe and (os.path.exists(self.exe) or shutil.which(self.exe))))
def extension(self) -> str:
return ".cbr"
def name(self) -> str:
return "RAR"
@classmethod
def _setup_rar(cls) -> None:
if cls._rar_setup is None:
assert rarfile
orig = rarfile.UNRAR_TOOL
rarfile.UNRAR_TOOL = cls.exe
try:
cls._rar_setup = rarfile.tool_setup(sevenzip=False, sevenzip2=False, force=True)
except rarfile.RarCannotExec:
rarfile.UNRAR_TOOL = orig
try:
cls._rar_setup = rarfile.tool_setup(force=True)
except rarfile.RarCannotExec as e:
logger.info(e)
if cls._writeable is None:
try:
cls._writeable = (
subprocess.run(
(cls.exe,),
startupinfo=STARTUPINFO,
capture_output=True,
# cwd=cls.path.absolute().parent,
)
.stdout.strip()
.startswith(b"RAR")
)
except OSError:
cls._writeable = False
if not cls._writeable:
cls._log_not_writeable(cls.exe or "rar")
@classmethod
def is_valid(cls, path: pathlib.Path) -> bool:
if rar_support:
assert rarfile
cls._setup_rar()
# Fallback to standard
try:
return rarfile.is_rarfile(str(path))
except rarfile.RarCannotExec as e:
logger.info(e)
return False
def _reset(self) -> None:
self._rar = None
self._filename_list = []
def get_rar_obj(self) -> rarfile.RarFile | None:
if self._rar is not None:
return self._rar
if rar_support:
try:
rarc = rarfile.RarFile(str(self.path))
self._rar = rarc
except (OSError, rarfile.RarFileError) as e:
logger.error("Unable to get rar object [%s]: %s", e, self.path)
else:
return rarc
return None

View File

@ -0,0 +1,143 @@
from __future__ import annotations
import logging
import os
import pathlib
import shutil
import tempfile
from comicapi.archivers import Archiver
try:
import py7zr
z7_support = True
except ImportError:
z7_support = False
logger = logging.getLogger(__name__)
class SevenZipArchiver(Archiver):
"""7Z implementation"""
enabled = z7_support
supported_extensions = frozenset({".7z", ".cb7"})
def __init__(self) -> None:
super().__init__()
self._filename_list: list[str] = []
# @todo: Implement Comment?
def get_comment(self) -> str:
return ""
def set_comment(self, comment: str) -> bool:
return False
def read_file(self, archive_file: str) -> bytes:
data = b""
try:
with py7zr.SevenZipFile(self.path, "r") as zf:
data = zf.read([archive_file])[archive_file].read()
except (py7zr.Bad7zFile, OSError) as e:
logger.error("Error reading 7zip archive [%s]: %s :: %s", e, self.path, archive_file)
raise
return data
def remove_file(self, archive_file: str) -> bool:
self._filename_list = []
return self.rebuild([archive_file])
def write_file(self, archive_file: str, data: bytes) -> bool:
# At the moment, no other option but to rebuild the whole
# archive w/o the indicated file. Very sucky, but maybe
# another solution can be found
files = self.get_filename_list()
self._filename_list = []
if archive_file in files:
if not self.rebuild([archive_file]):
return False
try:
# now just add the archive file as a new one
with py7zr.SevenZipFile(self.path, "a") as zf:
zf.writestr(data, archive_file)
return True
except (py7zr.Bad7zFile, OSError) as e:
logger.error("Error writing 7zip archive [%s]: %s :: %s", e, self.path, archive_file)
return False
def get_filename_list(self) -> list[str]:
if self._filename_list:
return self._filename_list
try:
with py7zr.SevenZipFile(self.path, "r") as zf:
namelist: list[str] = [file.filename for file in zf.list() if not file.is_directory]
self._filename_list = namelist
return namelist
except (py7zr.Bad7zFile, OSError) as e:
logger.error("Error listing files in 7zip archive [%s]: %s", e, self.path)
return []
def supports_files(self) -> bool:
return True
def rebuild(self, exclude_list: list[str]) -> bool:
"""Zip helper func
This recompresses the zip archive, without the files in the exclude_list
"""
self._filename_list = []
try:
# py7zr treats all archives as if they used solid compression
# so we need to get the filename list first to read all the files at once
with py7zr.SevenZipFile(self.path, mode="r") as zin:
targets = [f for f in zin.getnames() if f not in exclude_list]
with tempfile.NamedTemporaryFile(dir=os.path.dirname(self.path), delete=False) as tmp_file:
with py7zr.SevenZipFile(tmp_file.file, mode="w") as zout:
with py7zr.SevenZipFile(self.path, mode="r") as zin:
for filename, buffer in zin.read(targets).items():
zout.writef(buffer, filename)
self.path.unlink(missing_ok=True)
tmp_file.close() # Required on windows
shutil.move(tmp_file.name, self.path)
except (py7zr.Bad7zFile, OSError) as e:
logger.error("Error rebuilding 7zip file [%s]: %s", e, self.path)
return False
return True
def copy_from_archive(self, other_archive: Archiver) -> bool:
"""Replace the current zip with one copied from another archive"""
self._filename_list = []
try:
with py7zr.SevenZipFile(self.path, "w") as zout:
for filename in other_archive.get_filename_list():
data = other_archive.read_file(
filename
) # This will be very inefficient if other_archive is a 7z file
if data is not None:
zout.writestr(data, filename)
except Exception as e:
logger.error("Error while copying to 7zip archive [%s]: from %s to %s", e, other_archive.path, self.path)
return False
else:
return True
def is_writable(self) -> bool:
return True
def extension(self) -> str:
return ".cb7"
def name(self) -> str:
return "Seven Zip"
@classmethod
def is_valid(cls, path: pathlib.Path) -> bool:
return py7zr.is_7zfile(path)

160
comicapi/archivers/zip.py Normal file
View File

@ -0,0 +1,160 @@
from __future__ import annotations
import logging
import os
import pathlib
import shutil
import tempfile
import zipfile
from typing import cast
import chardet
from zipremove import ZipFile
from comicapi.archivers import Archiver
logger = logging.getLogger(__name__)
class ZipArchiver(Archiver):
"""ZIP implementation"""
supported_extensions = frozenset((".cbz", ".zip"))
def __init__(self) -> None:
super().__init__()
self._filename_list: list[str] = []
def supports_comment(self) -> bool:
return True
def get_comment(self) -> str:
with ZipFile(self.path, "r") as zf:
encoding = chardet.detect(zf.comment, True)
if encoding["confidence"] > 60:
try:
comment = zf.comment.decode(encoding["encoding"])
except UnicodeDecodeError:
comment = zf.comment.decode("utf-8", errors="replace")
else:
comment = zf.comment.decode("utf-8", errors="replace")
return comment
def set_comment(self, comment: str) -> bool:
with ZipFile(self.path, mode="a") as zf:
zf.comment = bytes(comment, "utf-8")
return True
def read_file(self, archive_file: str) -> bytes:
with ZipFile(self.path, mode="r") as zf:
try:
data = zf.read(archive_file)
except (zipfile.BadZipfile, OSError) as e:
logger.exception("Error reading zip archive [%s]: %s :: %s", e, self.path, archive_file)
raise
return data
def remove_file(self, archive_file: str) -> bool:
files = self.get_filename_list()
self._filename_list = []
try:
with ZipFile(self.path, mode="a", allowZip64=True, compression=zipfile.ZIP_DEFLATED) as zf:
if archive_file in files:
zf.repack([zf.remove(archive_file)])
return True
except (zipfile.BadZipfile, OSError) as e:
logger.error("Error writing zip archive [%s]: %s :: %s", e, self.path, archive_file)
return False
def write_file(self, archive_file: str, data: bytes) -> bool:
files = self.get_filename_list()
self._filename_list = []
try:
# now just add the archive file as a new one
with ZipFile(self.path, mode="a", allowZip64=True, compression=zipfile.ZIP_DEFLATED) as zf:
if archive_file in files:
zf.repack([zf.remove(archive_file)])
zf.writestr(archive_file, data)
return True
except (zipfile.BadZipfile, OSError) as e:
logger.error("Error writing zip archive [%s]: %s :: %s", e, self.path, archive_file)
return False
def get_filename_list(self) -> list[str]:
if self._filename_list:
return self._filename_list
try:
with ZipFile(self.path, mode="r") as zf:
self._filename_list = [file.filename for file in zf.infolist() if not file.is_dir()]
return self._filename_list
except (zipfile.BadZipfile, OSError) as e:
logger.error("Error listing files in zip archive [%s]: %s", e, self.path)
return []
def supports_files(self) -> bool:
return True
def rebuild(self, exclude_list: list[str]) -> bool:
"""Zip helper func
This recompresses the zip archive, without the files in the exclude_list
"""
self._filename_list = []
try:
with ZipFile(
tempfile.NamedTemporaryFile(dir=os.path.dirname(self.path), delete=False), "w", allowZip64=True
) as zout:
with ZipFile(self.path, mode="r") as zin:
for item in zin.infolist():
buffer = zin.read(item.filename)
if item.filename not in exclude_list:
zout.writestr(item, buffer)
# preserve the old comment
zout.comment = zin.comment
# replace with the new file
self.path.unlink(missing_ok=True)
zout.close() # Required on windows
shutil.move(cast(str, zout.filename), self.path)
except (zipfile.BadZipfile, OSError) as e:
logger.error("Error rebuilding zip file [%s]: %s", e, self.path)
return False
return True
def copy_from_archive(self, other_archive: Archiver) -> bool:
"""Replace the current zip with one copied from another archive"""
self._filename_list = []
try:
with ZipFile(self.path, mode="w", allowZip64=True) as zout:
for filename in other_archive.get_filename_list():
data = other_archive.read_file(filename)
if data is not None:
zout.writestr(filename, data)
# preserve the old comment
comment = other_archive.get_comment()
if comment is not None:
if not self.set_comment(comment):
return False
except Exception as e:
logger.error("Error while copying to zip archive [%s]: from %s to %s", e, other_archive.path, self.path)
return False
else:
return True
def is_writable(self) -> bool:
return True
def extension(self) -> str:
return ".cbz"
def name(self) -> str:
return "ZIP"
@classmethod
def is_valid(cls, path: pathlib.Path) -> bool:
return zipfile.is_zipfile(path) # only checks the central directory at the end of the archive
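
A hedged usage sketch (not part of the diffed file). It assumes the open() classmethod inherited from the Archiver base class, which comicarchive.py below uses as ZipArchiver.open(...), and exercises the zipremove-backed remove()/repack() path through write_file() and remove_file(); the file name is illustrative.

import pathlib

from comicapi.archivers import ZipArchiver

archiver = ZipArchiver.open(pathlib.Path("example.cbz"))
print(archiver.get_filename_list())                    # cached after the first call
archiver.write_file("ComicInfo.xml", b"<ComicInfo/>")  # repacks in place if the entry already exists
print(archiver.read_file("ComicInfo.xml"))
archiver.remove_file("ComicInfo.xml")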

473
comicapi/comicarchive.py Normal file

@@ -0,0 +1,473 @@
"""A class to represent a single comic, be it file or folder of images"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import hashlib
import importlib.util
import inspect
import io
import itertools
import logging
import os
import pathlib
import shutil
import sys
from collections.abc import Iterable
from comicapi import utils
from comicapi.archivers import Archiver, UnknownArchiver, ZipArchiver
from comicapi.genericmetadata import FileHash, GenericMetadata
from comicapi.tags import Tag
from comictaggerlib.ctversion import version
logger = logging.getLogger(__name__)
archivers: list[type[Archiver]] = []
tags: dict[str, Tag] = {}
def load_archive_plugins(local_plugins: Iterable[type[Archiver]] = tuple()) -> None:
if archivers:
return
if sys.version_info < (3, 10):
from importlib_metadata import entry_points
else:
from importlib.metadata import entry_points
builtin: list[type[Archiver]] = []
archive_plugins: list[type[Archiver]] = []
# A list is used; the first matching plugin wins
for ep in itertools.chain(entry_points(group="comicapi.archiver")):
try:
spec = importlib.util.find_spec(ep.module)
except ValueError:
spec = None
try:
archiver: type[Archiver] = ep.load()
if ep.module.startswith("comicapi"):
builtin.append(archiver)
else:
archive_plugins.append(archiver)
except Exception:
if spec and spec.has_location:
logger.exception("Failed to load archive plugin: %s from %s", ep.name, spec.origin)
else:
logger.exception("Failed to load archive plugin: %s", ep.name)
archivers.clear()
archivers.extend(local_plugins)
archivers.extend(archive_plugins)
archivers.extend(builtin)
def load_tag_plugins(version: str = f"ComicAPI/{version}", local_plugins: Iterable[type[Tag]] = tuple()) -> None:
if tags:
return
if sys.version_info < (3, 10):
from importlib_metadata import entry_points
else:
from importlib.metadata import entry_points
builtin: dict[str, Tag] = {}
tag_plugins: dict[str, tuple[Tag, str]] = {}
# A dict is used, last plugin wins
for ep in entry_points(group="comicapi.tags"):
location = "Unknown"
try:
_spec = importlib.util.find_spec(ep.module)
if _spec and _spec.has_location and _spec.origin:
location = _spec.origin
except ValueError:
location = "Unknown"
try:
tag: type[Tag] = ep.load()
if ep.module.startswith("comicapi"):
builtin[tag.id] = tag(version)
else:
if tag.id in tag_plugins:
logger.warning(
"Plugin %s from %s is overriding the existing plugin for %s tags",
ep.module,
location,
tag.id,
)
tag_plugins[tag.id] = (tag(version), location)
except Exception:
logger.exception("Failed to load tag plugin: %s from %s", ep.name, location)
# A dict is used, last plugin wins
for tag in local_plugins:
tag_plugins[tag.id] = (tag(version), "Local")
for tag_id in set(builtin.keys()).intersection(tag_plugins):
location = tag_plugins[tag_id][1]
logger.warning("Builtin plugin for %s tags are being overridden by a plugin from %s", tag_id, location)
tags.clear()
tags.update(builtin)
tags.update({s[0]: s[1][0] for s in tag_plugins.items()})
class ComicArchive:
logo_data = b""
pil_available: bool | None = None
def __init__(
self,
path: pathlib.Path | str | Archiver,
default_image_path: pathlib.Path | str | None = None,
hash_archive: str = "",
) -> None:
self.md: dict[str, GenericMetadata] = {}
self.page_count: int | None = None
self.page_list: list[str] = []
self.hash_archive = hash_archive
self.reset_cache()
self.default_image_path = default_image_path
if isinstance(path, Archiver):
self.path = path.path
self.archiver: Archiver = path
else:
self.path = pathlib.Path(path).absolute()
self.archiver = UnknownArchiver.open(self.path)
load_archive_plugins()
load_tag_plugins()
archiver_missing = True
for archiver in archivers:
if self.path.suffix in archiver.supported_extensions and archiver.is_valid(self.path):
self.archiver = archiver.open(self.path)
archiver_missing = False
break
if archiver_missing:
for archiver in archivers:
if archiver.enabled and archiver.is_valid(self.path):
self.archiver = archiver.open(self.path)
break
if not ComicArchive.logo_data and self.default_image_path:
with open(self.default_image_path, mode="rb") as fd:
ComicArchive.logo_data = fd.read()
def reset_cache(self) -> None:
"""Clears the cached data"""
self.page_count = None
self.page_list.clear()
self.md.clear()
def load_cache(self, tag_ids: Iterable[str]) -> None:
for tag_id in tag_ids:
if tag_id not in tags:
continue
tag = tags[tag_id]
if not tag.enabled:
continue
md = tag.read_tags(self.archiver)
if not md.is_empty:
self.md[tag_id] = md
def get_supported_tags(self) -> list[str]:
return [tag_id for tag_id, tag in tags.items() if tag.enabled and tag.supports_tags(self.archiver)]
def rename(self, path: pathlib.Path | str) -> None:
new_path = pathlib.Path(path).absolute()
if new_path == self.path:
return
os.makedirs(new_path.parent, 0o777, True)
shutil.move(self.path, new_path)
self.path = new_path
self.archiver.path = pathlib.Path(path)
def is_writable(self, check_archive_status: bool = True) -> bool:
if isinstance(self.archiver, UnknownArchiver):
return False
if check_archive_status and not self.archiver.is_writable():
return False
if not (os.access(self.path, os.W_OK) or os.access(self.path.parent, os.W_OK)):
return False
return True
def is_zip(self) -> bool:
return self.archiver.name() == "ZIP"
def seems_to_be_a_comic_archive(self) -> bool:
if (
not (isinstance(self.archiver, UnknownArchiver))
and self.get_number_of_pages() > 0
and self.archiver.is_valid(self.path)
):
return True
return False
def extension(self) -> str:
return self.archiver.extension()
def read_tags(self, tag_id: str) -> GenericMetadata:
if tag_id in self.md:
return self.md[tag_id]
md = GenericMetadata()
tag = tags[tag_id]
if tag.enabled and tag.has_tags(self.archiver):
md = tag.read_tags(self.archiver)
md.apply_default_page_list(self.get_page_name_list())
return md
def read_raw_tags(self, tag_id: str) -> str:
if not tags[tag_id].enabled:
return ""
return tags[tag_id].read_raw_tags(self.archiver)
def write_tags(self, metadata: GenericMetadata, tag_id: str) -> bool:
if tag_id in self.md:
del self.md[tag_id]
if not tags[tag_id].enabled:
logger.warning("%s tags not enabled", tags[tag_id].name())
return False
self.apply_archive_info_to_metadata(metadata, True, True, hash_archive=self.hash_archive)
return tags[tag_id].write_tags(metadata, self.archiver)
def has_tags(self, tag_id: str) -> bool:
if tag_id in self.md:
return True
if not tags[tag_id].enabled:
return False
return tags[tag_id].has_tags(self.archiver)
def remove_tags(self, tag_id: str) -> bool:
if tag_id in self.md:
del self.md[tag_id]
if not tags[tag_id].enabled:
return False
return tags[tag_id].remove_tags(self.archiver)
def get_page(self, index: int) -> bytes:
image_data = b""
filename = self.get_page_name(index)
if filename:
try:
image_data = self.archiver.read_file(filename) or b""
except Exception:
logger.exception("Error reading in page %d. Substituting logo page.", index)
image_data = ComicArchive.logo_data
return image_data
def get_page_name(self, index: int) -> str:
if index is None:
return ""
page_list = self.get_page_name_list()
num_pages = len(page_list)
if num_pages == 0 or index >= num_pages:
return ""
return page_list[index]
def get_scanner_page_index(self) -> int | None:
scanner_page_index = None
# make a guess at the scanner page
name_list = self.get_page_name_list()
count = self.get_number_of_pages()
# too few pages to really know
if count < 5:
return None
# count the length of every filename, and count occurrences
length_buckets: dict[int, int] = {}
for name in name_list:
fname = os.path.split(name)[1]
length = len(fname)
if length in length_buckets:
length_buckets[length] += 1
else:
length_buckets[length] = 1
# sort by most common
sorted_buckets = sorted(length_buckets.items(), key=lambda tup: (tup[1], tup[0]), reverse=True)
# statistical mode occurrence is first
mode_length = sorted_buckets[0][0]
# we are only going to consider the final image file:
final_name = os.path.split(name_list[count - 1])[1]
common_length_list = []
for name in name_list:
if len(os.path.split(name)[1]) == mode_length:
common_length_list.append(os.path.split(name)[1])
prefix = os.path.commonprefix(common_length_list)
if mode_length <= 7 and prefix == "":
# probably all numbers
if len(final_name) > mode_length:
scanner_page_index = count - 1
# see if the last page doesn't start with the same prefix as most others
elif not final_name.startswith(prefix):
scanner_page_index = count - 1
return scanner_page_index
def get_page_name_list(self) -> list[str]:
if not self.page_list:
self.__import_pil__() # Import pillow for list of supported extensions
self.page_list = utils.get_page_name_list(self.archiver.get_filename_list())
return self.page_list
def get_number_of_pages(self) -> int:
if self.page_count is None:
self.page_count = len(self.get_page_name_list())
return self.page_count
def __import_pil__(self) -> bool:
if self.pil_available is not None:
return self.pil_available
try:
from PIL import Image
Image.init()
utils.KNOWN_IMAGE_EXTENSIONS.update([ext for ext, typ in Image.EXTENSION.items() if typ in Image.OPEN])
self.pil_available = True
except Exception:
self.pil_available = False
logger.exception("Failed to load Pillow")
return False
return True
def apply_archive_info_to_metadata(
self,
md: GenericMetadata,
calc_page_sizes: bool = False,
detect_double_page: bool = False,
*,
hash_archive: str = "",
) -> None:
hash_archive = hash_archive
md.page_count = self.get_number_of_pages()
md.apply_default_page_list(self.get_page_name_list())
if not self.seems_to_be_a_comic_archive():
return
if hash_archive in hashlib.algorithms_available and not md.original_hash:
hasher = getattr(hashlib, hash_archive, hash_archive)
try:
with self.archiver.path.open("b+r") as archive:
digest = utils.file_digest(archive, hasher)
if len(inspect.signature(digest.hexdigest).parameters) > 0:
length = digest.name.rpartition("_")[2]
if not length.isdigit():
length = "128"
md.original_hash = FileHash(digest.name, digest.hexdigest(int(length) // 8)) # type: ignore[call-arg]
else:
md.original_hash = FileHash(digest.name, digest.hexdigest())
except Exception:
logger.exception("Failed to calculate original hash for '%s'", self.archiver.path)
if not calc_page_sizes:
return
for p in md.pages:
if p.byte_size is None or p.height is None or p.width is None or p.double_page is None:
try:
data = self.get_page(p.archive_index)
p.byte_size = len(data)
if not data or not self.__import_pil__():
continue
from PIL import Image
im = Image.open(io.BytesIO(data))
w, h = im.size
p.height = h
p.width = w
if detect_double_page:
p.double_page = p.is_double_page()
except Exception as e:
logger.exception("Error decoding image [%s] %s :: image %s", e, self.path, p.archive_index)
def metadata_from_filename(
self,
parser: utils.Parser = utils.Parser.ORIGINAL,
remove_c2c: bool = False,
remove_fcbd: bool = False,
remove_publisher: bool = False,
split_words: bool = False,
allow_issue_start_with_letter: bool = False,
protofolius_issue_number_scheme: bool = False,
) -> GenericMetadata:
metadata = GenericMetadata()
filename_info = utils.parse_filename(
self.path.name,
parser=parser,
remove_c2c=remove_c2c,
remove_fcbd=remove_fcbd,
remove_publisher=remove_publisher,
split_words=split_words,
allow_issue_start_with_letter=allow_issue_start_with_letter,
protofolius_issue_number_scheme=protofolius_issue_number_scheme,
)
metadata.alternate_number = utils.xlate(filename_info.get("alternate", None))
metadata.issue = utils.xlate(filename_info.get("issue", None))
metadata.issue_count = utils.xlate_int(filename_info.get("issue_count", None))
metadata.publisher = utils.xlate(filename_info.get("publisher", None))
metadata.series = utils.xlate(filename_info.get("series", None))
metadata.title = utils.xlate(filename_info.get("title", None))
metadata.volume = utils.xlate_int(filename_info.get("volume", None))
metadata.volume_count = utils.xlate_int(filename_info.get("volume_count", None))
metadata.year = utils.xlate_int(filename_info.get("year", None))
metadata.scan_info = utils.xlate(filename_info.get("remainder", None))
if filename_info.get("fcbd", None):
metadata.format = "FCBD"
metadata.tags.add("FCBD")
if filename_info.get("c2c", None):
metadata.tags.add("c2c")
if filename_info.get("annual", None):
metadata.format = "Annual"
if filename_info.get("format", None):
metadata.format = filename_info["format"]
metadata.is_empty = False
return metadata
def export_as_zip(self, zip_filename: pathlib.Path) -> bool:
if self.archiver.name() == "ZIP":
# nothing to do, we're already a zip
return True
zip_archiver = ZipArchiver.open(zip_filename)
return zip_archiver.copy_from_archive(self.archiver)
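
Two hedged sketches follow (neither is part of the diffed file). The first shows a typical read path: register the built-in archiver and tag plugins, open a file, and read ComicRack tags via the "cr" id defined in comicapi/tags/comicrack.py below; the filename is illustrative and assumed to exist.

from comicapi.comicarchive import ComicArchive, load_archive_plugins, load_tag_plugins

load_archive_plugins()
load_tag_plugins()
ca = ComicArchive("Example Series #001 (2007).cbz")  # illustrative path
if ca.seems_to_be_a_comic_archive():
    print(ca.get_number_of_pages())
    if ca.has_tags("cr"):  # ComicRack / ComicInfo.xml tags
        md = ca.read_tags("cr")
        print(md.series, md.issue)

The second mirrors the variable-length digest branch in apply_archive_info_to_metadata() above: shake_* digests require a length argument to hexdigest(), which the code detects via inspect.signature() and derives from the algorithm name. The input bytes are illustrative.

import hashlib
import inspect

digest = hashlib.new("shake_128")
digest.update(b"example archive bytes")
if len(inspect.signature(digest.hexdigest).parameters) > 0:
    length = digest.name.rpartition("_")[2]    # "shake_128" -> "128"
    print(digest.hexdigest(int(length) // 8))  # 128 bits -> 16 bytes
else:
    print(digest.hexdigest())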


@@ -0,0 +1,5 @@
from __future__ import annotations
import importlib.resources
data_path = importlib.resources.files(__package__)


@@ -0,0 +1,143 @@
{
"Marvel":{
"marvel comics": "",
"aircel comics": "Aircel Comics",
"aircel": "Aircel Comics",
"atlas comics": "Atlas Comics",
"atlas": "Atlas Comics",
"crossgen comics": "CrossGen comics",
"crossgen": "CrossGen comics",
"curtis magazines": "Curtis Magazines",
"disney books group": "Disney Books Group",
"disney books": "Disney Books Group",
"disney kingdoms": "Disney Kingdoms",
"epic comics group": "Epic Comics",
"epic comics": "Epic Comics",
"epic": "Epic Comics",
"eternity comics": "Eternity Comics",
"humorama": "Humorama",
"icon comics": "Icon Comics",
"infinite comics": "Infinite Comics",
"malibu comics": "Malibu Comics",
"malibu": "Malibu Comics",
"marvel 2099": "Marvel 2099",
"marvel absurd": "Marvel Absurd",
"marvel adventures": "Marvel Adventures",
"marvel age": "Marvel Age",
"marvel books": "Marvel Books",
"marvel comics 2": "Marvel Comics 2",
"marvel digital comics unlimited": "Marvel Unlimited",
"marvel edge": "Marvel Edge",
"marvel frontier": "Marvel Frontier",
"marvel illustrated": "Marvel Illustrated",
"marvel knights": "Marvel Knights",
"marvel magazine group": "Marvel Magazine Group",
"marvel mangaverse": "Marvel Mangaverse",
"marvel monsters group": "Marvel Monsters Group",
"marvel music": "Marvel Music",
"marvel next": "Marvel Next",
"marvel noir": "Marvel Noir",
"marvel press": "Marvel Press",
"marvel uk": "Marvel UK",
"marvel unlimited": "Marvel Unlimited",
"max": "MAX",
"mc2": "Marvel Comics 2",
"new universe": "New Universe",
"non-pareil publishing corp.": "Non-Pareil Publishing Corp.",
"paramount comics": "Paramount Comics",
"power comics": "Power Comics",
"razorline": "Razorline",
"star comics": "Star Comics",
"timely comics": "Timely Comics",
"timely": "Timely Comics",
"tsunami": "Tsunami",
"ultimate comics": "Ultimate Comics",
"ultimate marvel": "Ultimate Marvel",
"vital publications, inc.": "Vital Publications, Inc."
},
"DC Comics":{
"dc_comics": "",
"dc": "",
"dccomics": "",
"!mpact comics": "Impact Comics",
"all star dc": "All-Star",
"all star": "All-Star",
"all-star dc": "All-Star",
"all-star": "All-Star",
"america's best comics": "America's Best Comics",
"black label": "DC Black Label",
"cliffhanger": "Cliffhanger",
"cmx manga": "CMX Manga",
"dc black label": "DC Black Label",
"dc focus": "DC Focus",
"dc ink": "DC Ink",
"dc zoom": "DC Zoom",
"earth m": "Earth M",
"earth one": "Earth One",
"earth-m": "Earth M",
"elseworlds": "Elseworlds",
"eo": "Earth One",
"first wave": "First Wave",
"focus": "DC Focus",
"helix": "Helix",
"homage comics": "Homage Comics",
"impact comics": "Impact Comics",
"impact! comics": "Impact Comics",
"johnny dc": "Johnny DC",
"mad": "Mad",
"minx": "Minx",
"paradox press": "Paradox Press",
"piranha press": "Piranha Press",
"sandman universe": "Sandman Universe",
"tangent comics": "Tangent Comics",
"tsr": "TSR",
"vertigo": "Vertigo",
"wildstorm productions": "WildStorm Productions",
"wildstorm signature": "WildStorm Productions",
"wildstorm": "WildStorm Productions",
"wonder comics": "Wonder Comics",
"young animal": "Young Animal",
"zuda comics": "Zuda Comics",
"zuda": "Zuda Comics"
},
"Dark Horse Comics":{
"berger books": "Berger Books",
"comics' greatest world": "Dark Horse Heroes",
"dark horse digital": "Dark Horse Digital",
"dark horse heroes": "Dark Horse Heroes",
"dark horse manga": "Dark Horse Manga",
"dh deluxe": "DH Deluxe",
"dh press": "DH Press",
"kitchen sink books": "Kitchen Sink Books",
"legend": "Legend",
"m press": "M Press",
"maverick": "Maverick"
},
"Archie Comics":{
"archie action": "Archie Action",
"archie adventure Series": "Archie Adventure Series",
"archie horror": "Archie Horror",
"dark circle Comics": "Dark Circle Comics",
"dark circle": "Dark Circle Comics",
"mighty comics Group": "Mighty Comics Group",
"radio comics": "Mighty Comics Group",
"red circle Comics": "Dark Circle Comics",
"red circle": "Dark Circle Comics"
},
"Image Comics": {
"Image": "",
"avalon studios": "Avalon Studios",
"desperado publishing": "Desperado Publishing",
"extreme studios": "Extreme Studios",
"gorilla comics": "Gorilla Comics",
"highbrow entertainment": "Highbrow Entertainment",
"shadowline": "Shadowline",
"skybound entertainment": "Skybound Entertainment",
"todd mcfarlane productions": "Todd McFarlane Productions",
"top cow productions": "Top Cow Productions"
}
}

419
comicapi/filenamelexer.py Normal file

@@ -0,0 +1,419 @@
# Extracted and mutilated from https://github.com/lordwelch/wsfmt
# Which was extracted and mutilated from https://github.com/golang/go/tree/master/src/text/template/parse
from __future__ import annotations
import calendar
import os
import unicodedata
from enum import Enum, auto
from itertools import chain
from typing import Any, Callable, Protocol
class ItemType(Enum):
Error = auto() # Error occurred; value is text of error
EOF = auto()
Text = auto() # Text
LeftParen = auto()
Number = auto() # Simple number
IssueNumber = auto() # Preceded by a # Symbol
RightParen = auto()
Space = auto() # Run of spaces separating arguments
Dot = auto()
LeftBrace = auto()
RightBrace = auto()
LeftSBrace = auto()
RightSBrace = auto()
Symbol = auto()
Skip = auto() # __ or --: no title, issue, or series information beyond this point
Operator = auto()
Calendar = auto()
InfoSpecifier = auto() # Specifies type of info e.g. v1 for 'volume': 1
ArchiveType = auto()
Honorific = auto()
Publisher = auto()
Keywords = auto()
FCBD = auto()
ComicType = auto()
C2C = auto()
braces = [
ItemType.LeftBrace,
ItemType.LeftParen,
ItemType.LeftSBrace,
ItemType.RightBrace,
ItemType.RightParen,
ItemType.RightSBrace,
]
eof = chr(0)
key = {
"fcbd": ItemType.FCBD,
"freecomicbookday": ItemType.FCBD,
"cbr": ItemType.ArchiveType,
"cbz": ItemType.ArchiveType,
"cbt": ItemType.ArchiveType,
"cb7": ItemType.ArchiveType,
"rar": ItemType.ArchiveType,
"zip": ItemType.ArchiveType,
"tar": ItemType.ArchiveType,
"7z": ItemType.ArchiveType,
"annual": ItemType.ComicType,
"volume": ItemType.InfoSpecifier,
"vol.": ItemType.InfoSpecifier,
"vol": ItemType.InfoSpecifier,
"v": ItemType.InfoSpecifier,
"of": ItemType.InfoSpecifier,
"dc": ItemType.Publisher,
"marvel": ItemType.Publisher,
"covers": ItemType.InfoSpecifier,
"c2c": ItemType.C2C,
"mr": ItemType.Honorific,
"ms": ItemType.Honorific,
"mrs": ItemType.Honorific,
"dr": ItemType.Honorific,
}
class Item:
def __init__(self, typ: ItemType, pos: int, val: str) -> None:
self.typ: ItemType = typ
self.pos: int = pos
self.val: str = val
self.no_space = False
def __repr__(self) -> str:
return f"{self.val}: index: {self.pos}: {self.typ}"
class LexerFunc(Protocol):
def __call__(self, __origin: Lexer) -> LexerFunc | None: ...
class Lexer:
def __init__(self, string: str, allow_issue_start_with_letter: bool = False) -> None:
self.input: str = string # The string being scanned
# The next lexing function to enter
self.state: LexerFunc | None = None
self.pos: int = -1 # Current position in the input
self.start: int = 0 # Start position of this item
self.lastPos: int = 0 # Position of most recent item returned by nextItem
self.paren_depth: int = 0 # Nesting depth of ( ) exprs
self.brace_depth: int = 0 # Nesting depth of { }
self.sbrace_depth: int = 0 # Nesting depth of [ ]
self.items: list[Item] = []
self.allow_issue_start_with_letter = allow_issue_start_with_letter
# Next returns the next rune in the input.
def get(self) -> str:
if int(self.pos) >= len(self.input) - 1:
self.pos += 1
return eof
self.pos += 1
return self.input[self.pos]
# Peek returns but does not consume the next rune in the input.
def peek(self) -> str:
if int(self.pos) >= len(self.input) - 1:
return eof
return self.input[self.pos + 1]
def backup(self) -> None:
self.pos -= 1
# Emit passes an item back to the client.
def emit(self, t: ItemType) -> None:
self.items.append(Item(t, self.start, self.input[self.start : self.pos + 1]))
self.start = self.pos + 1
# Ignore skips over the pending input before this point.
def ignore(self) -> None:
self.start = self.pos
# Accept consumes the next rune if it's from the valid set.
def accept(self, valid: str | Callable[[str], bool]) -> bool:
if isinstance(valid, str):
if self.get() in valid:
return True
else:
if valid(self.get()):
return True
self.backup()
return False
# AcceptRun consumes a run of runes from the valid set.
def accept_run(self, valid: str | Callable[[str], bool]) -> bool:
initial = self.pos
if isinstance(valid, str):
while self.get() in valid:
continue
else:
while valid(self.get()):
continue
self.backup()
return initial != self.pos
def scan_number(self) -> bool:
digits = "0123456789.,"
if not self.accept_run(lambda x: x.isnumeric() or x in digits):
return False
if self.input[self.pos] == ".":
self.backup()
self.accept_run(str.isalpha)
return True
# Runs the state machine for the lexer.
def run(self) -> None:
self.state = lex_filename
while self.state is not None:
self.state = self.state(self)
# Errorf returns an error token and terminates the scan by passing
# back None as the next state, which stops the run loop.
def errorf(lex: Lexer, message: str) -> Any:
lex.items.append(Item(ItemType.Error, lex.start, message))
return None
# Scans the elements of the filename.
def lex_filename(lex: Lexer) -> LexerFunc | None:
r = lex.get()
if r == eof:
if lex.paren_depth != 0:
errorf(lex, "unclosed left paren")
return None
if lex.brace_depth != 0:
errorf(lex, "unclosed left paren")
return None
lex.emit(ItemType.EOF)
return None
elif is_space(r):
if r == "_" and lex.peek() == "_":
lex.get()
lex.emit(ItemType.Skip)
else:
return lex_space
elif r == ".":
r = lex.peek()
if r.isnumeric() and lex.pos > 0 and is_space(lex.input[lex.pos - 1]):
return lex_number
lex.emit(ItemType.Dot)
return lex_filename
elif r == "'":
r = lex.peek()
if r.isdigit():
return lex_number
if is_symbol(r):
lex.accept_run(is_symbol)
lex.emit(ItemType.Symbol)
else:
return lex_text
elif r.isnumeric():
lex.backup()
return lex_number
elif r == "#":
if lex.allow_issue_start_with_letter and is_alpha_numeric(lex.peek()):
return lex_issue_number
elif lex.peek().isnumeric() or lex.peek() in "-+.":
return lex_issue_number
lex.emit(ItemType.Symbol)
elif is_operator(r):
if r == "-" and lex.peek() == "-":
lex.get()
lex.emit(ItemType.Skip)
else:
return lex_operator
elif is_alpha_numeric(r):
lex.backup()
return lex_text
elif r == "(":
lex.emit(ItemType.LeftParen)
lex.paren_depth += 1
elif r == ")":
lex.emit(ItemType.RightParen)
lex.paren_depth -= 1
if lex.paren_depth < 0:
errorf(lex, "unexpected right paren " + r)
return None
elif r == "{":
lex.emit(ItemType.LeftBrace)
lex.brace_depth += 1
elif r == "}":
lex.emit(ItemType.RightBrace)
lex.brace_depth -= 1
if lex.brace_depth < 0:
errorf(lex, "unexpected right brace " + r)
return None
elif r == "[":
lex.emit(ItemType.LeftSBrace)
lex.sbrace_depth += 1
elif r == "]":
lex.emit(ItemType.RightSBrace)
lex.sbrace_depth -= 1
if lex.sbrace_depth < 0:
errorf(lex, "unexpected right brace " + r)
return None
elif is_symbol(r):
if unicodedata.category(r) == "Sc":
return lex_currency
lex.accept_run(is_symbol)
lex.emit(ItemType.Symbol)
else:
errorf(lex, "unrecognized character in action: " + repr(r))
return None
return lex_filename
def lex_currency(lex: Lexer) -> LexerFunc:
orig = lex.pos
lex.accept_run(is_space)
if lex.peek().isnumeric():
return lex_number
else:
lex.pos = orig
# We don't have a number with this currency symbol. Don't treat it specially
lex.emit(ItemType.Symbol)
return lex_filename
def lex_operator(lex: Lexer) -> LexerFunc:
lex.accept_run("-|:;")
lex.emit(ItemType.Operator)
return lex_filename
# LexSpace scans a run of space characters.
# One space has already been seen.
def lex_space(lex: Lexer) -> LexerFunc:
lex.accept_run(is_space)
lex.emit(ItemType.Space)
return lex_filename
# lex_text scans an alphanumeric run.
def lex_text(lex: Lexer) -> LexerFunc:
while True:
r = lex.get()
if is_alpha_numeric(r) or r in "'":
if r.isnumeric(): # E.g. v1
word = lex.input[lex.start : lex.pos]
if key.get(word.casefold(), None) == ItemType.InfoSpecifier:
lex.backup()
lex.emit(key[word.casefold()])
return lex_filename
else:
lex.backup()
word = lex.input[lex.start : lex.pos + 1]
if word.casefold() in key:
if key[word.casefold()] in (ItemType.Honorific, ItemType.InfoSpecifier):
lex.accept(".")
lex.emit(key[word.casefold()])
elif cal(word):
lex.emit(ItemType.Calendar)
else:
lex.emit(ItemType.Text)
break
return lex_filename
def cal(value: str) -> bool:
return value.title() in set(chain(calendar.month_abbr, calendar.month_name, calendar.day_abbr, calendar.day_name))
def lex_number(lex: Lexer) -> LexerFunc | None:
if not lex.scan_number():
return errorf(lex, "bad number syntax: " + lex.input[lex.start : lex.pos])
# Complex number logic removed; it interferes with math operations when there is no space
if lex.input[lex.start] == "#":
lex.emit(ItemType.IssueNumber)
elif not lex.input[lex.pos].isnumeric():
# Assume that 80th is just text and not a number
lex.emit(ItemType.Text)
else:
# Used to check for a '$'
endNumber = lex.pos
# Consume any spaces
lex.accept_run(is_space)
# This number starts with a '$'; emit it as Text instead of a Number
if "Sc" == unicodedata.category(lex.input[lex.start]):
lex.pos = endNumber
lex.emit(ItemType.Text)
# This number ends in a '$'; if there is a number on the other side, we assume the '$' belongs to the following number
elif "Sc" == unicodedata.category(lex.get()):
# Store the end of the number '$'. We still need to check to see if there is a number coming up
endCurrency = lex.pos
# Consume any spaces
lex.accept_run(is_space)
# This is a number
if lex.peek().isnumeric():
# We go back to the original number before the '$' and emit a number
lex.pos = endNumber
lex.emit(ItemType.Number)
else:
# There was no following number; reset to the '$' and emit as Text
lex.pos = endCurrency
lex.emit(ItemType.Text)
else:
# We go back to the original number there is no '$'
lex.pos = endNumber
lex.emit(ItemType.Number)
return lex_filename
def lex_issue_number(lex: Lexer) -> LexerFunc:
# Only called when lex.input[lex.start] == "#"
original_start = lex.pos
lex.accept_run(str.isalpha)
if lex.peek().isnumeric():
return lex_number
else:
lex.pos = original_start
lex.emit(ItemType.Symbol)
return lex_filename
def is_space(character: str) -> bool:
return character in "_ \t"
# is_alpha_numeric reports whether the character is alphabetic or numeric.
def is_alpha_numeric(character: str) -> bool:
return character.isalpha() or character.isnumeric()
def is_operator(character: str) -> bool:
return character in "-|:;/\\"
def is_symbol(character: str) -> bool:
return unicodedata.category(character)[0] in "PS" and character != "."
def Lex(filename: str, allow_issue_start_with_letter: bool = False) -> Lexer:
lex = Lexer(os.path.basename(filename), allow_issue_start_with_letter)
lex.run()
return lex
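
A hedged sketch (not part of the diffed file): Lex() runs the state machine over the file's basename and leaves the tokens on lex.items; each Item prints as "value: index: position: type". The filename is illustrative.

from comicapi.filenamelexer import Lex

lex = Lex("Amazing Adventures v2 #001 (2007).cbz")
for item in lex.items:
    print(item)  # e.g. "#001: index: 22: ItemType.IssueNumber"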

1280
comicapi/filenameparser.py Normal file

File diff suppressed because it is too large

884
comicapi/genericmetadata.py Normal file

@@ -0,0 +1,884 @@
"""A class for internal metadata storage
The goal of this class is to handle ALL the data that might come from various
tagging schemes and databases, such as ComicVine or GCD. This makes conversion
possible, however lossy it might be
"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import copy
import dataclasses
import hashlib
import logging
from collections.abc import Sequence
from typing import TYPE_CHECKING, Any, Union, overload
from typing_extensions import NamedTuple
from comicapi import merge, utils
from comicapi._url import Url, parse_url
from comicapi.utils import norm_fold
# needed for runtime type guessing
if TYPE_CHECKING:
Union
logger = logging.getLogger(__name__)
REMOVE = object()
Credit = merge.Credit
class PageType(merge.StrEnum):
"""
These page info classes are exactly the same as the CIX scheme, since
it's unique
"""
FrontCover = "FrontCover"
InnerCover = "InnerCover"
Roundup = "Roundup"
Story = "Story"
Advertisement = "Advertisement"
Editorial = "Editorial"
Letters = "Letters"
Preview = "Preview"
BackCover = "BackCover"
Other = "Other"
Deleted = "Deleted"
@dataclasses.dataclass
class PageMetadata:
filename: str
type: str
bookmark: str
display_index: int
archive_index: int
# These are optional because getting this info requires reading in each page
double_page: bool | None = None
byte_size: int | None = None
height: int | None = None
width: int | None = None
def set_type(self, value: str) -> None:
values = {x.casefold(): x for x in PageType}
self.type = values.get(value.casefold(), value)
def is_double_page(self) -> bool:
w = self.width or 0
h = self.height or 0
return self.double_page or (w >= h and w > 0 and h > 0)
def __lt__(self, other: Any) -> bool:
if not isinstance(other, PageMetadata):
return False
return self.archive_index < other.archive_index
def __eq__(self, other: Any) -> bool:
if not isinstance(other, PageMetadata):
return False
return self.archive_index == other.archive_index
def _get_clean_metadata(self, *attributes: str) -> PageMetadata:
return PageMetadata(
filename=self.filename if "filename" in attributes else "",
type=self.type if "type" in attributes else "",
bookmark=self.bookmark if "bookmark" in attributes else "",
display_index=self.display_index if "display_index" in attributes else 0,
archive_index=self.archive_index if "archive_index" in attributes else 0,
double_page=self.double_page if "double_page" in attributes else None,
byte_size=self.byte_size if "byte_size" in attributes else None,
height=self.height if "height" in attributes else None,
width=self.width if "width" in attributes else None,
)
@dataclasses.dataclass
class ComicSeries:
id: str
name: str
aliases: set[str]
count_of_issues: int | None
count_of_volumes: int | None
description: str
image_url: str
publisher: str
start_year: int | None
format: str | None
def copy(self) -> ComicSeries:
return copy.deepcopy(self)
class MetadataOrigin(NamedTuple):
id: str
name: str
def __str__(self) -> str:
return self.name
class ImageHash(NamedTuple):
"""
A valid ImageHash requires at a minimum a Hash and Kind or a URL
If only a URL is given, it will be used for cover matching; otherwise Hash is used
The URL is also required for the GUI to display covers
Available Kinds are "ahash" and "phash"
"""
Hash: int
Kind: str
URL: str
class FileHash(NamedTuple):
name: str
hash: str
def __str__(self) -> str:
return self.name + ":" + self.hash
@classmethod
def parse(cls, string: str) -> FileHash:
name, _, parsed_hash = string.partition(":")
if name in hashlib.algorithms_available:
return FileHash(name, parsed_hash)
return FileHash("", "")
def __bool__(self) -> bool:
return all(self)
@dataclasses.dataclass
class GenericMetadata:
writer_synonyms = ("writer", "plotter", "scripter", "script")
penciller_synonyms = ("artist", "penciller", "penciler", "breakdowns", "pencils", "painting")
inker_synonyms = ("inker", "artist", "finishes", "inks", "painting")
colorist_synonyms = ("colorist", "colourist", "colorer", "colourer", "colors", "painting")
letterer_synonyms = ("letterer", "letters")
cover_synonyms = ("cover", "covers", "coverartist", "cover artist")
editor_synonyms = ("editor", "edits", "editing")
translator_synonyms = ("translator", "translation")
is_empty: bool = True
data_origin: MetadataOrigin | None = None
issue_id: str | None = None
series_id: str | None = None
original_hash: FileHash | None = None
series: str | None = None
series_aliases: set[str] = dataclasses.field(default_factory=set)
issue: str | None = None
issue_count: int | None = None
title: str | None = None
title_aliases: set[str] = dataclasses.field(default_factory=set)
volume: int | None = None
volume_count: int | None = None
genres: set[str] = dataclasses.field(default_factory=set)
description: str | None = None # use same way as Summary in CIX
notes: str | None = None
alternate_series: str | None = None
alternate_number: str | None = None
alternate_count: int | None = None
story_arcs: list[str] = dataclasses.field(default_factory=list)
series_groups: list[str] = dataclasses.field(default_factory=list)
publisher: str | None = None
imprint: str | None = None
day: int | None = None
month: int | None = None
year: int | None = None
language: str | None = None # 2 letter iso code
country: str | None = None
web_links: list[Url] = dataclasses.field(default_factory=list)
format: str | None = None
manga: str | None = None
black_and_white: bool | None = None
maturity_rating: str | None = None
critical_rating: float | None = None # rating in CBL; CommunityRating in CIX
scan_info: str | None = None
tags: set[str] = dataclasses.field(default_factory=set)
pages: list[PageMetadata] = dataclasses.field(default_factory=list)
page_count: int | None = None
characters: set[str] = dataclasses.field(default_factory=set)
teams: set[str] = dataclasses.field(default_factory=set)
locations: set[str] = dataclasses.field(default_factory=set)
credits: list[Credit] = dataclasses.field(default_factory=list)
# Some CoMet-only items
price: float | None = None
is_version_of: str | None = None
rights: str | None = None
identifier: str | None = None
last_mark: str | None = None
# urls to cover image, not generally part of the metadata
_cover_image: ImageHash | None = None
_alternate_images: list[ImageHash] = dataclasses.field(default_factory=list)
def __post_init__(self) -> None:
for key, value in self.__dict__.items():
if value and key != "is_empty":
self.is_empty = False
break
def copy(self) -> GenericMetadata:
return copy.deepcopy(self)
def replace(self, /, **kwargs: Any) -> GenericMetadata:
tmp = self.copy()
tmp.__dict__.update(kwargs)
return tmp
def _get_clean_metadata(self, *attributes: str) -> GenericMetadata:
new_md = GenericMetadata()
list_handled = []
for attr in sorted(attributes):
if "." in attr:
lst, _, name = attr.partition(".")
if lst in list_handled:
continue
old_value = getattr(self, lst)
new_value = getattr(new_md, lst)
if old_value:
if hasattr(old_value[0], "_get_clean_metadata"):
list_attributes = [x.removeprefix(lst + ".") for x in attributes if x.startswith(lst)]
for x in old_value:
new_value.append(x._get_clean_metadata(*list_attributes))
list_handled.append(lst)
continue
if not new_value:
for x in old_value:
new_value.append(x.__class__())
for i, x in enumerate(old_value):
if isinstance(x, dict):
if name in x:
new_value[i][name] = x[name]
else:
setattr(new_value[i], name, getattr(x, name))
else:
old_value = getattr(self, attr)
if isinstance(old_value, list):
continue
setattr(new_md, attr, old_value)
new_md.__post_init__()
return new_md
def overlay(
self, new_md: GenericMetadata, mode: merge.Mode = merge.Mode.OVERLAY, merge_lists: bool = False
) -> None:
"""Overlay a new metadata object on this one"""
attribute_merge = merge.attribute[mode]
list_merge = merge.lists[mode]
def assign(old: Any, new: Any, attribute_merge: Any = attribute_merge) -> Any:
if new is REMOVE:
return None
return attribute_merge(old, new)
def assign_list(old: list[Any] | set[Any], new: list[Any] | set[Any], list_merge: Any = list_merge) -> Any:
if new is REMOVE:
old.clear()
return old
if merge_lists:
return list_merge(old, new)
else:
return assign(old, new)
if not new_md.is_empty:
self.is_empty = False
self.data_origin = assign(self.data_origin, new_md.data_origin) # TODO use and purpose now?
self.issue_id = assign(self.issue_id, new_md.issue_id)
self.series_id = assign(self.series_id, new_md.series_id)
# This should not usually be set by a talker or other online datasource
self.original_hash = assign(self.original_hash, new_md.original_hash)
self.series = assign(self.series, new_md.series)
self.series_aliases = assign_list(self.series_aliases, new_md.series_aliases)
self.issue = assign(self.issue, new_md.issue)
self.issue_count = assign(self.issue_count, new_md.issue_count)
self.title = assign(self.title, new_md.title)
self.title_aliases = assign_list(self.title_aliases, new_md.title_aliases)
self.volume = assign(self.volume, new_md.volume)
self.volume_count = assign(self.volume_count, new_md.volume_count)
self.genres = assign_list(self.genres, new_md.genres)
self.description = assign(self.description, new_md.description)
self.notes = assign(self.notes, new_md.notes)
self.alternate_series = assign(self.alternate_series, new_md.alternate_series)
self.alternate_number = assign(self.alternate_number, new_md.alternate_number)
self.alternate_count = assign(self.alternate_count, new_md.alternate_count)
self.story_arcs = assign_list(self.story_arcs, new_md.story_arcs)
self.series_groups = assign_list(self.series_groups, new_md.series_groups)
self.publisher = assign(self.publisher, new_md.publisher)
self.imprint = assign(self.imprint, new_md.imprint)
self.day = assign(self.day, new_md.day)
self.month = assign(self.month, new_md.month)
self.year = assign(self.year, new_md.year)
self.language = assign(self.language, new_md.language)
self.country = assign(self.country, new_md.country)
self.web_links = assign_list(self.web_links, new_md.web_links)
self.format = assign(self.format, new_md.format)
self.manga = assign(self.manga, new_md.manga)
self.black_and_white = assign(self.black_and_white, new_md.black_and_white)
self.maturity_rating = assign(self.maturity_rating, new_md.maturity_rating)
self.critical_rating = assign(self.critical_rating, new_md.critical_rating)
self.scan_info = assign(self.scan_info, new_md.scan_info)
self.tags = assign_list(self.tags, new_md.tags)
self.characters = assign_list(self.characters, new_md.characters)
self.teams = assign_list(self.teams, new_md.teams)
self.locations = assign_list(self.locations, new_md.locations)
# credits are added through add_credit so that some standard checks are observed
# which means that we need self.credits to be empty
tmp_credits = self.credits
self.credits = []
for c in assign_list(tmp_credits, new_md.credits):
self.add_credit(c)
self.price = assign(self.price, new_md.price)
self.is_version_of = assign(self.is_version_of, new_md.is_version_of)
self.rights = assign(self.rights, new_md.rights)
self.identifier = assign(self.identifier, new_md.identifier)
self.last_mark = assign(self.last_mark, new_md.last_mark)
self._cover_image = assign(self._cover_image, new_md._cover_image)
self._alternate_images = assign_list(self._alternate_images, new_md._alternate_images)
# pages don't get merged; if we did merge, we would end up with duplicate pages
self.pages = assign(self.pages, new_md.pages)
self.page_count = assign(self.page_count, new_md.page_count)
def apply_default_page_list(self, page_list: Sequence[str]) -> None:
"""apply a default page list, with the first page marked as the cover"""
# Create a dictionary in the weird case that the metadata doesn't match the archive
pages = {p.archive_index: p for p in self.pages}
cover_set = False
# It might be a good idea to validate that each page in `pages` is found in page_list
for i, filename in enumerate(page_list):
page = pages.get(i, PageMetadata(archive_index=i, display_index=i, filename="", type="", bookmark=""))
page.filename = filename
pages[i] = page
# Check if we know what the cover is
cover_set = page.type == PageType.FrontCover or cover_set
self.pages = sorted(pages.values())
self.page_count = len(self.pages)
if self.page_count != len(page_list):
logger.warning("Wrong count of pages: expected %d got %d", len(self.pages), len(page_list))
# Set the cover to the first image according to the display index if we don't know what the cover is
if not cover_set:
first_page = self.get_archive_page_index(0)
self.pages[first_page].type = PageType.FrontCover
def get_archive_page_index(self, pagenum: int) -> int:
"""convert the displayed page number to the page index of the file in the archive"""
if pagenum < len(self.pages):
return int(sorted(self.pages, key=lambda p: p.display_index)[pagenum].archive_index)
return 0
def get_cover_page_index_list(self) -> list[int]:
# return a list of archive page indices of cover pages
if not self.pages:
return [0]
coverlist = []
for p in self.pages:
if p.type == PageType.FrontCover:
coverlist.append(p.archive_index)
if len(coverlist) == 0:
coverlist.append(self.get_archive_page_index(0))
return coverlist
@overload
def add_credit(self, person: Credit) -> None: ...
@overload
def add_credit(self, person: str, role: str, primary: bool = False, language: str = "") -> None: ...
def add_credit(
self, person: str | Credit, role: str | None = None, primary: bool = False, language: str = ""
) -> None:
credit: Credit
if isinstance(person, Credit):
credit = person
else:
assert role is not None
credit = Credit(person=person, role=role, primary=primary, language=language)
if credit.role is None:
raise TypeError("GenericMetadata.add_credit takes either a Credit object or a person name and role")
if credit.person == "":
return
person = norm_fold(credit.person)
role = norm_fold(credit.role)
# look to see if it's not already there...
found = False
for c in self.credits:
if norm_fold(c.person) == person and norm_fold(c.role) == role:
# no need to add it. just adjust the "primary" flag as needed
c.primary = c.primary or primary
found = True
break
if not found:
self.credits.append(credit)
def get_primary_credit(self, role: str) -> str:
primary = ""
for credit in self.credits:
if (primary == "" and credit.role.casefold() == role.casefold()) or (
credit.role.casefold() == role.casefold() and credit.primary
):
primary = credit.person
return primary
def __str__(self) -> str:
vals: list[tuple[str, Any]] = []
if self.is_empty:
return "No metadata"
def add_string(tag: str, val: Any) -> None:
if isinstance(val, (Sequence, set)):
if val:
vals.append((tag, val))
elif val is not None:
vals.append((tag, val))
add_string("data_origin", self.data_origin)
add_string("series", self.series)
add_string("original_hash", self.original_hash)
add_string("series_aliases", ",".join(self.series_aliases))
add_string("issue", self.issue)
add_string("issue_count", self.issue_count)
add_string("title", self.title)
add_string("title_aliases", ",".join(self.title_aliases))
add_string("publisher", self.publisher)
add_string("year", self.year)
add_string("month", self.month)
add_string("day", self.day)
add_string("volume", self.volume)
add_string("volume_count", self.volume_count)
add_string("genres", ", ".join(self.genres))
add_string("language", self.language)
add_string("country", self.country)
add_string("critical_rating", self.critical_rating)
add_string("alternate_series", self.alternate_series)
add_string("alternate_number", self.alternate_number)
add_string("alternate_count", self.alternate_count)
add_string("imprint", self.imprint)
add_string("web_links", [str(x) for x in self.web_links])
add_string("format", self.format)
add_string("manga", self.manga)
add_string("price", self.price)
add_string("is_version_of", self.is_version_of)
add_string("rights", self.rights)
add_string("identifier", self.identifier)
add_string("last_mark", self.last_mark)
if self.black_and_white:
add_string("black_and_white", self.black_and_white)
add_string("maturity_rating", self.maturity_rating)
add_string("story_arcs", self.story_arcs)
add_string("series_groups", self.series_groups)
add_string("scan_info", self.scan_info)
add_string("characters", ", ".join(self.characters))
add_string("teams", ", ".join(self.teams))
add_string("locations", ", ".join(self.locations))
add_string("description", self.description)
add_string("notes", self.notes)
add_string("tags", ", ".join(self.tags))
for c in self.credits:
primary = ""
if c.primary:
primary = " [P]"
add_string("credit", f"{c}{primary}")
# find the longest field name
flen = 0
for i in vals:
flen = max(flen, len(i[0]))
flen += 1
# format the data nicely
outstr = ""
fmt_str = "{0: <" + str(flen) + "} {1}\n"
for i in vals:
outstr += fmt_str.format(i[0] + ":", i[1])
return outstr
def fix_publisher(self) -> None:
if self.publisher is None:
return
if self.imprint is None:
self.imprint = ""
imprint, publisher = utils.get_publisher(self.publisher)
self.publisher = publisher
if self.imprint.casefold() in publisher.casefold():
self.imprint = None
if self.imprint is None or self.imprint == "":
self.imprint = imprint
elif self.imprint.casefold() in imprint.casefold():
self.imprint = imprint
md_test: GenericMetadata = GenericMetadata(
is_empty=False,
data_origin=MetadataOrigin("comicvine", "Comic Vine"),
series="Cory Doctorow's Futuristic Tales of the Here and Now",
series_id="23437",
issue="1",
issue_id="140529",
title="Anda's Game",
publisher="IDW Publishing",
month=10,
year=2007,
day=1,
issue_count=6,
volume=1,
genres={"Sci-Fi"},
language="en",
description=(
"For 12-year-old Anda, getting paid real money to kill the characters of players who were cheating"
" in her favorite online computer game was a win-win situation. Until she found out who was paying her,"
" and what those characters meant to the livelihood of children around the world."
),
volume_count=None,
critical_rating=3.0,
country=None,
alternate_series="Tales",
alternate_number="2",
alternate_count=7,
imprint="craphound.com",
notes="Tagged with ComicTagger 1.3.2a5 using info from Comic Vine on 2022-04-16 15:52:26. [Issue ID 140529]",
web_links=[
parse_url("https://comicvine.gamespot.com/cory-doctorows-futuristic-tales-of-the-here-and-no/4000-140529/")
],
format="Series",
manga="No",
black_and_white=None,
page_count=24,
maturity_rating="Everyone 10+",
story_arcs=["Here and Now"],
series_groups=["Futuristic Tales"],
scan_info="(CC BY-NC-SA 3.0)",
characters={"Anda"},
teams={"Fahrenheit"},
locations=set(utils.split("lonely cottage ", ",")),
credits=[
Credit(primary=False, person="Dara Naraghi", role="Writer"),
Credit(primary=False, person="Esteve Polls", role="Penciller"),
Credit(primary=False, person="Esteve Polls", role="Inker"),
Credit(primary=False, person="Neil Uyetake", role="Letterer"),
Credit(primary=False, person="Sam Kieth", role="Cover"),
Credit(primary=False, person="Ted Adams", role="Editor"),
],
tags=set(),
pages=[
PageMetadata(
archive_index=0,
display_index=0,
height=1280,
byte_size=195977,
width=800,
type=PageType.FrontCover,
filename="!cover.jpg",
bookmark="",
),
PageMetadata(
archive_index=1,
display_index=1,
height=2039,
byte_size=611993,
width=1327,
filename="01.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=2,
display_index=2,
height=2039,
byte_size=783726,
width=1327,
filename="02.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=3,
display_index=3,
height=2039,
byte_size=679584,
width=1327,
filename="03.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=4,
display_index=4,
height=2039,
byte_size=788179,
width=1327,
filename="04.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=5,
display_index=5,
height=2039,
byte_size=864433,
width=1327,
filename="05.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=6,
display_index=6,
height=2039,
byte_size=765606,
width=1327,
filename="06.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=7,
display_index=7,
height=2039,
byte_size=876427,
width=1327,
filename="07.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=8,
display_index=8,
height=2039,
byte_size=852622,
width=1327,
filename="08.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=9,
display_index=9,
height=2039,
byte_size=800205,
width=1327,
filename="09.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=10,
display_index=10,
height=2039,
byte_size=746243,
width=1326,
filename="10.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=11,
display_index=11,
height=2039,
byte_size=718062,
width=1327,
filename="11.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=12,
display_index=12,
height=2039,
byte_size=532179,
width=1326,
filename="12.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=13,
display_index=13,
height=2039,
byte_size=686708,
width=1327,
filename="13.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=14,
display_index=14,
height=2039,
byte_size=641907,
width=1327,
filename="14.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=15,
display_index=15,
height=2039,
byte_size=805388,
width=1327,
filename="15.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=16,
display_index=16,
height=2039,
byte_size=668927,
width=1326,
filename="16.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=17,
display_index=17,
height=2039,
byte_size=710605,
width=1327,
filename="17.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=18,
display_index=18,
height=2039,
byte_size=761398,
width=1326,
filename="18.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=19,
display_index=19,
height=2039,
byte_size=743807,
width=1327,
filename="19.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=20,
display_index=20,
height=2039,
byte_size=552911,
width=1326,
filename="20.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=21,
display_index=21,
height=2039,
byte_size=556827,
width=1327,
filename="21.jpg",
bookmark="",
type="",
),
PageMetadata(
archive_index=22,
display_index=22,
height=2039,
byte_size=675078,
width=1326,
filename="22.jpg",
bookmark="",
type="",
),
PageMetadata(
bookmark="Interview",
archive_index=23,
display_index=23,
height=2032,
byte_size=800965,
width=1338,
type=PageType.Letters,
filename="23.jpg",
),
],
price=None,
is_version_of=None,
rights=None,
identifier=None,
last_mark=None,
_cover_image=None,
)
__all__ = (
"Url",
"parse_url",
"PageType",
"PageMetadata",
"Credit",
"ComicSeries",
"MetadataOrigin",
"GenericMetadata",
)
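
A hedged sketch (not part of the diffed file) of GenericMetadata.overlay(): non-empty attributes from the new object replace the old ones, and set-valued fields are merged case-insensitively when merge_lists=True. All values are illustrative.

from comicapi.genericmetadata import GenericMetadata

local = GenericMetadata(series="Example Series", issue="1", genres={"Sci-Fi"})
online = GenericMetadata(title="Pilot", genres={"Adventure"})
local.overlay(online, merge_lists=True)
print(local.series, local.issue, local.title)  # Example Series 1 Pilot
print(sorted(local.genres))                    # ['Adventure', 'Sci-Fi']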

130
comicapi/issuestring.py Normal file

@@ -0,0 +1,130 @@
"""Support for mixed digit/string type Issue field
Class for handling the odd permutations of an 'issue number' that the
comics industry throws at us.
e.g.: "12", "12.1", "0", "-1", "5AU", "100-2"
"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import unicodedata
logger = logging.getLogger(__name__)
class IssueString:
def __init__(self, text: str | None) -> None:
# break up the issue number string into 2 parts: the numeric and suffix string.
# (assumes that the numeric portion is always first)
self.num = None
self.suffix = ""
self.prefix = ""
if text is None:
return
text = str(text)
if len(text) == 0:
return
for idx, r in enumerate(text):
if not r.isalpha():
break
self.prefix = text[:idx]
self.num, self.suffix = self.get_number(text[idx:])
def get_number(self, text: str) -> tuple[float | None, str]:
num, suffix = None, ""
start = 0
# skip the minus sign if it's first
if text[0] in ("-", "+"):
start = 1
# if it's still not numeric at the start, skip it
if text[start].isdigit() or text[start] == ".":
# walk through the string, look for split point (the first non-numeric)
decimal_count = 0
for idx in range(start, len(text)):
if not (text[idx].isdigit() or text[idx] in "."):
break
# special case: also split on second "."
if text[idx] == ".":
decimal_count += 1
if decimal_count > 1:
break
else:
idx = len(text)
# move trailing numeric decimal to suffix
# (only if there is other junk after )
if text[idx - 1] == "." and len(text) != idx:
idx = idx - 1
# if there is no numeric after the minus, make the minus part of the suffix
if idx == 1 and start == 1:
idx = 0
if text[0:idx]:
num = float(text[0:idx])
suffix = text[idx : len(text)]
else:
suffix = text
return num, suffix
def as_string(self, pad: int = 0) -> str:
"""return the number, left side zero-padded, with suffix attached"""
# if there is no number return the text
if self.num is None:
return self.prefix + self.suffix
# negative is added back in last
negative = self.num < 0
num_f = abs(self.num)
# used for padding
num_int = int(num_f)
if num_f.is_integer():
num_s = str(num_int)
else:
num_s = str(num_f)
# create padding
padding = ""
# we only pad the whole number part, we don't care about the decimal
length = len(str(num_int))
if length < pad:
padding = "0" * (pad - length)
# add the padding to the front
num_s = padding + num_s
# finally add the negative back in
if negative:
num_s = "-" + num_s
# return the prefix + formatted number + suffix
return self.prefix + num_s + self.suffix
def as_float(self) -> float | None:
# return the float, with no suffix
if len(self.suffix) == 1 and self.suffix.isnumeric():
return (self.num or 0) + unicodedata.numeric(self.suffix)
return self.num
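
A quick usage sketch of the IssueString class above (illustrative, not part of the diff); the expected values follow directly from get_number() and as_string() as written.

from comicapi.issuestring import IssueString

issue = IssueString("5AU")
assert issue.num == 5.0 and issue.suffix == "AU"
assert issue.as_string(pad=3) == "005AU"  # only the whole-number part is zero-padded

half = IssueString("-1.5")
assert half.as_float() == -1.5 and half.as_string() == "-1.5"

# A minus sign with no digits after it stays in the suffix.
assert IssueString("-A").num is None and IssueString("-A").suffix == "-A"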

72
comicapi/merge.py Normal file
View File

@@ -0,0 +1,72 @@
from __future__ import annotations
import dataclasses
from collections.abc import Collection
from enum import auto
from typing import Any, Callable
from comicapi.utils import DefaultDict, StrEnum, norm_fold
@dataclasses.dataclass
class Credit:
person: str = ""
role: str = ""
primary: bool = False
language: str = "" # Should be ISO 639 language code
def __str__(self) -> str:
lang = ""
if self.language:
lang = f" [{self.language}]"
return f"{self.role}: {self.person}{lang}"
class Mode(StrEnum):
OVERLAY = auto()
ADD_MISSING = auto()
def merge_lists(old: Collection[Any], new: Collection[Any]) -> list[Any] | set[Any]:
"""Dedupes normalised (NFKD), casefolded values using 'new' values on collisions"""
if len(new) == 0:
return old if isinstance(old, set) else list(old)
if len(old) == 0:
return new if isinstance(new, set) else list(new)
# Create dict to preserve case
new_dict = {norm_fold(str(n)): n for n in new}
old_dict = {norm_fold(str(c)): c for c in old}
old_dict.update(new_dict)
if isinstance(old, set):
return set(old_dict.values())
return list(old_dict.values())
def overlay(old: Any, new: Any) -> Any:
"""overlay - When the `new` object is not empty, replace `old` with `new`."""
if new is None or (isinstance(new, Collection) and len(new) == 0):
return old
return new
attribute: DefaultDict[Mode, Callable[[Any, Any], Any]] = DefaultDict(
{
Mode.OVERLAY: overlay,
Mode.ADD_MISSING: lambda old, new: overlay(new, old),
},
default=lambda x: overlay,
)
lists: DefaultDict[Mode, Callable[[Collection[Any], Collection[Any]], list[Any] | set[Any]]] = DefaultDict(
{
Mode.OVERLAY: merge_lists,
Mode.ADD_MISSING: lambda old, new: merge_lists(new, old),
},
default=lambda x: overlay,
)
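
An illustrative sketch (not part of the diff) of how the merge helpers above behave: merge_lists dedupes on NFKD-normalised, casefolded keys and keeps the "new" spelling on collisions, while ADD_MISSING reverses the argument order so existing values win.

from comicapi.merge import Mode, attribute, merge_lists

old = ["Alan Moore", "dave gibbons"]
new = ["Dave Gibbons"]
assert merge_lists(old, new) == ["Alan Moore", "Dave Gibbons"]

# Scalar fields: OVERLAY keeps the new value when it is non-empty,
# ADD_MISSING only fills in values that were previously empty.
assert attribute[Mode.OVERLAY]("old title", "new title") == "new title"
assert attribute[Mode.ADD_MISSING]("old title", "new title") == "old title"
assert attribute[Mode.ADD_MISSING]("", "new title") == "new title"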

View File

@@ -0,0 +1,5 @@
from __future__ import annotations
from comicapi.tags.tag import Tag
__all__ = ["Tag"]

416
comicapi/tags/comicrack.py Normal file
View File

@@ -0,0 +1,416 @@
"""A class to encapsulate ComicRack's ComicInfo.xml data"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import xml.etree.ElementTree as ET
from typing import Any
from comicapi import utils
from comicapi.archivers import Archiver
from comicapi.genericmetadata import FileHash, GenericMetadata, PageMetadata
from comicapi.tags import Tag
logger = logging.getLogger(__name__)
class ComicRack(Tag):
enabled = True
id = "cr"
def __init__(self, version: str) -> None:
super().__init__(version)
self.file = "ComicInfo.xml"
self.supported_attributes = {
"original_hash",
"series",
"issue",
"issue_count",
"title",
"volume",
"genres",
"description",
"notes",
"alternate_series",
"alternate_number",
"alternate_count",
"story_arcs",
"series_groups",
"publisher",
"imprint",
"day",
"month",
"year",
"language",
"web_links",
"format",
"manga",
"black_and_white",
"maturity_rating",
"critical_rating",
"scan_info",
"pages",
"pages.bookmark",
"pages.double_page",
"pages.height",
"pages.image_index",
"pages.size",
"pages.type",
"pages.width",
"page_count",
"characters",
"teams",
"locations",
"credits",
"credits.person",
"credits.role",
}
def supports_credit_role(self, role: str) -> bool:
return role.casefold() in self._get_parseable_credits()
def supports_tags(self, archive: Archiver) -> bool:
return archive.supports_files()
def has_tags(self, archive: Archiver) -> bool:
try: # read_file can cause an exception
return (
self.supports_tags(archive)
and self.file in archive.get_filename_list()
and self._validate_bytes(archive.read_file(self.file))
)
except Exception:
return False
def remove_tags(self, archive: Archiver) -> bool:
return self.has_tags(archive) and archive.remove_file(self.file)
def read_tags(self, archive: Archiver) -> GenericMetadata:
if self.has_tags(archive):
try: # read_file can cause an exception
metadata = archive.read_file(self.file) or b""
if self._validate_bytes(metadata):
return self._metadata_from_bytes(metadata)
except Exception:
...
return GenericMetadata()
def read_raw_tags(self, archive: Archiver) -> str:
try: # read_file can cause an exception
if self.has_tags(archive):
b = archive.read_file(self.file)
# ET.fromstring is used as xml can declare the encoding
return ET.tostring(ET.fromstring(b), encoding="unicode", xml_declaration=True)
except Exception:
...
return ""
def write_tags(self, metadata: GenericMetadata, archive: Archiver) -> bool:
if self.supports_tags(archive):
xml = b""
try: # read_file can cause an exception
if self.has_tags(archive):
xml = archive.read_file(self.file)
return archive.write_file(self.file, self._bytes_from_metadata(metadata, xml))
except Exception:
...
else:
logger.warning(f"Archive ({archive.name()}) does not support {self.name()} metadata")
return False
def name(self) -> str:
return "Comic Rack"
@classmethod
def _get_parseable_credits(cls) -> list[str]:
parsable_credits: list[str] = []
parsable_credits.extend(GenericMetadata.writer_synonyms)
parsable_credits.extend(GenericMetadata.penciller_synonyms)
parsable_credits.extend(GenericMetadata.inker_synonyms)
parsable_credits.extend(GenericMetadata.colorist_synonyms)
parsable_credits.extend(GenericMetadata.letterer_synonyms)
parsable_credits.extend(GenericMetadata.cover_synonyms)
parsable_credits.extend(GenericMetadata.editor_synonyms)
return parsable_credits
def _metadata_from_bytes(self, string: bytes) -> GenericMetadata:
root = ET.fromstring(string)
return self._convert_xml_to_metadata(root)
def _bytes_from_metadata(self, metadata: GenericMetadata, xml: bytes = b"") -> bytes:
root = self._convert_metadata_to_xml(metadata, xml)
return ET.tostring(root, encoding="utf-8", xml_declaration=True)
def _convert_metadata_to_xml(self, metadata: GenericMetadata, xml: bytes = b"") -> ET.Element:
# shorthand for the metadata
md = metadata
if xml:
root = ET.fromstring(xml)
else:
# build a tree structure
root = ET.Element("ComicInfo")
root.attrib["xmlns:xsi"] = "http://www.w3.org/2001/XMLSchema-instance"
root.attrib["xmlns:xsd"] = "http://www.w3.org/2001/XMLSchema"
# helper func
def assign(cr_entry: str, md_entry: Any) -> None:
if md_entry:
text = str(md_entry)
if isinstance(md_entry, (list, set)):
text = ",".join(md_entry)
et_entry = root.find(cr_entry)
if et_entry is not None:
et_entry.text = text
else:
ET.SubElement(root, cr_entry).text = text
else:
et_entry = root.find(cr_entry)
if et_entry is not None:
root.remove(et_entry)
# need to specially process the credits, since they are structured
# differently than CIX
credit_writer_list = []
credit_penciller_list = []
credit_inker_list = []
credit_colorist_list = []
credit_letterer_list = []
credit_cover_list = []
credit_editor_list = []
# first, loop thru credits, and build a list for each role that CIX
# supports
for credit in metadata.credits:
if credit.role.casefold() in set(GenericMetadata.writer_synonyms):
credit_writer_list.append(credit.person.replace(",", ""))
if credit.role.casefold() in set(GenericMetadata.penciller_synonyms):
credit_penciller_list.append(credit.person.replace(",", ""))
if credit.role.casefold() in set(GenericMetadata.inker_synonyms):
credit_inker_list.append(credit.person.replace(",", ""))
if credit.role.casefold() in set(GenericMetadata.colorist_synonyms):
credit_colorist_list.append(credit.person.replace(",", ""))
if credit.role.casefold() in set(GenericMetadata.letterer_synonyms):
credit_letterer_list.append(credit.person.replace(",", ""))
if credit.role.casefold() in set(GenericMetadata.cover_synonyms):
credit_cover_list.append(credit.person.replace(",", ""))
if credit.role.casefold() in set(GenericMetadata.editor_synonyms):
credit_editor_list.append(credit.person.replace(",", ""))
assign("Series", md.series)
assign("Number", md.issue)
assign("Count", md.issue_count)
assign("Title", md.title)
assign("Volume", md.volume)
assign("Genre", md.genres)
assign("Summary", md.description)
assign("Notes", md.notes)
assign("AlternateSeries", md.alternate_series)
assign("AlternateNumber", md.alternate_number)
assign("AlternateCount", md.alternate_count)
assign("StoryArc", md.story_arcs)
assign("SeriesGroup", md.series_groups)
assign("Publisher", md.publisher)
assign("Imprint", md.imprint)
assign("Day", md.day)
assign("Month", md.month)
assign("Year", md.year)
assign("LanguageISO", md.language)
assign("Web", " ".join(u.url for u in md.web_links))
assign("Format", md.format)
assign("Manga", md.manga)
assign("BlackAndWhite", "Yes" if md.black_and_white else None)
assign("AgeRating", md.maturity_rating)
assign("CommunityRating", md.critical_rating)
scan_info = md.scan_info or ""
if md.original_hash:
scan_info += f" sum:{md.original_hash}"
assign("ScanInformation", scan_info)
assign("PageCount", md.page_count)
assign("Characters", md.characters)
assign("Teams", md.teams)
assign("Locations", md.locations)
assign("Writer", ", ".join(credit_writer_list))
assign("Penciller", ", ".join(credit_penciller_list))
assign("Inker", ", ".join(credit_inker_list))
assign("Colorist", ", ".join(credit_colorist_list))
assign("Letterer", ", ".join(credit_letterer_list))
assign("CoverArtist", ", ".join(credit_cover_list))
assign("Editor", ", ".join(credit_editor_list))
# loop and add the page entries under pages node
pages_node = root.find("Pages")
if pages_node is not None:
pages_node.clear()
else:
pages_node = ET.SubElement(root, "Pages")
for page in sorted(md.pages, key=lambda x: x.archive_index):
page_node = ET.SubElement(pages_node, "Page")
page_node.attrib = {"Image": str(page.display_index)}
if page.bookmark:
page_node.attrib["Bookmark"] = page.bookmark
if page.type:
page_node.attrib["Type"] = page.type
if page.double_page is not None:
page_node.attrib["DoublePage"] = str(page.double_page)
if page.height is not None:
page_node.attrib["ImageHeight"] = str(page.height)
if page.byte_size is not None:
page_node.attrib["ImageSize"] = str(page.byte_size)
if page.width is not None:
page_node.attrib["ImageWidth"] = str(page.width)
page_node.attrib = dict(sorted(page_node.attrib.items()))
ET.indent(root)
return root
def _convert_xml_to_metadata(self, root: ET.Element) -> GenericMetadata:
if root.tag != "ComicInfo":
raise Exception("Not a ComicInfo file")
def get(name: str) -> str | None:
tag = root.find(name)
if tag is None:
return None
return tag.text
md = GenericMetadata()
md.series = utils.xlate(get("Series"))
md.issue = utils.xlate(get("Number"))
md.issue_count = utils.xlate_int(get("Count"))
md.title = utils.xlate(get("Title"))
md.volume = utils.xlate_int(get("Volume"))
md.genres = set(utils.split(get("Genre"), ","))
md.description = utils.xlate(get("Summary"))
md.notes = utils.xlate(get("Notes"))
md.alternate_series = utils.xlate(get("AlternateSeries"))
md.alternate_number = utils.xlate(get("AlternateNumber"))
md.alternate_count = utils.xlate_int(get("AlternateCount"))
md.story_arcs = utils.split(get("StoryArc"), ",")
md.series_groups = utils.split(get("SeriesGroup"), ",")
md.publisher = utils.xlate(get("Publisher"))
md.imprint = utils.xlate(get("Imprint"))
md.day = utils.xlate_int(get("Day"))
md.month = utils.xlate_int(get("Month"))
md.year = utils.xlate_int(get("Year"))
md.language = utils.xlate(get("LanguageISO"))
md.web_links = utils.split_urls(utils.xlate(get("Web")))
md.format = utils.xlate(get("Format"))
md.manga = utils.xlate(get("Manga"))
md.maturity_rating = utils.xlate(get("AgeRating"))
md.critical_rating = utils.xlate_float(get("CommunityRating"))
scan_info_list = (utils.xlate(get("ScanInformation")) or "").split()
for word in scan_info_list.copy():
if not word.startswith("sum:"):
continue
original_hash = FileHash.parse(word[4:])
if original_hash:
md.original_hash = original_hash
scan_info_list.remove(word)
break
if scan_info_list:
md.scan_info = " ".join(scan_info_list)
md.is_empty = False
md.page_count = utils.xlate_int(get("PageCount"))
md.characters = set(utils.split(get("Characters"), ","))
md.teams = set(utils.split(get("Teams"), ","))
md.locations = set(utils.split(get("Locations"), ","))
tmp = utils.xlate(get("BlackAndWhite"))
if tmp is not None:
md.black_and_white = tmp.casefold() in ["yes", "true", "1"]
# Now extract the credit info
for n in root:
if any(
[
n.tag == "Writer",
n.tag == "Penciller",
n.tag == "Inker",
n.tag == "Colorist",
n.tag == "Letterer",
n.tag == "Editor",
]
):
if n.text is not None:
for name in utils.split(n.text, ","):
md.add_credit(name.strip(), n.tag)
if n.tag == "CoverArtist":
if n.text is not None:
for name in utils.split(n.text, ","):
md.add_credit(name.strip(), "Cover")
# parse page data now
pages_node = root.find("Pages")
if pages_node is not None:
for i, page in enumerate(pages_node):
p: dict[str, Any] = page.attrib
md_page = PageMetadata(
filename="", # cr doesn't record the filename it just assumes it's always ordered the same
display_index=int(p.get("Image", i)),
archive_index=i,
bookmark=p.get("Bookmark", ""),
type="",
)
md_page.set_type(p.get("Type", ""))
if isinstance(p.get("DoublePage", None), str):
md_page.double_page = p["DoublePage"].casefold() in ("yes", "true", "1")
if p.get("ImageHeight", "").isnumeric():
md_page.height = int(float(p["ImageHeight"]))
if p.get("ImageWidth", "").isnumeric():
md_page.width = int(float(p["ImageWidth"]))
if p.get("ImageSize", "").isnumeric():
md_page.byte_size = int(float(p["ImageSize"]))
md.pages.append(md_page)
md.is_empty = False
return md
def _validate_bytes(self, string: bytes) -> bool:
"""verify that the string actually contains CIX data in XML format"""
try:
root = ET.fromstring(string)
if root.tag != "ComicInfo":
return False
except ET.ParseError:
return False
return True
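
An illustrative sketch (not part of the diff): round-tripping a minimal ComicInfo.xml through the ComicRack class above. _metadata_from_bytes and _bytes_from_metadata are internal helpers, used here only because they show the field mapping without needing an Archiver; the version string passed to the constructor is arbitrary.

from comicapi.tags.comicrack import ComicRack

cr = ComicRack(version="0.0.0")  # version is arbitrary for this sketch
xml = b"""<?xml version="1.0"?>
<ComicInfo>
  <Series>Watchmen</Series>
  <Number>1</Number>
  <Writer>Alan Moore</Writer>
  <Genre>Superhero,Mystery</Genre>
</ComicInfo>"""

md = cr._metadata_from_bytes(xml)
assert md.series == "Watchmen" and md.issue == "1"
assert md.genres == {"Superhero", "Mystery"}
assert any(c.person == "Alan Moore" for c in md.credits)

# Writing back produces a ComicInfo document containing the same fields.
out = cr._bytes_from_metadata(md)
assert b"<Series>Watchmen</Series>" in out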

125
comicapi/tags/tag.py Normal file
View File

@@ -0,0 +1,125 @@
from __future__ import annotations
from comicapi.archivers import Archiver
from comicapi.genericmetadata import GenericMetadata
class Tag:
enabled: bool = False
id: str = ""
def __init__(self, version: str) -> None:
self.version: str = version
self.supported_attributes = {
"data_origin",
"issue_id",
"series_id",
"original_hash",
"series",
"series_aliases",
"issue",
"issue_count",
"title",
"title_aliases",
"volume",
"volume_count",
"genres",
"description",
"notes",
"alternate_series",
"alternate_number",
"alternate_count",
"story_arcs",
"series_groups",
"publisher",
"imprint",
"day",
"month",
"year",
"language",
"country",
"web_link",
"format",
"manga",
"black_and_white",
"maturity_rating",
"critical_rating",
"scan_info",
"tags",
"pages",
"pages.type",
"pages.bookmark",
"pages.double_page",
"pages.image_index",
"pages.size",
"pages.height",
"pages.width",
"page_count",
"characters",
"teams",
"locations",
"credits",
"credits.person",
"credits.role",
"credits.primary",
"credits.language",
"price",
"is_version_of",
"rights",
"identifier",
"last_mark",
}
def supports_credit_role(self, role: str) -> bool:
return False
def supports_tags(self, archive: Archiver) -> bool:
"""
Checks the given archive for the ability to save these tags.
Should always return a bool. Failures should return False.
Typically consists of a call to either `archive.supports_comment` or `archive.supports_files`
"""
return False
def has_tags(self, archive: Archiver) -> bool:
"""
Checks the given archive for tags.
Should always return a bool. Failures should return False.
"""
return False
def remove_tags(self, archive: Archiver) -> bool:
"""
Removes the tags from the given archive.
Should always return a bool. Failures should return False.
"""
return False
def read_tags(self, archive: Archiver) -> GenericMetadata:
"""
Returns a GenericMetadata representing the tags saved in the given archive.
Should always return a GenericMetadata. Failures should return an empty metadata object.
"""
return GenericMetadata()
def read_raw_tags(self, archive: Archiver) -> str:
"""
Returns the raw tags as a string.
If the tags are in a binary format, a roughly similar text format should be used.
Should always return a string. Failures should return the empty string.
"""
return ""
def write_tags(self, metadata: GenericMetadata, archive: Archiver) -> bool:
"""
Saves the given metadata to the given archive.
Should always return a bool. Failures should return False.
"""
return False
def name(self) -> str:
"""
Returns the name of these tags for display purposes, e.g. "Comic Rack".
Should always return a string. Failures should return the empty string.
"""
return ""

714
comicapi/utils.py Normal file
View File

@@ -0,0 +1,714 @@
"""Some generic utilities"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import difflib
import hashlib
import json
import logging
import os
import pathlib
import platform
import sys
import unicodedata
from collections.abc import Iterable, Mapping, Sequence
from enum import Enum, auto
from shutil import which # noqa: F401
from typing import Any, Callable, TypeVar, cast
from comicfn2dict import comicfn2dict
import comicapi.data
from comicapi import filenamelexer, filenameparser
from comicapi._url import LocationParseError as LocationParseError # noqa: F401
from comicapi._url import Url as Url
from comicapi._url import parse_url as parse_url
try:
import icu
del icu
icu_available = True
except ImportError:
icu_available = False
if sys.version_info < (3, 11):
def file_digest(fileobj, digest, /, *, _bufsize=2**18): # type: ignore[no-untyped-def]
"""Hash the contents of a file-like object. Returns a digest object.
*fileobj* must be a file-like object opened for reading in binary mode.
It accepts file objects from open(), io.BytesIO(), and SocketIO objects.
The function may bypass Python's I/O and use the file descriptor *fileno*
directly.
*digest* must either be a hash algorithm name as a *str*, a hash
constructor, or a callable that returns a hash object.
"""
# On Linux we could use AF_ALG sockets and sendfile() to achieve zero-copy
# hashing with hardware acceleration.
if isinstance(digest, str):
digestobj = hashlib.new(digest)
else:
digestobj = digest()
if hasattr(fileobj, "getbuffer"):
# io.BytesIO object, use zero-copy buffer
digestobj.update(fileobj.getbuffer())
return digestobj
# Only binary files implement readinto().
if not (hasattr(fileobj, "readinto") and hasattr(fileobj, "readable") and fileobj.readable()):
raise ValueError(f"'{fileobj!r}' is not a file-like object in binary reading mode.")
# binary file, socket.SocketIO object
# Note: socket I/O uses different syscalls than file I/O.
buf = bytearray(_bufsize) # Reusable buffer to reduce allocations.
view = memoryview(buf)
while True:
size = fileobj.readinto(buf)
if size == 0:
break # EOF
digestobj.update(view[:size])
return digestobj
class StrEnum(str, Enum):
"""
Enum where members are also (and must be) strings
"""
def __new__(cls, *values: Any) -> Any:
"values must already be of type `str`"
if len(values) > 3:
raise TypeError(f"too many arguments for str(): {values!r}")
if len(values) == 1:
# it must be a string
if not isinstance(values[0], str):
raise TypeError(f"{values[0]!r} is not a string")
if len(values) >= 2:
# check that encoding argument is a string
if not isinstance(values[1], str):
raise TypeError(f"encoding must be a string, not {values[1]!r}")
if len(values) == 3:
# check that errors argument is a string
if not isinstance(values[2], str):
raise TypeError("errors must be a string, not %r" % (values[2]))
value = str(*values)
member = str.__new__(cls, value)
member._value_ = value
return member
@staticmethod
def _generate_next_value_(name: str, start: int, count: int, last_values: Any) -> str:
"""
Return the lower-cased version of the member name.
"""
return name.lower()
@classmethod
def _missing_(cls, value: Any) -> str | None:
if not isinstance(value, str):
return None
if not hasattr(cls, "_lower_members"):
cls._lower_members = {x.casefold(): x for x in cls} # type: ignore[attr-defined]
return cls._lower_members.get(value.casefold(), None) # type: ignore[attr-defined]
def __str__(self) -> str:
return self.value
else:
from enum import StrEnum as _StrEnum
from hashlib import file_digest
class StrEnum(_StrEnum):
@classmethod
def _missing_(cls, value: Any) -> str | None:
if not isinstance(value, str):
return None
if not hasattr(cls, "_lower_members"):
cls._lower_members = {x.casefold(): x for x in cls} # type: ignore[attr-defined]
return cls._lower_members.get(value.casefold(), None) # type: ignore[attr-defined]
logger = logging.getLogger(__name__)
_KT = TypeVar("_KT")
_VT = TypeVar("_VT")
class DefaultDict(dict[_KT, _VT]):
def __init__(self, *args, default: Callable[[_KT], _VT | _KT] | None = None, **kwargs) -> None: # type: ignore[no-untyped-def]
super().__init__(*args, **kwargs)
self.default = default
def __missing__(self, key: _KT) -> _VT | _KT:
if self.default is None:
return key
return self.default(key)
class Parser(StrEnum):
ORIGINAL = auto()
COMPLICATED = auto()
COMICFN2DICT = auto()
def _custom_key(tup: Any) -> Any:
import natsort
lst = []
for x in natsort.os_sort_keygen()(tup):
ret = x
if isinstance(x, Sequence) and len(x) > 1 and isinstance(x[1], int) and isinstance(x[0], str) and x[0] == "":
ret = ("a", *x[1:])
lst.append(ret)
return tuple(lst)
T = TypeVar("T")
def os_sorted(lst: Iterable[T]) -> list[T]:
import natsort
key = _custom_key
if icu_available or platform.system() == "Windows":
key = natsort.os_sort_keygen()
return sorted(sorted(lst), key=key) # type: ignore[type-var]
KNOWN_IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp", ".avif"}
def parse_filename(
filename: str,
parser: Parser = Parser.ORIGINAL,
remove_c2c: bool = False,
remove_fcbd: bool = False,
remove_publisher: bool = False,
split_words: bool = False,
allow_issue_start_with_letter: bool = False,
protofolius_issue_number_scheme: bool = False,
) -> filenameparser.FilenameInfo:
fni = filenameparser.FilenameInfo(
alternate="",
annual=False,
archive="",
c2c=False,
fcbd=False,
format="",
issue="",
issue_count="",
publisher="",
remainder="",
series="",
title="",
volume="",
volume_count="",
year="",
)
if not filename:
return fni
if split_words:
import wordninja
filename, ext = os.path.splitext(filename)
filename = " ".join(wordninja.split(filename)) + ext
if parser == Parser.COMPLICATED:
lex = filenamelexer.Lex(filename, allow_issue_start_with_letter)
p = filenameparser.Parse(
lex.items,
remove_c2c=remove_c2c,
remove_fcbd=remove_fcbd,
remove_publisher=remove_publisher,
protofolius_issue_number_scheme=protofolius_issue_number_scheme,
)
if p.error:
logger.info("Issue parsing filename: '%s': %s ", filename, p.error.val)
fni = p.filename_info
elif parser == Parser.COMICFN2DICT:
fn2d = comicfn2dict(filename)
fni = filenameparser.FilenameInfo(
alternate="",
annual=False,
archive=fn2d.get("ext", ""),
c2c=False,
fcbd=False,
issue=fn2d.get("issue", ""),
issue_count=fn2d.get("issue_count", ""),
publisher=fn2d.get("publisher", ""),
remainder=fn2d.get("scan_info", ""),
series=fn2d.get("series", ""),
title=fn2d.get("title", ""),
volume=fn2d.get("volume", ""),
volume_count=fn2d.get("volume_count", ""),
year=fn2d.get("year", ""),
format=fn2d.get("original_format", ""),
)
else:
fnp = filenameparser.FileNameParser()
fnp.parse_filename(filename)
fni = filenameparser.FilenameInfo(
alternate="",
annual=False,
archive="",
c2c=False,
fcbd=False,
issue=fnp.issue,
issue_count=fnp.issue_count,
publisher="",
remainder=fnp.remainder,
series=fnp.series,
title="",
volume=fnp.volume,
volume_count="",
year=fnp.year,
format="",
)
return fni
def norm_fold(string: str) -> str:
"""Normalise and casefold string"""
return unicodedata.normalize("NFKD", string).casefold()
def combine_notes(existing_notes: str | None, new_notes: str | None, split: str) -> str:
split_notes, split_str, untouched_notes = (existing_notes or "").rpartition(split)
if split_notes or split_str:
return (split_notes + (new_notes or "")).strip()
else:
return (untouched_notes + "\n" + (new_notes or "")).strip()
def parse_date_str(date_str: str | None) -> tuple[int | None, int | None, int | None]:
day = None
month = None
year = None
if date_str:
parts = date_str.split("-")
year = xlate_int(parts[0])
if len(parts) > 1:
month = xlate_int(parts[1])
if len(parts) > 2:
day = xlate_int(parts[2])
return day, month, year
def shorten_path(path: pathlib.Path, path2: pathlib.Path | None = None) -> tuple[pathlib.Path, pathlib.Path]:
if path2:
path2 = path2.absolute()
path = path.absolute()
shortened_path: pathlib.Path = path
relative_path = pathlib.Path(path.anchor)
if path.is_relative_to(path.home()):
relative_path = path.home()
shortened_path = path.relative_to(path.home())
if path.is_relative_to(path.cwd()):
relative_path = path.cwd()
shortened_path = path.relative_to(path.cwd())
if path2 and shortened_path.is_relative_to(path2.parent):
relative_path = path2
shortened_path = shortened_path.relative_to(path2)
return relative_path, shortened_path
def path_to_short_str(original_path: pathlib.Path, renamed_path: pathlib.Path | None = None) -> str:
rel, _original_path = shorten_path(original_path)
path_str = str(_original_path)
if rel.samefile(rel.cwd()):
path_str = f"./{_original_path}"
elif rel.samefile(rel.home()):
path_str = f"~/{_original_path}"
if renamed_path:
rel, path = shorten_path(renamed_path, original_path.parent)
rename_str = f" -> {path}"
if rel.samefile(rel.cwd()):
rename_str = f" -> ./{_original_path}"
elif rel.samefile(rel.home()):
rename_str = f" -> ~/{_original_path}"
path_str += rename_str
return path_str
def get_page_name_list(files: list[str]) -> list[str]:
# get the list file names in the archive, and sort
files = cast(list[str], os_sorted(files))
# make a sub-list of image files
page_list = []
for name in files:
if os.path.splitext(name)[1].casefold() in KNOWN_IMAGE_EXTENSIONS and os.path.basename(name)[0] != ".":
page_list.append(name)
return page_list
def get_recursive_filelist(pathlist: list[str]) -> list[str]:
"""Get a recursive list of of all files under all path items in the list"""
filelist: list[str] = []
for p in pathlist:
if os.path.isdir(p):
for root, _, files in os.walk(p):
for f in files:
filelist.append(os.path.join(root, f))
elif os.path.exists(p):
filelist.append(p)
return filelist
def add_to_path(dirname: str) -> None:
if dirname:
dirname = os.path.abspath(dirname)
paths = [os.path.normpath(x) for x in split(os.environ["PATH"], os.pathsep)]
if dirname not in paths:
paths.insert(0, dirname)
os.environ["PATH"] = os.pathsep.join(paths)
def remove_from_path(dirname: str) -> None:
if dirname:
dirname = os.path.abspath(dirname)
paths = [os.path.normpath(x) for x in split(os.environ["PATH"], os.pathsep) if dirname != os.path.normpath(x)]
os.environ["PATH"] = os.pathsep.join(paths)
def xlate_int(data: Any) -> int | None:
data = xlate_float(data)
if data is None:
return None
return int(data)
def xlate_float(data: Any) -> float | None:
if isinstance(data, str):
data = data.strip()
if data is None or data == "":
return None
i: str | int | float
if isinstance(data, (int, float)):
i = data
else:
i = str(data).translate(
DefaultDict(zip((ord(c) for c in "1234567890."), "1234567890."), default=lambda x: None)
)
if i == "":
return None
try:
return float(i)
except ValueError:
return None
def xlate(data: Any) -> str | None:
if data is None or isinstance(data, str) and data.strip() == "":
return None
return str(data).strip()
def split(s: str | None, c: str) -> list[str]:
s = xlate(s)
if s:
return [x.strip() for x in s.strip().split(c) if x.strip()]
return []
def split_urls(s: str | None) -> list[Url]:
if s is None:
return []
# Find occurrences of ' http'
if s.count("http") > 1 and s.count(" http") >= 1:
urls = []
# Split urls out
url_strings = split(s, " http")
# Return the scheme 'http' and parse the url
for i, url_string in enumerate(url_strings):
if not url_string.startswith("http"):
url_string = "http" + url_string
urls.append(parse_url(url_string))
return urls
else:
return [parse_url(s)]
def remove_articles(text: str) -> str:
text = text.casefold()
articles = [
"&",
"a",
"am",
"an",
"and",
"as",
"at",
"be",
"but",
"by",
"for",
"if",
"is",
"issue",
"it",
"it's",
"its",
"itself",
"of",
"or",
"so",
"the",
"the",
"with",
]
new_text = ""
for word in text.split():
if word not in articles:
new_text += word + " "
new_text = new_text[:-1]
return new_text
def sanitize_title(text: str, basic: bool = False) -> str:
# normalize unicode and convert to ascii. Does not work for everything, e.g. ½ becomes 12, not 1/2
text = unicodedata.normalize("NFKD", text).casefold()
# comicvine keeps apostrophes a part of the word
text = text.replace("'", "")
text = text.replace('"', "")
if not basic:
# comicvine ignores punctuation and accents
# remove all characters that are not a letter, separator (space) or number
# replace any "dash punctuation" with a space
# makes sure that batman-superman and self-proclaimed stay separate words
text = "".join(
c if unicodedata.category(c)[0] not in "P" else " " for c in text if unicodedata.category(c)[0] in "LZNP"
)
# remove extra space and articles and all lower case
text = remove_articles(text).strip()
return text
def titles_match(search_title: str, record_title: str, threshold: int = 90) -> bool:
log_msg = "search title: %s ; record title: %s ; ratio: %d ; match threshold: %d"
thresh = threshold / 100
sanitized_search = sanitize_title(search_title)
sanitized_record = sanitize_title(record_title)
s = difflib.SequenceMatcher(None, sanitized_search, sanitized_record)
ratio = s.real_quick_ratio()
if ratio < thresh:
logger.debug(log_msg, search_title, record_title, ratio * 100, threshold)
return False
ratio = s.quick_ratio()
if ratio < thresh:
logger.debug(log_msg, search_title, record_title, ratio * 100, threshold)
return False
ratio = s.ratio()
if ratio < thresh:
logger.debug(log_msg, search_title, record_title, ratio * 100, threshold)
return False
logger.debug(log_msg, search_title, record_title, ratio * 100, threshold)
return True
def unique_file(file_name: pathlib.Path) -> pathlib.Path:
name = file_name.stem
counter = 1
while True:
if not file_name.exists():
return file_name
file_name = file_name.with_stem(name + " (" + str(counter) + ")")
counter += 1
def parse_version(s: str) -> tuple[int, int, int]:
str_parts = s.split(".")[:3]
parts = [int(x) if x.isdigit() else 0 for x in str_parts]
parts.extend([0] * (3 - len(parts))) # Ensure exactly three elements in the resulting list
return (parts[0], parts[1], parts[2])
_languages: dict[str | None, str | None] = DefaultDict(default=lambda x: None)
_countries: dict[str | None, str | None] = DefaultDict(default=lambda x: None)
def countries() -> dict[str | None, str | None]:
if not _countries:
import isocodes
for alpha_2, c in isocodes.countries.by_alpha_2:
_countries[alpha_2] = c["name"]
return _countries.copy()
def languages() -> dict[str | None, str | None]:
if not _languages:
import isocodes
for alpha_2, lng in isocodes.extendend_languages._sorted_by_index(index="alpha_2"):
_languages[alpha_2] = lng["name"]
return _languages.copy()
def get_language_from_iso(iso: str | None) -> str | None:
if not _languages:
return languages()[iso]
return _languages[iso]
def get_language_iso(string: str | None) -> str | None:
if string is None:
return None
import isocodes
# Return current string if all else fails
lang = string.casefold()
found = None
for lng in isocodes.extendend_languages.items:
for x in ("alpha_2", "alpha_3", "bibliographic", "common_name", "name"):
if x in lng and lng[x].casefold() == lang:
found = lng
# break
if found:
break
if found:
return found.get("alpha_2", None)
return lang
def get_country_from_iso(iso: str | None) -> str | None:
if not _countries:
return countries()[iso]
return _countries[iso]
def get_publisher(publisher: str) -> tuple[str, str]:
imprint = ""
for pub in publishers.values():
imprint, publisher, ok = pub[publisher]
if ok:
break
return imprint, publisher
def update_publishers(new_publishers: Mapping[str, Mapping[str, str]]) -> None:
for publisher in new_publishers:
if publisher in publishers:
publishers[publisher].update(new_publishers[publisher])
else:
publishers[publisher] = ImprintDict(publisher, new_publishers[publisher])
class ImprintDict(dict[str, str]):
"""
ImprintDict takes a publisher and a dict or mapping of lowercased
imprint names to the proper imprint name. Retrieving a value from an
ImprintDict returns a tuple of (imprint, publisher, keyExists).
If the key does not exist, the key is returned unchanged as the publisher.
"""
def __init__(self, publisher: str, mapping: Mapping[str, str] = {}, **kwargs) -> None: # type: ignore[no-untyped-def]
super().__init__(mapping, **kwargs)
self.publisher = publisher
def __missing__(self, key: str) -> None:
return None
def __getitem__(self, k: str) -> tuple[str, str, bool]: # type: ignore[override]
item = super().__getitem__(k.casefold())
if k.casefold() == self.publisher.casefold():
return "", self.publisher, True
if item is None:
return "", k, False
else:
return item, self.publisher, True
def copy(self) -> ImprintDict:
return ImprintDict(self.publisher, super().copy())
publishers: dict[str, ImprintDict] = {}
def load_publishers() -> None:
try:
update_publishers(json.loads((comicapi.data.data_path / "publishers.json").read_text("utf-8")))
except Exception:
logger.exception("Failed to load publishers.json; The are no publishers or imprints loaded")
__all__ = (
"load_publishers",
"file_digest",
"Parser",
"ImprintDict",
"os_sorted",
"parse_filename",
"norm_fold",
"combine_notes",
"parse_date_str",
"shorten_path",
"path_to_short_str",
"get_page_name_list",
"get_recursive_filelist",
"add_to_path",
"remove_from_path",
"xlate_int",
"xlate_float",
"xlate",
"split",
"split_urls",
"remove_articles",
"sanitize_title",
"titles_match",
"unique_file",
"parse_version",
"countries",
"languages",
"get_language_from_iso",
"get_language_iso",
"get_country_from_iso",
"get_publisher",
"update_publishers",
"load_publishers",
)
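
An illustrative sketch (not part of the diff) exercising a few of the comicapi.utils helpers above; the expected values follow from the implementations as written.

from comicapi.utils import parse_version, sanitize_title, split, xlate_float, xlate_int

# xlate_int/xlate_float keep only digits and "." before converting and return None on failure.
assert xlate_int(" 12 pages") == 12
assert xlate_float("1,234.5") == 1234.5
assert xlate_int("n/a") is None

# split() trims whitespace and drops empty items.
assert split(" a, ,b ,", ",") == ["a", "b"]

# parse_version() always yields a three-part tuple.
assert parse_version("1.6") == (1, 6, 0)

# sanitize_title() normalises for fuzzy matching; dash punctuation becomes a
# word break and common articles are dropped, as used by titles_match().
assert sanitize_title("The Uncanny X-Men") == "uncanny x men"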

View File

@@ -1,978 +0,0 @@
"""
A python class to represent a single comic, be it file or folder of images
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import zipfile
import os
import struct
import sys
import tempfile
import subprocess
import platform
if platform.system() == "Windows":
import _subprocess
import time
import StringIO
try:
import Image
pil_available = True
except ImportError:
pil_available = False
sys.path.insert(0, os.path.abspath(".") )
import UnRAR2
from UnRAR2.rar_exceptions import *
from options import Options, MetaDataStyle
from comicinfoxml import ComicInfoXml
from comicbookinfo import ComicBookInfo
from comet import CoMet
from genericmetadata import GenericMetadata, PageType
from filenameparser import FileNameParser
from settings import ComicTaggerSettings
class ZipArchiver:
def __init__( self, path ):
self.path = path
def getArchiveComment( self ):
zf = zipfile.ZipFile( self.path, 'r' )
comment = zf.comment
zf.close()
return comment
def setArchiveComment( self, comment ):
return self.writeZipComment( self.path, comment )
def readArchiveFile( self, archive_file ):
data = ""
zf = zipfile.ZipFile( self.path, 'r' )
try:
data = zf.read( archive_file )
except zipfile.BadZipfile as e:
print "bad zipfile [{0}]: {1} :: {2}".format(e, self.path, archive_file)
zf.close()
raise IOError
except Exception as e:
zf.close()
print "bad zipfile [{0}]: {1} :: {2}".format(e, self.path, archive_file)
raise IOError
finally:
zf.close()
return data
def removeArchiveFile( self, archive_file ):
try:
self.rebuildZipFile( [ archive_file ] )
except:
return False
else:
return True
def writeArchiveFile( self, archive_file, data ):
# At the moment, no other option but to rebuild the whole
# zip archive w/o the indicated file. Very sucky, but maybe
# another solution can be found
try:
self.rebuildZipFile( [ archive_file ] )
#now just add the archive file as a new one
zf = zipfile.ZipFile(self.path, mode='a', compression=zipfile.ZIP_DEFLATED )
zf.writestr( archive_file, data )
zf.close()
return True
except:
return False
def getArchiveFilenameList( self ):
zf = zipfile.ZipFile( self.path, 'r' )
namelist = zf.namelist()
zf.close()
return namelist
# zip helper func
def rebuildZipFile( self, exclude_list ):
# this recompresses the zip archive, without the files in the exclude_list
#print "Rebuilding zip {0} without {1}".format( self.path, exclude_list )
# generate temp file
tmp_fd, tmp_name = tempfile.mkstemp( dir=os.path.dirname(self.path) )
os.close( tmp_fd )
zin = zipfile.ZipFile (self.path, 'r')
zout = zipfile.ZipFile (tmp_name, 'w')
for item in zin.infolist():
buffer = zin.read(item.filename)
if ( item.filename not in exclude_list ):
zout.writestr(item, buffer)
#preserve the old comment
zout.comment = zin.comment
zout.close()
zin.close()
# replace with the new file
os.remove( self.path )
os.rename( tmp_name, self.path )
def writeZipComment( self, filename, comment ):
"""
This is a custom function for writing a comment to a zip file,
since the built-in one doesn't seem to work on Windows and Mac OS/X
Fortunately, the zip comment is at the end of the file, and it's
easy to manipulate. See this website for more info:
see: http://en.wikipedia.org/wiki/Zip_(file_format)#Structure
"""
#get file size
statinfo = os.stat(filename)
file_length = statinfo.st_size
try:
fo = open(filename, "r+b")
#the starting position, relative to EOF
pos = -4
found = False
value = bytearray()
# walk backwards to find the "End of Central Directory" record
while ( not found ) and ( -pos != file_length ):
# seek, relative to EOF
fo.seek( pos, 2)
value = fo.read( 4 )
#look for the end of central directory signature
if bytearray(value) == bytearray([ 0x50, 0x4b, 0x05, 0x06 ]):
found = True
else:
# not found, step back another byte
pos = pos - 1
#print pos,"{1} int: {0:x}".format(bytearray(value)[0], value)
if found:
# now skip forward 20 bytes to the comment length word
pos += 20
fo.seek( pos, 2)
# Pack the length of the comment string
format = "H" # one 2-byte integer
comment_length = struct.pack(format, len(comment)) # pack integer in a binary string
# write out the length
fo.write( comment_length )
fo.seek( pos+2, 2)
# write out the comment itself
fo.write( comment )
fo.truncate()
fo.close()
else:
raise Exception('Failed to write comment to zip file!')
except:
return False
else:
return True
def copyFromArchive( self, otherArchive ):
# Replace the current zip with one copied from another archive
try:
zout = zipfile.ZipFile (self.path, 'w')
for fname in otherArchive.getArchiveFilenameList():
data = otherArchive.readArchiveFile( fname )
if data is not None:
zout.writestr( fname, data )
zout.close()
#preserve the old comment
comment = otherArchive.getArchiveComment()
if comment is not None:
if not self.writeZipComment( self.path, comment ):
return False
except Exception as e:
print "Error while copying to {0}: {1}".format(self.path, e)
return False
else:
return True
#------------------------------------------
# RAR implementation
class RarArchiver:
devnull = None
def __init__( self, path ):
self.path = path
self.rar_exe_path = None
if RarArchiver.devnull is None:
RarArchiver.devnull = open(os.devnull, "w")
# windows only, keeps the cmd.exe from popping up
if platform.system() == "Windows":
self.startupinfo = subprocess.STARTUPINFO()
self.startupinfo.dwFlags |= _subprocess.STARTF_USESHOWWINDOW
else:
self.startupinfo = None
def __del__(self):
#RarArchiver.devnull.close()
pass
def getArchiveComment( self ):
rarc = self.getRARObj()
return rarc.comment
def setArchiveComment( self, comment ):
if self.rar_exe_path is not None:
try:
# write comment to temp file
tmp_fd, tmp_name = tempfile.mkstemp()
f = os.fdopen(tmp_fd, 'w+b')
f.write( comment )
f.close()
working_dir = os.path.dirname( os.path.abspath( self.path ) )
# use external program to write comment to Rar archive
subprocess.call([self.rar_exe_path, 'c', '-w' + working_dir , '-c-', '-z' + tmp_name, self.path],
startupinfo=self.startupinfo,
stdout=RarArchiver.devnull)
if platform.system() == "Darwin":
time.sleep(1)
os.remove( tmp_name)
except:
return False
else:
return True
else:
return False
def readArchiveFile( self, archive_file ):
# Make sure to escape brackets, since some funky stuff is going on
# underneath with "fnmatch"
archive_file = archive_file.replace("[", '[[]')
entries = []
rarc = self.getRARObj()
tries = 0
while tries < 7:
try:
tries = tries+1
entries = rarc.read_files( archive_file )
if entries[0][0].size != len(entries[0][1]):
print "readArchiveFile(): [file is not expected size: {0} vs {1}] {2}:{3} [attempt # {4}]".format(
entries[0][0].size,len(entries[0][1]), self.path, archive_file, tries)
continue
except (OSError, IOError) as e:
print "readArchiveFile(): [{0}] {1}:{2} attempt#{3}".format(str(e), self.path, archive_file, tries)
time.sleep(1)
except Exception as e:
print "Unexpected exception in readArchiveFile(): [{0}] for {1}:{2} attempt#{3}".format(str(e), self.path, archive_file, tries)
break
else:
#Success"
#entries is a list of of tuples: ( rarinfo, filedata)
if tries > 1:
print "Attempted read_files() {0} times".format(tries)
if (len(entries) == 1):
return entries[0][1]
else:
raise IOError
raise IOError
def writeArchiveFile( self, archive_file, data ):
if self.rar_exe_path is not None:
try:
tmp_folder = tempfile.mkdtemp()
tmp_file = os.path.join( tmp_folder, archive_file )
working_dir = os.path.dirname( os.path.abspath( self.path ) )
# TODO: will this break if 'archive_file' is in a subfolder. i.e. "foo/bar.txt"
# will need to create the subfolder above, I guess...
f = open(tmp_file, 'w')
f.write( data )
f.close()
# use external program to write file to Rar archive
subprocess.call([self.rar_exe_path, 'a', '-w' + working_dir ,'-c-', '-ep', self.path, tmp_file],
startupinfo=self.startupinfo,
stdout=RarArchiver.devnull)
if platform.system() == "Darwin":
time.sleep(1)
os.remove( tmp_file)
os.rmdir( tmp_folder)
except:
return False
else:
return True
else:
return False
def removeArchiveFile( self, archive_file ):
if self.rar_exe_path is not None:
try:
# use external program to remove file from Rar archive
subprocess.call([self.rar_exe_path, 'd','-c-', self.path, archive_file],
startupinfo=self.startupinfo,
stdout=RarArchiver.devnull)
if platform.system() == "Darwin":
time.sleep(1)
except:
return False
else:
return True
else:
return False
def getArchiveFilenameList( self ):
rarc = self.getRARObj()
#namelist = [ item.filename for item in rarc.infolist() ]
#return namelist
tries = 0
while tries < 7:
try:
tries = tries+1
#namelist = [ item.filename for item in rarc.infolist() ]
namelist = []
for item in rarc.infolist():
if item.size != 0:
namelist.append( item.filename )
except (OSError, IOError) as e:
print "getArchiveFilenameList(): [{0}] {1} attempt#{2}".format(str(e), self.path, tries)
time.sleep(1)
else:
#Success"
return namelist
raise e
def getRARObj( self ):
tries = 0
while tries < 7:
try:
tries = tries+1
rarc = UnRAR2.RarFile( self.path )
except (OSError, IOError) as e:
print "getRARObj(): [{0}] {1} attempt#{2}".format(str(e), self.path, tries)
time.sleep(1)
else:
#Success"
return rarc
raise e
#------------------------------------------
# Folder implementation
class FolderArchiver:
def __init__( self, path ):
self.path = path
self.comment_file_name = "ComicTaggerFolderComment.txt"
def getArchiveComment( self ):
return self.readArchiveFile( self.comment_file_name )
def setArchiveComment( self, comment ):
return self.writeArchiveFile( self.comment_file_name, comment )
def readArchiveFile( self, archive_file ):
data = ""
fname = os.path.join( self.path, archive_file )
try:
with open( fname, 'rb' ) as f:
data = f.read()
f.close()
except IOError as e:
pass
return data
def writeArchiveFile( self, archive_file, data ):
fname = os.path.join( self.path, archive_file )
try:
with open(fname, 'w+') as f:
f.write( data )
f.close()
except:
return False
else:
return True
def removeArchiveFile( self, archive_file ):
fname = os.path.join( self.path, archive_file )
try:
os.remove( fname )
except:
return False
else:
return True
def getArchiveFilenameList( self ):
return self.listFiles( self.path )
def listFiles( self, folder ):
itemlist = list()
for item in os.listdir( folder ):
itemlist.append( item )
if os.path.isdir( item ):
itemlist.extend( self.listFiles( os.path.join( folder, item ) ))
return itemlist
#------------------------------------------
# Unknown implementation
class UnknownArchiver:
def __init__( self, path ):
self.path = path
def getArchiveComment( self ):
return ""
def setArchiveComment( self, comment ):
return False
def readArchiveFile( self ):
return ""
def writeArchiveFile( self, archive_file, data ):
return False
def removeArchiveFile( self, archive_file ):
return False
def getArchiveFilenameList( self ):
return []
#------------------------------------------------------------------
class ComicArchive:
logo_data = None
class ArchiveType:
Zip, Rar, Folder, Unknown = range(4)
def __init__( self, path ):
self.path = path
self.ci_xml_filename = 'ComicInfo.xml'
self.comet_default_filename = 'CoMet.xml'
self.resetCache()
if self.zipTest():
self.archive_type = self.ArchiveType.Zip
self.archiver = ZipArchiver( self.path )
elif self.rarTest():
self.archive_type = self.ArchiveType.Rar
self.archiver = RarArchiver( self.path )
elif os.path.isdir( self.path ):
self.archive_type = self.ArchiveType.Folder
self.archiver = FolderArchiver( self.path )
else:
self.archive_type = self.ArchiveType.Unknown
self.archiver = UnknownArchiver( self.path )
if ComicArchive.logo_data is None:
fname = os.path.join(ComicTaggerSettings.baseDir(), 'graphics','nocover.png' )
with open(fname, 'rb') as fd:
ComicArchive.logo_data = fd.read()
# Clears the cached data
def resetCache( self ):
self.has_cix = None
self.has_cbi = None
self.has_comet = None
self.comet_filename = None
self.page_count = None
self.page_list = None
self.cix_md = None
self.cbi_md = None
self.comet_md = None
def loadCache( self, style_list ):
for style in style_list:
self.readMetadata(style)
def rename( self, path ):
self.path = path
self.archiver.path = path
def setExternalRarProgram( self, rar_exe_path ):
if self.isRar():
self.archiver.rar_exe_path = rar_exe_path
def zipTest( self ):
return zipfile.is_zipfile( self.path )
def rarTest( self ):
try:
rarc = UnRAR2.RarFile( self.path )
except: # InvalidRARArchive:
return False
else:
return True
def isZip( self ):
return self.archive_type == self.ArchiveType.Zip
def isRar( self ):
return self.archive_type == self.ArchiveType.Rar
def isFolder( self ):
return self.archive_type == self.ArchiveType.Folder
def isWritable( self, check_rar_status=True ):
if self.archive_type == self.ArchiveType.Unknown :
return False
elif check_rar_status and self.isRar() and self.archiver.rar_exe_path is None:
return False
elif not os.access(self.path, os.W_OK):
return False
elif ((self.archive_type != self.ArchiveType.Folder) and
(not os.access( os.path.dirname( os.path.abspath(self.path)), os.W_OK ))):
return False
return True
def isWritableForStyle( self, data_style ):
if self.isRar() and data_style == MetaDataStyle.CBI:
return False
return self.isWritable()
def seemsToBeAComicArchive( self ):
# Do we even care about extensions??
ext = os.path.splitext(self.path)[1].lower()
if (
( self.isZip() or self.isRar() or self.isFolder() )
and
( self.getNumberOfPages() > 2)
):
return True
else:
return False
def readMetadata( self, style ):
if style == MetaDataStyle.CIX:
return self.readCIX()
elif style == MetaDataStyle.CBI:
return self.readCBI()
elif style == MetaDataStyle.COMET:
return self.readCoMet()
else:
return GenericMetadata()
def writeMetadata( self, metadata, style ):
retcode = None
if style == MetaDataStyle.CIX:
retcode = self.writeCIX( metadata )
elif style == MetaDataStyle.CBI:
retcode = self.writeCBI( metadata )
elif style == MetaDataStyle.COMET:
retcode = self.writeCoMet( metadata )
return retcode
def hasMetadata( self, style ):
if style == MetaDataStyle.CIX:
return self.hasCIX()
elif style == MetaDataStyle.CBI:
return self.hasCBI()
elif style == MetaDataStyle.COMET:
return self.hasCoMet()
else:
return False
def removeMetadata( self, style ):
retcode = True
if style == MetaDataStyle.CIX:
retcode = self.removeCIX()
elif style == MetaDataStyle.CBI:
retcode = self.removeCBI()
elif style == MetaDataStyle.COMET:
retcode = self.removeCoMet()
return retcode
def getPage( self, index ):
image_data = None
filename = self.getPageName( index )
if filename is not None:
try:
image_data = self.archiver.readArchiveFile( filename )
except IOError:
print "Error reading in page. Substituting logo page."
image_data = ComicArchive.logo_data
return image_data
def getPageName( self, index ):
page_list = self.getPageNameList()
num_pages = len( page_list )
if num_pages == 0 or index >= num_pages:
return None
return page_list[index]
def getPageNameList( self , sort_list=True):
if self.page_list is None:
# get the list file names in the archive, and sort
files = self.archiver.getArchiveFilenameList()
# seems like some archive creators are on Windows, and don't know about case-sensitivity!
if sort_list:
files.sort(key=lambda x: x.lower())
# make a sub-list of image files
self.page_list = []
for name in files:
if ( name[-4:].lower() in [ ".jpg", "jpeg", ".png" ] and os.path.basename(name)[0] != "." ):
self.page_list.append(name)
return self.page_list
def getNumberOfPages( self ):
if self.page_count is None:
self.page_count = len( self.getPageNameList( ) )
return self.page_count
def readCBI( self ):
if self.cbi_md is None:
raw_cbi = self.readRawCBI()
if raw_cbi is None:
self.cbi_md = GenericMetadata()
else:
self.cbi_md = ComicBookInfo().metadataFromString( raw_cbi )
self.cbi_md.setDefaultPageList( self.getNumberOfPages() )
return self.cbi_md
def readRawCBI( self ):
if ( not self.hasCBI() ):
return None
return self.archiver.getArchiveComment()
def hasCBI(self):
if self.has_cbi is None:
#if ( not ( self.isZip() or self.isRar()) or not self.seemsToBeAComicArchive() ):
if not self.seemsToBeAComicArchive():
self.has_cbi = False
else:
comment = self.archiver.getArchiveComment()
self.has_cbi = ComicBookInfo().validateString( comment )
return self.has_cbi
def writeCBI( self, metadata ):
if metadata is not None:
self.applyArchiveInfoToMetadata( metadata )
cbi_string = ComicBookInfo().stringFromMetadata( metadata )
write_success = self.archiver.setArchiveComment( cbi_string )
if write_success:
self.has_cbi = True
self.cbi_md = metadata
self.resetCache()
return write_success
else:
return False
def removeCBI( self ):
if self.hasCBI():
write_success = self.archiver.setArchiveComment( "" )
if write_success:
self.has_cbi = False
self.cbi_md = None
self.resetCache()
return write_success
return True
def readCIX( self ):
if self.cix_md is None:
raw_cix = self.readRawCIX()
if raw_cix is None or raw_cix == "":
self.cix_md = GenericMetadata()
else:
self.cix_md = ComicInfoXml().metadataFromString( raw_cix )
#validate the existing page list (make sure count is correct)
if len ( self.cix_md.pages ) != 0 :
if len ( self.cix_md.pages ) != self.getNumberOfPages():
# pages array doesn't match the actual number of images we're seeing
# in the archive, so discard the data
self.cix_md.pages = []
if len( self.cix_md.pages ) == 0:
self.cix_md.setDefaultPageList( self.getNumberOfPages() )
return self.cix_md
def readRawCIX( self ):
if not self.hasCIX():
return None
try:
raw_cix = self.archiver.readArchiveFile( self.ci_xml_filename )
except IOError:
print "Error reading in raw CIX!"
raw_cix = ""
return raw_cix
def writeCIX(self, metadata):
if metadata is not None:
self.applyArchiveInfoToMetadata( metadata, calc_page_sizes=True )
cix_string = ComicInfoXml().stringFromMetadata( metadata )
write_success = self.archiver.writeArchiveFile( self.ci_xml_filename, cix_string )
if write_success:
self.has_cix = True
self.cix_md = metadata
self.resetCache()
return write_success
else:
return False
def removeCIX( self ):
if self.hasCIX():
write_success = self.archiver.removeArchiveFile( self.ci_xml_filename )
if write_success:
self.has_cix = False
self.cix_md = None
self.resetCache()
return write_success
return True
def hasCIX(self):
if self.has_cix is None:
if not self.seemsToBeAComicArchive():
self.has_cix = False
elif self.ci_xml_filename in self.archiver.getArchiveFilenameList():
self.has_cix = True
else:
self.has_cix = False
return self.has_cix
def readCoMet( self ):
if self.comet_md is None:
raw_comet = self.readRawCoMet()
if raw_comet is None or raw_comet == "":
self.comet_md = GenericMetadata()
else:
self.comet_md = CoMet().metadataFromString( raw_comet )
self.comet_md.setDefaultPageList( self.getNumberOfPages() )
#use the coverImage value from the comet_data to mark the cover in this struct
# walk through list of images in file, and find the matching one for md.coverImage
# need to remove the existing one in the default
if self.comet_md.coverImage is not None:
cover_idx = 0
for idx,f in enumerate(self.getPageNameList()):
if self.comet_md.coverImage == f:
cover_idx = idx
break
if cover_idx != 0:
del (self.comet_md.pages[0]['Type'] )
self.comet_md.pages[ cover_idx ]['Type'] = PageType.FrontCover
return self.comet_md
def readRawCoMet( self ):
if not self.hasCoMet():
print self.path, "doesn't have CoMet data!"
return None
try:
raw_comet = self.archiver.readArchiveFile( self.comet_filename )
except IOError:
print "Error reading in raw CoMet!"
raw_comet = ""
return raw_comet
def writeCoMet(self, metadata):
if metadata is not None:
if not self.hasCoMet():
self.comet_filename = self.comet_default_filename
self.applyArchiveInfoToMetadata( metadata )
# Set the coverImage value, if it's not the first page
cover_idx = int(metadata.getCoverPageIndexList()[0])
if cover_idx != 0:
metadata.coverImage = self.getPageName( cover_idx )
comet_string = CoMet().stringFromMetadata( metadata )
write_success = self.archiver.writeArchiveFile( self.comet_filename, comet_string )
if write_success:
self.has_comet = True
self.comet_md = metadata
self.resetCache()
return write_success
else:
return False
def removeCoMet( self ):
if self.hasCoMet():
write_success = self.archiver.removeArchiveFile( self.comet_filename )
if write_success:
self.has_comet = False
self.comet_md = None
self.resetCache()
return write_success
return True
def hasCoMet(self):
if self.has_comet is None:
self.has_comet = False
if not self.seemsToBeAComicArchive():
return self.has_comet
#look at all xml files in root, and search for CoMet data, get first
for n in self.archiver.getArchiveFilenameList():
if ( os.path.dirname(n) == "" and
os.path.splitext(n)[1].lower() == '.xml'):
# read in XML file, and validate it
try:
data = self.archiver.readArchiveFile( n )
except:
data = ""
print "Error reading in Comet XML for validation!"
if CoMet().validateString( data ):
# since we found it, save it!
self.comet_filename = n
self.has_comet = True
break
return self.has_comet
def applyArchiveInfoToMetadata( self, md, calc_page_sizes=False):
md.pageCount = self.getNumberOfPages()
if calc_page_sizes:
for p in md.pages:
idx = int( p['Image'] )
if pil_available:
if 'ImageSize' not in p or 'ImageHeight' not in p or 'ImageWidth' not in p:
data = self.getPage( idx )
if data is not None:
try:
im = Image.open(StringIO.StringIO(data))
w,h = im.size
p['ImageSize'] = str(len(data))
p['ImageHeight'] = str(h)
p['ImageWidth'] = str(w)
except IOError:
p['ImageSize'] = str(len(data))
else:
if 'ImageSize' not in p:
data = self.getPage( idx )
p['ImageSize'] = str(len(data))
def metadataFromFilename( self ):
metadata = GenericMetadata()
fnp = FileNameParser()
fnp.parseFilename( self.path )
if fnp.issue != "":
metadata.issue = fnp.issue
if fnp.series != "":
metadata.series = fnp.series
if fnp.volume != "":
metadata.volume = fnp.volume
if fnp.year != "":
metadata.year = fnp.year
if fnp.issue_count != "":
metadata.issueCount = fnp.issue_count
metadata.isEmpty = False
return metadata
def exportAsZip( self, zipfilename ):
if self.archive_type == self.ArchiveType.Zip:
# nothing to do, we're already a zip
return True
zip_archiver = ZipArchiver( zipfilename )
return zip_archiver.copyFromArchive( self.archiver )
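The deleted ComicArchive helpers above compose into a small read-or-parse-then-export flow. The lines below are an illustrative sketch only (not part of the diff), assuming the legacy Python 2-era modules shown elsewhere in this listing are importable and that the path points at a real archive:
from comicarchive import ComicArchive   # legacy module, as imported by the deleted tagger script below

ca = ComicArchive("example.cbz")         # hypothetical input file
if ca.seemsToBeAComicArchive():
    if ca.hasCoMet():
        md = ca.readCoMet()              # parsed CoMet block, cover page re-marked from coverImage
    else:
        md = ca.metadataFromFilename()   # fall back to series/issue/year parsed from the file name
    print("%s #%s" % (md.series, md.issue))
    ca.exportAsZip("example_copy.cbz")   # returns True immediately if the archive is already a zip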

View File

@ -1,152 +0,0 @@
"""
A python class to encapsulate the ComicBookInfo data
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import json
from datetime import datetime
import zipfile
from genericmetadata import GenericMetadata
import utils
import ctversion
class ComicBookInfo:
def metadataFromString( self, string ):
cbi_container = json.loads( unicode(string, 'utf-8') )
metadata = GenericMetadata()
cbi = cbi_container[ 'ComicBookInfo/1.0' ]
#helper func
# If item is not in CBI, return None
def xlate( cbi_entry):
if cbi_entry in cbi:
return cbi[cbi_entry]
else:
return None
metadata.series = xlate( 'series' )
metadata.title = xlate( 'title' )
metadata.issue = xlate( 'issue' )
metadata.publisher = xlate( 'publisher' )
metadata.month = xlate( 'publicationMonth' )
metadata.year = xlate( 'publicationYear' )
metadata.issueCount = xlate( 'numberOfIssues' )
metadata.comments = xlate( 'comments' )
metadata.credits = xlate( 'credits' )
metadata.genre = xlate( 'genre' )
metadata.volume = xlate( 'volume' )
metadata.volumeCount = xlate( 'numberOfVolumes' )
metadata.language = xlate( 'language' )
metadata.country = xlate( 'country' )
metadata.criticalRating = xlate( 'rating' )
metadata.tags = xlate( 'tags' )
# make sure credits and tags are at least empty lists and not None
if metadata.credits is None:
metadata.credits = []
if metadata.tags is None:
metadata.tags = []
#need to massage the language string to be ISO
if metadata.language is not None:
# reverse look-up
pattern = metadata.language
metadata.language = None
for key in utils.getLanguageDict():
if utils.getLanguageDict()[ key ] == pattern.encode('utf-8'):
metadata.language = key
break
metadata.isEmpty = False
return metadata
def stringFromMetadata( self, metadata ):
cbi_container = self.createJSONDictionary( metadata )
return json.dumps( cbi_container )
#verify that the string actually contains CBI data in JSON format
def validateString( self, string ):
try:
cbi_container = json.loads( string )
except:
return False
return ( 'ComicBookInfo/1.0' in cbi_container )
def createJSONDictionary( self, metadata ):
# Create the dictionary that we will convert to JSON text
cbi = dict()
cbi_container = {'appID' : 'ComicTagger/' + ctversion.version,
'lastModified' : str(datetime.now()),
'ComicBookInfo/1.0' : cbi }
#helper func
def assign( cbi_entry, md_entry):
if md_entry is not None:
cbi[cbi_entry] = md_entry
#helper func
def toInt(s):
i = None
if type(s) in [ str, unicode, int ]:
try:
i = int(s)
except ValueError:
pass
return i
assign( 'series', metadata.series )
assign( 'title', metadata.title )
assign( 'issue', metadata.issue )
assign( 'publisher', metadata.publisher )
assign( 'publicationMonth', toInt(metadata.month) )
assign( 'publicationYear', toInt(metadata.year) )
assign( 'numberOfIssues', toInt(metadata.issueCount) )
assign( 'comments', metadata.comments )
assign( 'genre', metadata.genre )
assign( 'volume', toInt(metadata.volume) )
assign( 'numberOfVolumes', toInt(metadata.volumeCount) )
assign( 'language', utils.getLanguageFromISO(metadata.language) )
assign( 'country', metadata.country )
assign( 'rating', metadata.criticalRating )
assign( 'credits', metadata.credits )
assign( 'tags', metadata.tags )
return cbi_container
def writeToExternalFile( self, filename, metadata ):
cbi_container = self.createJSONDictionary(metadata)
f = open(filename, 'w')
f.write(json.dumps(cbi_container, indent=4))
f.close()
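A minimal round-trip sketch for the class above (illustrative only, not part of the diff; it mirrors the Python 2-era API, and the field values are placeholders):
from genericmetadata import GenericMetadata   # legacy module imported at the top of this file

md = GenericMetadata()
md.series = "Example Series"
md.issue = "1"
md.year = 2012

cbi = ComicBookInfo()
json_text = cbi.stringFromMetadata(md)   # wraps the fields in the 'ComicBookInfo/1.0' envelope
assert cbi.validateString(json_text)     # only checks that the envelope key is present
back = cbi.metadataFromString(json_text)
print("%s #%s (%s)" % (back.series, back.issue, back.year))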

View File

@ -1,289 +0,0 @@
"""
A python class to encapsulate ComicRack's ComicInfo.xml data
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from datetime import datetime
import zipfile
from pprint import pprint
import xml.etree.ElementTree as ET
from genericmetadata import GenericMetadata
import utils
class ComicInfoXml:
writer_synonyms = ['writer', 'plotter', 'scripter']
penciller_synonyms = [ 'artist', 'penciller', 'penciler', 'breakdowns' ]
inker_synonyms = [ 'inker', 'artist', 'finishes' ]
colorist_synonyms = [ 'colorist', 'colourist', 'colorer', 'colourer' ]
letterer_synonyms = [ 'letterer']
cover_synonyms = [ 'cover', 'covers', 'coverartist', 'cover artist' ]
editor_synonyms = [ 'editor']
def getParseableCredits( self ):
parsable_credits = []
parsable_credits.extend( self.writer_synonyms )
parsable_credits.extend( self.penciller_synonyms )
parsable_credits.extend( self.inker_synonyms )
parsable_credits.extend( self.colorist_synonyms )
parsable_credits.extend( self.letterer_synonyms )
parsable_credits.extend( self.cover_synonyms )
parsable_credits.extend( self.editor_synonyms )
return parsable_credits
def metadataFromString( self, string ):
tree = ET.ElementTree(ET.fromstring( string ))
return self.convertXMLToMetadata( tree )
def stringFromMetadata( self, metadata ):
header = '<?xml version="1.0"?>\n'
tree = self.convertMetadataToXML( metadata )
return header + ET.tostring(tree.getroot())
def indent( self, elem, level=0 ):
# for making the XML output readable
i = "\n" + level*" "
if len(elem):
if not elem.text or not elem.text.strip():
elem.text = i + " "
if not elem.tail or not elem.tail.strip():
elem.tail = i
for elem in elem:
self.indent( elem, level+1 )
if not elem.tail or not elem.tail.strip():
elem.tail = i
else:
if level and (not elem.tail or not elem.tail.strip()):
elem.tail = i
def convertMetadataToXML( self, metadata ):
#shorthand for the metadata
md = metadata
# build a tree structure
root = ET.Element("ComicInfo")
root.attrib['xmlns:xsi']="http://www.w3.org/2001/XMLSchema-instance"
root.attrib['xmlns:xsd']="http://www.w3.org/2001/XMLSchema"
#helper func
def assign( cix_entry, md_entry):
if md_entry is not None:
ET.SubElement(root, cix_entry).text = u"{0}".format(md_entry)
assign( 'Series', md.series )
assign( 'Number', md.issue )
assign( 'Title', md.title )
assign( 'Count', md.issueCount )
assign( 'Volume', md.volume )
assign( 'AlternateSeries', md.alternateSeries )
assign( 'AlternateNumber', md.alternateNumber )
assign( 'AlternateCount', md.alternateCount )
assign( 'Summary', md.comments )
assign( 'Notes', md.notes )
assign( 'Year', md.year )
assign( 'Month', md.month )
assign( 'Publisher', md.publisher )
assign( 'Imprint', md.imprint )
assign( 'Genre', md.genre )
assign( 'Web', md.webLink )
assign( 'PageCount', md.pageCount )
assign( 'Format', md.format )
assign( 'LanguageISO', md.language )
assign( 'Manga', md.manga )
assign( 'Characters', md.characters )
assign( 'Teams', md.teams )
assign( 'Locations', md.locations )
assign( 'ScanInformation', md.scanInfo )
assign( 'StoryArc', md.storyArc )
assign( 'SeriesGroup', md.seriesGroup )
assign( 'AgeRating', md.maturityRating )
if md.blackAndWhite is not None and md.blackAndWhite:
ET.SubElement(root, 'BlackAndWhite').text = "Yes"
# need to specially process the credits, since they are structured differently than CIX
credit_writer_list = list()
credit_penciller_list = list()
credit_inker_list = list()
credit_colorist_list = list()
credit_letterer_list = list()
credit_cover_list = list()
credit_editor_list = list()
# first, loop thru credits, and build a list for each role that CIX supports
for credit in metadata.credits:
if credit['role'].lower() in set( self.writer_synonyms ):
credit_writer_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.penciller_synonyms ):
credit_penciller_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.inker_synonyms ):
credit_inker_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.colorist_synonyms ):
credit_colorist_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.letterer_synonyms ):
credit_letterer_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.cover_synonyms ):
credit_cover_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.editor_synonyms ):
credit_editor_list.append(credit['person'].replace(",",""))
# second, convert each list to string, and add to XML struct
if len( credit_writer_list ) > 0:
node = ET.SubElement(root, 'Writer')
node.text = utils.listToString( credit_writer_list )
if len( credit_penciller_list ) > 0:
node = ET.SubElement(root, 'Penciller')
node.text = utils.listToString( credit_penciller_list )
if len( credit_inker_list ) > 0:
node = ET.SubElement(root, 'Inker')
node.text = utils.listToString( credit_inker_list )
if len( credit_colorist_list ) > 0:
node = ET.SubElement(root, 'Colorist')
node.text = utils.listToString( credit_colorist_list )
if len( credit_letterer_list ) > 0:
node = ET.SubElement(root, 'Letterer')
node.text = utils.listToString( credit_letterer_list )
if len( credit_cover_list ) > 0:
node = ET.SubElement(root, 'CoverArtist')
node.text = utils.listToString( credit_cover_list )
if len( credit_editor_list ) > 0:
node = ET.SubElement(root, 'Editor')
node.text = utils.listToString( credit_editor_list )
# loop and add the page entries under pages node
if len( md.pages ) > 0:
pages_node = ET.SubElement(root, 'Pages')
for page_dict in md.pages:
page_node = ET.SubElement(pages_node, 'Page')
page_node.attrib = page_dict
# pretty-print in place using the indent() helper above
self.indent(root)
# wrap it in an ElementTree instance, and save as XML
tree = ET.ElementTree(root)
return tree
def convertXMLToMetadata( self, tree ):
root = tree.getroot()
if root.tag != 'ComicInfo':
raise ValueError("XML root element is not 'ComicInfo'")
metadata = GenericMetadata()
md = metadata
# Helper function
def xlate( tag ):
node = root.find( tag )
if node is not None:
return node.text
else:
return None
md.series = xlate( 'Series' )
md.title = xlate( 'Title' )
md.issue = xlate( 'Number' )
md.issueCount = xlate( 'Count' )
md.volume = xlate( 'Volume' )
md.alternateSeries = xlate( 'AlternateSeries' )
md.alternateNumber = xlate( 'AlternateNumber' )
md.alternateCount = xlate( 'AlternateCount' )
md.comments = xlate( 'Summary' )
md.notes = xlate( 'Notes' )
md.year = xlate( 'Year' )
md.month = xlate( 'Month' )
md.publisher = xlate( 'Publisher' )
md.imprint = xlate( 'Imprint' )
md.genre = xlate( 'Genre' )
md.webLink = xlate( 'Web' )
md.language = xlate( 'LanguageISO' )
md.format = xlate( 'Format' )
md.manga = xlate( 'Manga' )
md.characters = xlate( 'Characters' )
md.teams = xlate( 'Teams' )
md.locations = xlate( 'Locations' )
md.pageCount = xlate( 'PageCount' )
md.scanInfo = xlate( 'ScanInformation' )
md.storyArc = xlate( 'StoryArc' )
md.seriesGroup = xlate( 'SeriesGroup' )
md.maturityRating = xlate( 'AgeRating' )
tmp = xlate( 'BlackAndWhite' )
md.blackAndWhite = False
if tmp is not None and tmp.lower() in [ "yes", "true", "1" ]:
md.blackAndWhite = True
# Now extract the credit info
for n in root:
if ( n.tag == 'Writer' or
n.tag == 'Penciller' or
n.tag == 'Inker' or
n.tag == 'Colorist' or
n.tag == 'Letterer' or
n.tag == 'Editor'
):
for name in n.text.split(','):
metadata.addCredit( name.strip(), n.tag )
if n.tag == 'CoverArtist':
for name in n.text.split(','):
metadata.addCredit( name.strip(), "Cover" )
# parse page data now
pages_node = root.find( "Pages" )
if pages_node is not None:
for page in pages_node:
metadata.pages.append( page.attrib )
#print page.attrib
metadata.isEmpty = False
return metadata
def writeToExternalFile( self, filename, metadata ):
tree = self.convertMetadataToXML( metadata )
#ET.dump(tree)
tree.write(filename, encoding='utf-8')
def readFromExternalFile( self, filename ):
tree = ET.parse( filename )
return self.convertXMLToMetadata( tree )
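Likewise for the converter above, an illustrative external-file round trip (not part of the diff; placeholder values, legacy Python 2-era API):
from genericmetadata import GenericMetadata   # legacy module imported at the top of this file

md = GenericMetadata()
md.series = "Example Series"
md.issue = "1"
md.addCredit("Jane Doe", "Writer")       # folded into a <Writer> element on write

cix = ComicInfoXml()
cix.writeToExternalFile("ComicInfo.xml", md)   # indented XML with a <ComicInfo> root
loaded = cix.readFromExternalFile("ComicInfo.xml")
print("%s #%s, credits: %s" % (loaded.series, loaded.issue, loaded.credits))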

View File

@ -1,507 +0,0 @@
#!/usr/bin/python
"""
A python script to tag comic archives
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import sys
import signal
import os
import traceback
import time
from pprint import pprint
import json
import platform
import locale
filename_encoding = sys.getfilesystemencoding()
try:
qt_available = True
from PyQt4 import QtCore, QtGui
from taggerwindow import TaggerWindow
except ImportError as e:
qt_available = False
from settings import ComicTaggerSettings
from options import Options, MetaDataStyle
from comicarchive import ComicArchive
from issueidentifier import IssueIdentifier
from genericmetadata import GenericMetadata
from comicvinetalker import ComicVineTalker, ComicVineTalkerException
from filerenamer import FileRenamer
from cbltransformer import CBLTransformer
import utils
import codecs
class MultipleMatch():
def __init__( self, filename, match_list):
self.filename = filename
self.matches = match_list
class OnlineMatchResults():
def __init__(self):
self.goodMatches = []
self.noMatches = []
self.multipleMatches = []
self.writeFailures = []
#-----------------------------
def actual_issue_data_fetch( match, settings ):
# now get the particular issue data
try:
cv_md = ComicVineTalker().fetchIssueData( match['volume_id'], match['issue_number'], settings )
except ComicVineTalkerException:
print "Network error while getting issue details. Save aborted"
return None
if settings.apply_cbl_transform_on_cv_import:
cv_md = CBLTransformer( cv_md, settings ).apply()
return cv_md
def actual_metadata_save( ca, opts, md ):
if not opts.dryrun:
# write out the new data
if not ca.writeMetadata( md, opts.data_style ):
print "The tag save seemed to fail!"
return False
else:
print "Save complete."
else:
if opts.terse:
print "dry-run option was set, so nothing was written"
else:
print "dry-run option was set, so nothing was written, but here is the final set of tags:"
print u"{0}".format(md)
return True
def post_process_matches( match_results, opts, settings ):
# now go through the match results
if opts.show_save_summary:
if len( match_results.goodMatches ) > 0:
print "\nSuccessful matches:"
print "------------------"
for f in match_results.goodMatches:
print f
if len( match_results.noMatches ) > 0:
print "\nNo matches:"
print "------------------"
for f in match_results.noMatches:
print f
if len( match_results.writeFailures ) > 0:
print "\nFile Write Failures:"
print "------------------"
for f in match_results.writeFailures:
print f
if not opts.show_save_summary and not opts.interactive:
# just quit if we're not interactive or showing the summary
return
if len( match_results.multipleMatches ) > 0:
print "\nMultiple matches:"
print "------------------"
for mm in match_results.multipleMatches:
print mm.filename
for (counter,m) in enumerate(mm.matches):
print u" {0}. {1} #{2} [{3}] ({4}/{5}) - {6}".format(counter,
m['series'],
m['issue_number'],
m['publisher'],
m['month'],
m['year'],
m['issue_title'])
if opts.interactive:
while True:
i = raw_input("Choose a match #, or 's' to skip: ")
if (i.isdigit() and int(i) in range(len(mm.matches))) or i == 's':
break
if i != 's':
# save the data!
# we know at this point, that the file is all good to go
ca = ComicArchive( mm.filename )
md = create_local_metadata( opts, ca, ca.hasMetadata(opts.data_style) )
cv_md = actual_issue_data_fetch(mm.matches[int(i)], settings)
md.overlay( cv_md )
actual_metadata_save( ca, opts, md )
print
def cli_mode( opts, settings ):
if len( opts.file_list ) < 1:
print "You must specify at least one filename. Use the -h option for more info"
return
match_results = OnlineMatchResults()
for f in opts.file_list:
f = f.decode(filename_encoding, 'replace')
process_file_cli( f, opts, settings, match_results )
sys.stdout.flush()
post_process_matches( match_results, opts, settings )
def create_local_metadata( opts, ca, has_desired_tags ):
md = GenericMetadata()
md.setDefaultPageList( ca.getNumberOfPages() )
if has_desired_tags:
md = ca.readMetadata( opts.data_style )
# now, overlay the parsed filename info
if opts.parse_filename:
md.overlay( ca.metadataFromFilename() )
# finally, use explicit stuff
if opts.metadata is not None:
md.overlay( opts.metadata )
return md
def process_file_cli( filename, opts, settings, match_results ):
batch_mode = len( opts.file_list ) > 1
ca = ComicArchive(filename)
if settings.rar_exe_path != "":
ca.setExternalRarProgram( settings.rar_exe_path )
if not ca.seemsToBeAComicArchive():
print "Sorry, but "+ filename + " is not a comic archive!"
return
#if not ca.isWritableForStyle( opts.data_style ) and ( opts.delete_tags or opts.save_tags or opts.rename_file ):
if not ca.isWritable( ) and ( opts.delete_tags or opts.copy_tags or opts.save_tags or opts.rename_file ):
print "This archive is not writable for that tag type"
return
has = [ False, False, False ]
if ca.hasCIX(): has[ MetaDataStyle.CIX ] = True
if ca.hasCBI(): has[ MetaDataStyle.CBI ] = True
if ca.hasCoMet(): has[ MetaDataStyle.COMET ] = True
if opts.print_tags:
if opts.data_style is None:
page_count = ca.getNumberOfPages()
brief = ""
if batch_mode:
brief = "{0}: ".format(filename)
if ca.isZip(): brief += "ZIP archive "
elif ca.isRar(): brief += "RAR archive "
elif ca.isFolder(): brief += "Folder archive "
brief += "({0: >3} pages)".format(page_count)
brief += " tags:[ "
if not ( has[ MetaDataStyle.CBI ] or has[ MetaDataStyle.CIX ] or has[ MetaDataStyle.COMET ] ):
brief += "none "
else:
if has[ MetaDataStyle.CBI ]: brief += "CBL "
if has[ MetaDataStyle.CIX ]: brief += "CR "
if has[ MetaDataStyle.COMET ]: brief += "CoMet "
brief += "]"
print brief
if opts.terse:
return
print
if opts.data_style is None or opts.data_style == MetaDataStyle.CIX:
if has[ MetaDataStyle.CIX ]:
print "------ComicRack tags--------"
if opts.raw:
print u"{0}".format(unicode(ca.readRawCIX(), errors='ignore'))
else:
print u"{0}".format(ca.readCIX())
if opts.data_style is None or opts.data_style == MetaDataStyle.CBI:
if has[ MetaDataStyle.CBI ]:
print "------ComicBookLover tags--------"
if opts.raw:
pprint(json.loads(ca.readRawCBI()))
else:
print u"{0}".format(ca.readCBI())
if opts.data_style is None or opts.data_style == MetaDataStyle.COMET:
if has[ MetaDataStyle.COMET ]:
print "------CoMet tags--------"
if opts.raw:
print u"{0}".format(ca.readRawCoMet())
else:
print u"{0}".format(ca.readCoMet())
elif opts.delete_tags:
style_name = MetaDataStyle.name[ opts.data_style ]
if has[ opts.data_style ]:
if not opts.dryrun:
if not ca.removeMetadata( opts.data_style ):
print "{0}: Tag removal seemed to fail!".format( filename )
else:
print "{0}: Removed {1} tags.".format( filename, style_name )
else:
print "{0}: dry-run. {1} tags not removed".format( filename, style_name )
else:
print "{0}: This archive doesn't have {1} tags to remove.".format( filename, style_name )
elif opts.copy_tags:
dst_style_name = MetaDataStyle.name[ opts.data_style ]
if opts.no_overwrite and has[ opts.data_style ]:
print "{0}: Already has {1} tags. Not overwriting.".format(filename, dst_style_name)
return
if opts.copy_source == opts.data_style:
print "{0}: Destination and source are same: {1}. Nothing to do.".format(filename, dst_style_name)
return
src_style_name = MetaDataStyle.name[ opts.copy_source ]
if has[ opts.copy_source ]:
if not opts.dryrun:
md = ca.readMetadata( opts.copy_source )
if settings.apply_cbl_transform_on_bulk_operation and opts.data_style == MetaDataStyle.CBI:
md = CBLTransformer( md, settings ).apply()
if not ca.writeMetadata( md, opts.data_style ):
print u"{0}: Tag copy seemed to fail!".format( filename )
else:
print u"{0}: Copied {1} tags to {2} .".format( filename, src_style_name, dst_style_name )
else:
print u"{0}: dry-run. {1} tags not copied".format( filename, src_style_name )
else:
print u"{0}: This archive doesn't have {1} tags to copy.".format( filename, src_style_name )
elif opts.save_tags:
if opts.no_overwrite and has[ opts.data_style ]:
print u"{0}: Already has {1} tags. Not overwriting.".format(filename, MetaDataStyle.name[ opts.data_style ])
return
if batch_mode:
print u"Processing {0}: ".format(filename)
md = create_local_metadata( opts, ca, has[ opts.data_style ] )
# now, search online
if opts.search_online:
if opts.issue_id is not None:
# we were given the actual ID to search with
try:
cv_md = ComicVineTalker().fetchIssueDataByIssueID( opts.issue_id, settings )
except ComicVineTalkerException:
print "Network error while getting issue details. Save aborted"
return None
if cv_md is None:
print "No match for ID {0} was found.".format(opts.issue_id)
return None
if settings.apply_cbl_transform_on_cv_import:
cv_md = CBLTransformer( cv_md, settings ).apply()
else:
ii = IssueIdentifier( ca, settings )
if md is None or md.isEmpty:
print "No metadata given to search online with!"
return
def myoutput( text ):
if opts.verbose:
IssueIdentifier.defaultWriteOutput( text )
# use our overlaid MD struct to search
ii.setAdditionalMetadata( md )
ii.onlyUseAdditionalMetaData = True
ii.setOutputFunction( myoutput )
ii.cover_page_index = md.getCoverPageIndexList()[0]
matches = ii.search()
result = ii.search_result
found_match = False
choices = False
low_confidence = False
if result == ii.ResultNoMatches:
pass
elif result == ii.ResultFoundMatchButBadCoverScore:
low_confidence = True
found_match = True
elif result == ii.ResultFoundMatchButNotFirstPage :
found_match = True
elif result == ii.ResultMultipleMatchesWithBadImageScores:
low_confidence = True
choices = True
elif result == ii.ResultOneGoodMatch:
found_match = True
elif result == ii.ResultMultipleGoodMatches:
choices = True
if choices:
print "Online search: Multiple matches. Save aborted"
match_results.multipleMatches.append(MultipleMatch(filename,matches))
return
if low_confidence and opts.abortOnLowConfidence:
print "Online search: Low confidence match. Save aborted"
match_results.noMatches.append(filename)
return
if not found_match:
print "Online search: No match found. Save aborted"
match_results.noMatches.append(filename)
return
# we got here, so we have a single match
# now get the particular issue data
cv_md = actual_issue_data_fetch(matches[0], settings)
if cv_md is None:
return
md.overlay( cv_md )
# ok, done building our metadata. time to save
if not actual_metadata_save( ca, opts, md ):
match_results.writeFailures.append(filename)
else:
match_results.goodMatches.append(filename)
elif opts.rename_file:
msg_hdr = ""
if batch_mode:
msg_hdr = u"{0}: ".format(filename)
if opts.data_style is not None:
use_tags = has[ opts.data_style ]
else:
use_tags = False
md = create_local_metadata( opts, ca, use_tags )
if md.series is None:
print msg_hdr + "Can't rename without series name"
return
new_ext = None # default
if settings.rename_extension_based_on_archive:
if ca.isZip():
new_ext = ".cbz"
elif ca.isRar():
new_ext = ".cbr"
renamer = FileRenamer( md )
renamer.setTemplate( settings.rename_template )
renamer.setIssueZeroPadding( settings.rename_issue_number_padding )
renamer.setSmartCleanup( settings.rename_use_smart_string_cleanup )
new_name = renamer.determineName( filename, ext=new_ext )
if new_name == os.path.basename(filename):
print msg_hdr + "Filename is already good!"
return
folder = os.path.dirname( os.path.abspath( filename ) )
new_abs_path = utils.unique_file( os.path.join( folder, new_name ) )
suffix = ""
if not opts.dryrun:
# rename the file
os.rename( filename, new_abs_path )
else:
suffix = " (dry-run, no change)"
print u"renamed '{0}' -> '{1}' {2}".format(os.path.basename(filename), new_name, suffix)
#-----------------------------
def main():
# try to make stdout encodings happy for unicode
if platform.system() == "Darwin":
preferred_encoding = "utf-8"
else:
preferred_encoding = locale.getpreferredencoding()
sys.stdout = codecs.getwriter(preferred_encoding)(sys.stdout)
opts = Options()
opts.parseCmdLineArgs()
settings = ComicTaggerSettings()
# make sure unrar program is in the path for the UnRAR class
utils.addtopath(os.path.dirname(settings.unrar_exe_path))
signal.signal(signal.SIGINT, signal.SIG_DFL)
if not qt_available and not opts.no_gui:
opts.no_gui = True
print "QT is not available."
if opts.no_gui:
cli_mode( opts, settings )
else:
app = QtGui.QApplication(sys.argv)
if platform.system() != "Linux":
img = QtGui.QPixmap(os.path.join(ComicTaggerSettings.baseDir(), 'graphics/tags.png' ))
splash = QtGui.QSplashScreen(img)
splash.show()
splash.raise_()
app.processEvents()
try:
tagger_window = TaggerWindow( opts.file_list, settings )
tagger_window.show()
if platform.system() != "Linux":
splash.finish( tagger_window )
sys.exit(app.exec_())
except Exception, e:
QtGui.QMessageBox.critical(QtGui.QMainWindow(), "Error", "Unhandled exception in app:\n" + traceback.format_exc() )
if __name__ == "__main__":
main()

View File

@ -0,0 +1 @@
from __future__ import annotations

View File

@ -0,0 +1,5 @@
from __future__ import annotations
from comictaggerlib.main import main
main()

View File

@ -0,0 +1,11 @@
from __future__ import annotations
import os
import comicapi.__pyinstaller
def get_hook_dirs() -> list[str]:
hooks = [os.path.dirname(__file__)]
hooks.extend(comicapi.__pyinstaller.get_hook_dirs())
return hooks
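For context (not part of the diff): PyInstaller 4+ discovers hook directories like the one above through the "pyinstaller40" entry-point group. A hedged sketch of how a build script could advertise it; the module path comictaggerlib.__pyinstaller is inferred from the imports above and may differ from the project's actual packaging:
from setuptools import setup

setup(
    name="comictagger",
    entry_points={
        "pyinstaller40": [
            # assumed module path; PyInstaller calls get_hook_dirs() and scans the returned dirs
            "hook-dirs = comictaggerlib.__pyinstaller:get_hook_dirs",
        ],
    },
)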

View File

@ -0,0 +1,8 @@
from __future__ import annotations
from PyInstaller.utils.hooks import collect_data_files, collect_entry_point, collect_submodules
datas, hiddenimports = collect_entry_point("comictagger.talker")
hiddenimports += collect_submodules("comictaggerlib")
datas += collect_data_files("comictaggerlib.ui")
datas += collect_data_files("comictaggerlib.graphics")

View File

@ -0,0 +1,7 @@
from __future__ import annotations
import os
from PyInstaller.utils.hooks import get_module_file_attribute
datas = [(os.path.join(os.path.dirname(get_module_file_attribute("wordninja")), "wordninja"), "wordninja")]

View File

@ -0,0 +1,57 @@
from __future__ import annotations
import logging
import pathlib
from PyQt6 import QtCore, QtGui, QtWidgets, uic
from comictaggerlib.ui import ui_path
logger = logging.getLogger(__name__)
class QTextEditLogger(QtCore.QObject, logging.Handler):
qlog = QtCore.pyqtSignal(str)
def __init__(self, formatter: logging.Formatter, level: int) -> None:
super().__init__()
self.setFormatter(formatter)
self.setLevel(level)
def emit(self, record: logging.LogRecord) -> None:
msg = self.format(record)
self.qlog.emit(msg.strip())
class ApplicationLogWindow(QtWidgets.QDialog):
def __init__(
self, log_folder: pathlib.Path, log_handler: QTextEditLogger, parent: QtCore.QObject | None = None
) -> None:
super().__init__(parent)
with (ui_path / "applicationlogwindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.log_handler = log_handler
self.log_handler.qlog.connect(self.textEdit.append)
f = QtGui.QFont("menlo")
f.setStyleHint(QtGui.QFont.StyleHint.Monospace)
self.setFont(f)
self._button = QtWidgets.QPushButton(self)
self._button.setText("Test Me")
self.log_folder = log_folder
self.lblLogLocation.setText(f'Log Location: <a href="file://{log_folder}">{log_folder}</a>')
layout = self.layout()
layout.addWidget(self._button)
# Connect signal to slot
self._button.clicked.connect(self.test)
self.textEdit.setTabStopDistance(self.textEdit.tabStopDistance() * 2)
def test(self) -> None:
logger.debug("damn, a bug")
logger.info("something to remember")
logger.warning("that's not right")
logger.error("foobar")

View File

@ -0,0 +1,278 @@
"""A PyQT4 dialog to select from automated issue matches"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import os
from typing import Callable
from PyQt6 import QtCore, QtGui, QtWidgets, uic
from comicapi.comicarchive import ComicArchive, tags
from comicapi.genericmetadata import GenericMetadata
from comictaggerlib.coverimagewidget import CoverImageWidget
from comictaggerlib.ctsettings import ct_ns
from comictaggerlib.md import prepare_metadata
from comictaggerlib.resulttypes import IssueResult, Result
from comictaggerlib.ui import ui_path
from comictalker.comictalker import ComicTalker, TalkerError
logger = logging.getLogger(__name__)
class AutoTagMatchWindow(QtWidgets.QDialog):
def __init__(
self,
parent: QtWidgets.QWidget,
match_set_list: list[Result],
read_tags: list[str],
fetch_func: Callable[[IssueResult], GenericMetadata],
config: ct_ns,
talker: ComicTalker,
) -> None:
super().__init__(parent)
with (ui_path / "matchselectionwindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.config = config
self.current_match_set: Result = match_set_list[0]
self.altCoverWidget = CoverImageWidget(
self.altCoverContainer, CoverImageWidget.AltCoverMode, config.Runtime_Options__config.user_cache_dir
)
gridlayout = QtWidgets.QGridLayout(self.altCoverContainer)
gridlayout.addWidget(self.altCoverWidget)
gridlayout.setContentsMargins(0, 0, 0, 0)
self.archiveCoverWidget = CoverImageWidget(self.archiveCoverContainer, CoverImageWidget.ArchiveMode, None)
gridlayout = QtWidgets.QGridLayout(self.archiveCoverContainer)
gridlayout.addWidget(self.archiveCoverWidget)
gridlayout.setContentsMargins(0, 0, 0, 0)
self.setWindowFlags(
QtCore.Qt.WindowType(
self.windowFlags()
| QtCore.Qt.WindowType.WindowSystemMenuHint
| QtCore.Qt.WindowType.WindowMaximizeButtonHint
)
)
self.skipButton = QtWidgets.QPushButton("Skip to Next")
self.buttonBox.addButton(self.skipButton, QtWidgets.QDialogButtonBox.ButtonRole.ActionRole)
self.buttonBox.button(QtWidgets.QDialogButtonBox.StandardButton.Ok).setText("Accept and Write Tags")
self.match_set_list = match_set_list
self._tags = read_tags
self.fetch_func = fetch_func
self.current_match_set_idx = 0
self.twList.currentItemChanged.connect(self.current_item_changed)
self.twList.cellDoubleClicked.connect(self.cell_double_clicked)
self.skipButton.clicked.connect(self.skip_to_next)
self.update_data()
def update_data(self) -> None:
self.current_match_set = self.match_set_list[self.current_match_set_idx]
if self.current_match_set_idx + 1 == len(self.match_set_list):
self.buttonBox.button(QtWidgets.QDialogButtonBox.StandardButton.Cancel).setDisabled(True)
self.skipButton.setText("Skip")
self.set_cover_image()
self.populate_table()
self.twList.resizeColumnsToContents()
self.twList.selectRow(0)
path = self.current_match_set.original_path
self.setWindowTitle(
"Select correct match or skip ({} of {}): {}".format(
self.current_match_set_idx + 1,
len(self.match_set_list),
os.path.split(path)[1],
)
)
def populate_table(self) -> None:
if not self.current_match_set:
return
self.twList.setRowCount(0)
self.twList.setSortingEnabled(False)
for row, match in enumerate(self.current_match_set.online_results):
self.twList.insertRow(row)
item_text = match.series
item = QtWidgets.QTableWidgetItem(item_text)
item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item.setData(QtCore.Qt.ItemDataRole.UserRole, (match,))
item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, 0, item)
if match.publisher is not None:
item_text = str(match.publisher)
else:
item_text = "Unknown"
item = QtWidgets.QTableWidgetItem(item_text)
item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, 1, item)
month_str = ""
year_str = "????"
if match.month is not None:
month_str = f"-{int(match.month):02d}"
if match.year is not None:
year_str = str(match.year)
item_text = year_str + month_str
item = QtWidgets.QTableWidgetItem(item_text)
item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, 2, item)
item_text = match.issue_title
if item_text is None:
item_text = ""
item = QtWidgets.QTableWidgetItem(item_text)
item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, 3, item)
self.twList.resizeColumnsToContents()
self.twList.setSortingEnabled(True)
self.twList.sortItems(2, QtCore.Qt.SortOrder.AscendingOrder)
self.twList.selectRow(0)
self.twList.resizeColumnsToContents()
self.twList.horizontalHeader().setStretchLastSection(True)
def cell_double_clicked(self, r: int, c: int) -> None:
self.accept()
def current_item_changed(self, curr: QtCore.QModelIndex, prev: QtCore.QModelIndex) -> None:
if curr is None:
return None
if prev is not None and prev.row() == curr.row():
return None
match = self.current_match()
self.altCoverWidget.set_issue_details(match.issue_id, [match.image_url, *match.alt_image_urls])
if match.description is None:
self.teDescription.setText("")
else:
self.teDescription.setText(match.description)
def set_cover_image(self) -> None:
ca = ComicArchive(
self.current_match_set.original_path, hash_archive=self.config.Runtime_Options__preferred_hash
)
self.archiveCoverWidget.set_archive(ca)
def current_match(self) -> IssueResult:
row = self.twList.currentRow()
match: IssueResult = self.twList.item(row, 0).data(QtCore.Qt.ItemDataRole.UserRole)[0]
return match
def accept(self) -> None:
self.save_match()
self.current_match_set_idx += 1
if self.current_match_set_idx == len(self.match_set_list):
# no more items
QtWidgets.QDialog.accept(self)
else:
self.update_data()
def skip_to_next(self) -> None:
self.current_match_set_idx += 1
if self.current_match_set_idx == len(self.match_set_list):
# no more items
QtWidgets.QDialog.reject(self)
else:
self.update_data()
def reject(self) -> None:
reply = QtWidgets.QMessageBox.question(
self,
"Cancel Matching",
"Are you sure you wish to cancel the matching process?",
QtWidgets.QMessageBox.StandardButton.Yes,
QtWidgets.QMessageBox.StandardButton.No,
)
if reply == QtWidgets.QMessageBox.StandardButton.No:
return
QtWidgets.QDialog.reject(self)
def save_match(self) -> None:
match = self.current_match()
ca = ComicArchive(
self.current_match_set.original_path, hash_archive=self.config.Runtime_Options__preferred_hash
)
md, error = self.parent().read_selected_tags(self._tags, ca)
if error is not None:
logger.error("Failed to load tags for %s: %s", ca.path, error)
QtWidgets.QApplication.restoreOverrideCursor()
QtWidgets.QMessageBox.critical(
self,
"Read Failed!",
f"One or more of the read tags failed to load for {ca.path}, check log for details",
)
return
if md.is_empty:
md = ca.metadata_from_filename(
self.config.Filename_Parsing__filename_parser,
self.config.Filename_Parsing__remove_c2c,
self.config.Filename_Parsing__remove_fcbd,
self.config.Filename_Parsing__remove_publisher,
)
# now get the particular issue data
try:
self.current_match_set.md = ct_md = self.fetch_func(match)
except TalkerError as e:
QtWidgets.QApplication.restoreOverrideCursor()
QtWidgets.QMessageBox.critical(self, f"{e.source} {e.code_name} Error", f"{e}")
return
if ct_md is None or ct_md.is_empty:
QtWidgets.QMessageBox.critical(self, "Network Issue", "Could not retrieve issue details!")
return
QtWidgets.QApplication.setOverrideCursor(QtGui.QCursor(QtCore.Qt.CursorShape.WaitCursor))
md = prepare_metadata(md, ct_md, self.config)
for tag_id in self._tags:
success = ca.write_tags(md, tag_id)
QtWidgets.QApplication.restoreOverrideCursor()
if not success:
QtWidgets.QMessageBox.warning(
self,
"Write Error",
f"Saving {tags[tag_id].name()} the tags to the archive seemed to fail!",
)
break
ca.reset_cache()

View File

@ -0,0 +1,71 @@
"""A PyQT4 dialog to show ID log and progress"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
from PyQt6 import QtCore, QtWidgets, uic
from comictaggerlib.coverimagewidget import CoverImageWidget
from comictaggerlib.ui import ui_path
from comictalker.comictalker import ComicTalker
logger = logging.getLogger(__name__)
class AutoTagProgressWindow(QtWidgets.QDialog):
def __init__(self, parent: QtWidgets.QWidget, talker: ComicTalker) -> None:
super().__init__(parent)
with (ui_path / "autotagprogresswindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.lblSourceName.setText(talker.attribution)
self.archiveCoverWidget = CoverImageWidget(self.archiveCoverContainer, CoverImageWidget.DataMode, None, False)
gridlayout = QtWidgets.QGridLayout(self.archiveCoverContainer)
gridlayout.addWidget(self.archiveCoverWidget)
gridlayout.setContentsMargins(0, 0, 0, 0)
self.testCoverWidget = CoverImageWidget(self.testCoverContainer, CoverImageWidget.DataMode, None, False)
gridlayout = QtWidgets.QGridLayout(self.testCoverContainer)
gridlayout.addWidget(self.testCoverWidget)
gridlayout.setContentsMargins(0, 0, 0, 0)
self.isdone = False
self.setWindowFlags(
QtCore.Qt.WindowType(
self.windowFlags()
| QtCore.Qt.WindowType.WindowSystemMenuHint
| QtCore.Qt.WindowType.WindowMaximizeButtonHint
)
)
def set_archive_image(self, img_data: bytes) -> None:
self.set_cover_image(img_data, self.archiveCoverWidget)
def set_test_image(self, img_data: bytes) -> None:
self.set_cover_image(img_data, self.testCoverWidget)
def set_cover_image(self, img_data: bytes, widget: CoverImageWidget) -> None:
widget.set_image_data(img_data)
QtCore.QCoreApplication.processEvents()
def reject(self) -> None:
QtWidgets.QDialog.reject(self)
self.isdone = True

View File

@ -0,0 +1,104 @@
"""A PyQT4 dialog to confirm and set config for auto-tag"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
from PyQt6 import QtCore, QtWidgets, uic
from comictaggerlib.ctsettings import ct_ns
from comictaggerlib.ui import ui_path
logger = logging.getLogger(__name__)
class AutoTagStartWindow(QtWidgets.QDialog):
def __init__(self, parent: QtWidgets.QWidget, config: ct_ns, msg: str) -> None:
super().__init__(parent)
with (ui_path / "autotagstartwindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.label.setText(msg)
self.setWindowFlags(
QtCore.Qt.WindowType(self.windowFlags() & ~QtCore.Qt.WindowType.WindowContextHelpButtonHint)
)
self.config = config
self.cbxSpecifySearchString.setChecked(False)
self.cbxSplitWords.setChecked(False)
self.sbNameMatchSearchThresh.setValue(self.config.Issue_Identifier__series_match_identify_thresh)
self.leSearchString.setEnabled(False)
self.cbxSaveOnLowConfidence.setChecked(self.config.Auto_Tag__save_on_low_confidence)
self.cbxDontUseYear.setChecked(not self.config.Auto_Tag__use_year_when_identifying)
self.cbxAssumeIssueOne.setChecked(self.config.Auto_Tag__assume_issue_one)
self.cbxIgnoreLeadingDigitsInFilename.setChecked(self.config.Auto_Tag__ignore_leading_numbers_in_filename)
self.cbxRemoveAfterSuccess.setChecked(self.config.internal__remove_archive_after_successful_match)
self.cbxAutoImprint.setChecked(self.config.Auto_Tag__auto_imprint)
nlmt_tip = """<html>The <b>Name Match Ratio Threshold: Auto-Identify</b> is for eliminating automatic
search matches that are too long compared to your series name search. The lower
it is, the more likely to have a good match, but each search will take longer and
use more bandwidth. Too high, and only the very closest matches will be explored.</html>"""
self.sbNameMatchSearchThresh.setToolTip(nlmt_tip)
ss_tip = """<html>
The <b>series search string</b> specifies the search string to be used for all selected archives.
Use this when trying to match archives with hard-to-parse or incorrect filenames. All archives selected
should be from the same series.
</html>"""
self.leSearchString.setToolTip(ss_tip)
self.cbxSpecifySearchString.setToolTip(ss_tip)
self.cbxSpecifySearchString.stateChanged.connect(self.search_string_toggle)
self.auto_save_on_low = False
self.dont_use_year = False
self.assume_issue_one = False
self.ignore_leading_digits_in_filename = False
self.remove_after_success = False
self.search_string = ""
self.name_length_match_tolerance = self.config.Issue_Identifier__series_match_search_thresh
self.split_words = self.cbxSplitWords.isChecked()
def search_string_toggle(self) -> None:
enable = self.cbxSpecifySearchString.isChecked()
self.leSearchString.setEnabled(enable)
def accept(self) -> None:
QtWidgets.QDialog.accept(self)
self.auto_save_on_low = self.cbxSaveOnLowConfidence.isChecked()
self.dont_use_year = self.cbxDontUseYear.isChecked()
self.assume_issue_one = self.cbxAssumeIssueOne.isChecked()
self.ignore_leading_digits_in_filename = self.cbxIgnoreLeadingDigitsInFilename.isChecked()
self.remove_after_success = self.cbxRemoveAfterSuccess.isChecked()
self.name_length_match_tolerance = self.sbNameMatchSearchThresh.value()
self.split_words = self.cbxSplitWords.isChecked()
# persist some settings
self.config.Auto_Tag__save_on_low_confidence = self.auto_save_on_low
self.config.Auto_Tag__use_year_when_identifying = not self.dont_use_year
self.config.Auto_Tag__assume_issue_one = self.assume_issue_one
self.config.Auto_Tag__ignore_leading_numbers_in_filename = self.ignore_leading_digits_in_filename
self.config.internal__remove_archive_after_successful_match = self.remove_after_success
if self.cbxSpecifySearchString.isChecked():
self.search_string = self.leSearchString.text()

View File

@ -0,0 +1,90 @@
"""A class to manage modifying metadata specifically for CBL/CBI"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
from comicapi.genericmetadata import Credit, GenericMetadata
from comictaggerlib.ctsettings import ct_ns
logger = logging.getLogger(__name__)
class CBLTransformer:
def __init__(self, metadata: GenericMetadata, config: ct_ns) -> None:
self.metadata = metadata.copy()
self.config = config
def apply(self) -> GenericMetadata:
if self.config.Metadata_Options__assume_lone_credit_is_primary:
# helper
def set_lone_primary(role_list: list[str]) -> tuple[Credit | None, int]:
lone_credit: Credit | None = None
count = 0
for c in self.metadata.credits:
if c.role.casefold() in role_list:
count += 1
lone_credit = c
if count > 1:
lone_credit = None
break
if lone_credit is not None:
lone_credit.primary = True
return lone_credit, count
# need to loop three times, once for 'writer', 'artist', and then
# 'penciler' if no artist
set_lone_primary(["writer"])
c, count = set_lone_primary(["artist"])
if c is None and count == 0:
c, count = set_lone_primary(["penciler", "penciller"])
if c is not None:
c.primary = False
self.metadata.add_credit(c.person, "Artist", True)
if self.config.Metadata_Options__copy_characters_to_tags:
self.metadata.tags.update(x for x in self.metadata.characters)
if self.config.Metadata_Options__copy_teams_to_tags:
self.metadata.tags.update(x for x in self.metadata.teams)
if self.config.Metadata_Options__copy_locations_to_tags:
self.metadata.tags.update(x for x in self.metadata.locations)
if self.config.Metadata_Options__copy_storyarcs_to_tags:
self.metadata.tags.update(x for x in self.metadata.story_arcs)
if self.config.Metadata_Options__copy_notes_to_comments:
if self.metadata.notes is not None:
if self.metadata.description is None:
self.metadata.description = ""
else:
self.metadata.description += "\n\n"
if self.metadata.notes not in self.metadata.description:
self.metadata.description += self.metadata.notes
if self.config.Metadata_Options__copy_weblink_to_comments:
for web_link in self.metadata.web_links:
temp_desc = self.metadata.description
if temp_desc is None:
temp_desc = ""
else:
temp_desc += "\n\n"
if web_link.url and web_link.url not in temp_desc:
self.metadata.description = temp_desc + web_link.url
return self.metadata
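A hedged usage sketch for the transformer above (not part of the diff). The SimpleNamespace is a stand-in for the real ct_ns settings object and only carries the attributes apply() reads; field values are placeholders:
from types import SimpleNamespace

from comicapi.genericmetadata import GenericMetadata
from comictaggerlib.cbltransformer import CBLTransformer   # path as imported by cli.py later in this diff

cfg = SimpleNamespace(
    Metadata_Options__assume_lone_credit_is_primary=True,
    Metadata_Options__copy_characters_to_tags=True,
    Metadata_Options__copy_teams_to_tags=False,
    Metadata_Options__copy_locations_to_tags=False,
    Metadata_Options__copy_storyarcs_to_tags=False,
    Metadata_Options__copy_notes_to_comments=False,
    Metadata_Options__copy_weblink_to_comments=False,
)

md = GenericMetadata()
md.add_credit("Jane Doe", "Writer")     # lone writer credit -> marked primary by apply()
md.characters = {"Hero", "Villain"}     # assumed to be a plain string collection; apply() only iterates it

new_md = CBLTransformer(md, cfg).apply()   # returns the transformed copy; characters now appear in new_md.tags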

comictaggerlib/cli.py (new file, 836 lines)
View File

@ -0,0 +1,836 @@
#!/usr/bin/python
"""ComicTagger CLI functions"""
#
# Copyright 2013 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import dataclasses
import functools
import json
import logging
import os
import pathlib
import re
import sys
from collections.abc import Collection
from typing import Any, TextIO
from comicapi import merge, utils
from comicapi.comicarchive import ComicArchive, tags
from comicapi.genericmetadata import GenericMetadata
from comictaggerlib.cbltransformer import CBLTransformer
from comictaggerlib.ctsettings import ct_ns
from comictaggerlib.filerenamer import FileRenamer, get_rename_dir
from comictaggerlib.graphics import graphics_path
from comictaggerlib.issueidentifier import IssueIdentifier
from comictaggerlib.md import prepare_metadata
from comictaggerlib.quick_tag import QuickTag
from comictaggerlib.resulttypes import Action, IssueResult, MatchStatus, OnlineMatchResults, Result, Status
from comictalker.comictalker import ComicTalker, TalkerError
logger = logging.getLogger(__name__)
class OutputEncoder(json.JSONEncoder):
def default(self, obj: Any) -> Any:
if isinstance(obj, pathlib.Path):
return str(obj)
if not isinstance(obj, str) and isinstance(obj, Collection):
return list(obj)
# Let the base class default method raise the TypeError
return json.JSONEncoder.default(self, obj)
class CLI:
def __init__(self, config: ct_ns, talkers: dict[str, ComicTalker]) -> None:
self.config = config
self.talkers = talkers
self.batch_mode = False
self.output_file = sys.stdout
if config.Runtime_Options__json:
self.output_file = sys.stderr
def current_talker(self) -> ComicTalker:
if self.config.Sources__source in self.talkers:
return self.talkers[self.config.Sources__source]
logger.error("Could not find the '%s' talker", self.config.Sources__source)
raise SystemExit(2)
def output(
self,
*args: Any,
file: TextIO | None = None,
force_output: bool = False,
already_logged: bool = False,
**kwargs: Any,
) -> None:
if file is None:
file = self.output_file
if not args:
log_args: tuple[Any, ...] = ("",)
elif isinstance(args[0], str):
if args[0] == "":
already_logged = True
log_args = (args[0].strip("\n"), *args[1:])
else:
log_args = args
if not already_logged:
logger.info(*log_args, **kwargs)
if self.config.Runtime_Options__verbose > 0:
return
if not self.config.Runtime_Options__quiet or force_output:
print(*args, **kwargs, file=file)
def run(self) -> int:
if len(self.config.Runtime_Options__files) < 1:
if self.config.Commands__command == Action.print:
res = self.print(None)
if res.status != Status.success:
return_code = 3
if self.config.Runtime_Options__json:
print(json.dumps(dataclasses.asdict(res), cls=OutputEncoder, indent=2))
return 0
logger.error("You must specify at least one filename. Use the -h option for more info")
return 1
return_code = 0
results: list[Result] = []
match_results = OnlineMatchResults()
self.batch_mode = len(self.config.Runtime_Options__files) > 1
for f in self.config.Runtime_Options__files:
res, match_results = self.process_file_cli(self.config.Commands__command, f, match_results)
results.append(res)
self.output("")
if results[-1].status != Status.success:
return_code = 3
if self.config.Runtime_Options__json:
print(json.dumps(dataclasses.asdict(results[-1]), cls=OutputEncoder, indent=2))
sys.stdout.flush()
sys.stderr.flush()
self.post_process_matches(match_results)
if self.config.Auto_Tag__online and results and results[-1].online_results:
self.output(
f"\nFiles tagged with metadata provided by {self.current_talker().name} {self.current_talker().website}",
)
return return_code
def fetch_metadata(self, issue_id: str) -> GenericMetadata:
# now get the particular issue data
try:
ct_md = self.current_talker().fetch_comic_data(issue_id)
except TalkerError as e:
logger.exception(f"Error retrieving issue details. Save aborted.\n{e}")
return GenericMetadata()
if self.config.Metadata_Options__apply_transform_on_import:
ct_md = CBLTransformer(ct_md, self.config).apply()
return ct_md
def write_tags(self, ca: ComicArchive, md: GenericMetadata) -> bool:
if not self.config.Runtime_Options__dryrun:
for tag_id in self.config.Runtime_Options__tags_write:
# write out the new data
if not ca.write_tags(md, tag_id):
logger.error("The tag save seemed to fail for: %s!", tags[tag_id].name())
return False
self.output("Save complete.")
else:
if self.config.Runtime_Options__quiet:
self.output("dry-run option was set, so nothing was written")
else:
self.output("dry-run option was set, so nothing was written, but here is the final set of tags:")
self.output(f"{md}")
return True
def display_match_set_for_choice(self, label: str, match_set: Result) -> None:
self.output(f"{match_set.original_path} -- {label}:", force_output=True)
# sort match list by year
match_set.online_results.sort(key=lambda k: k.year or 0)
for counter, m in enumerate(match_set.online_results, 1):
self.output(
" {}. {} #{} [{}] ({}/{}) - {}".format(
counter,
m.series,
m.issue_number,
m.publisher,
m.month,
m.year,
m.issue_title,
),
force_output=True,
)
if self.config.Runtime_Options__interactive:
while True:
i = input("Choose a match #, or 's' to skip: ")
if (i.isdigit() and int(i) in range(1, len(match_set.online_results) + 1)) or i == "s":
break
if i != "s":
# save the data!
# we know at this point, that the file is all good to go
ca = ComicArchive(match_set.original_path, hash_archive=self.config.Runtime_Options__preferred_hash)
md, match_set.tags_read = self.create_local_metadata(ca, self.config.Runtime_Options__tags_read)
ct_md = self.fetch_metadata(match_set.online_results[int(i) - 1].issue_id)
match_set.md = prepare_metadata(md, ct_md, self.config)
self.write_tags(ca, match_set.md)
def post_process_matches(self, match_results: OnlineMatchResults) -> None:
def print_header(header: str) -> None:
self.output("", force_output=True)
self.output(header, force_output=True)
self.output("------------------", force_output=True)
# now go through the match results
if self.config.Runtime_Options__summary:
if len(match_results.good_matches) > 0:
print_header("Successful matches:")
for f in match_results.good_matches:
self.output(f, force_output=True)
if len(match_results.no_matches) > 0:
print_header("No matches:")
for f in match_results.no_matches:
self.output(f, force_output=True)
if len(match_results.write_failures) > 0:
print_header("File Write Failures:")
for f in match_results.write_failures:
self.output(f, force_output=True)
if len(match_results.fetch_data_failures) > 0:
print_header("Network Data Fetch Failures:")
for f in match_results.fetch_data_failures:
self.output(f, force_output=True)
if not self.config.Runtime_Options__summary and not self.config.Runtime_Options__interactive:
# just quit if we're not interactive or showing the summary
return
if len(match_results.multiple_matches) > 0:
self.output("\nArchives with multiple high-confidence matches:\n------------------", force_output=True)
for match_set in match_results.multiple_matches:
self.display_match_set_for_choice("Multiple high-confidence matches", match_set)
if len(match_results.low_confidence_matches) > 0:
self.output("\nArchives with low-confidence matches:\n------------------", force_output=True)
for match_set in match_results.low_confidence_matches:
if len(match_set.online_results) == 1:
label = "Single low-confidence match"
else:
label = "Multiple low-confidence matches"
self.display_match_set_for_choice(label, match_set)
def create_local_metadata(
self, ca: ComicArchive, tags_to_read: list[str], /, tags_only: bool = False
) -> tuple[GenericMetadata, list[str]]:
md = GenericMetadata()
md.apply_default_page_list(ca.get_page_name_list())
filename_md = GenericMetadata()
# now, overlay the parsed filename info
if self.config.Auto_Tag__parse_filename and not tags_only:
filename_md = ca.metadata_from_filename(
self.config.Filename_Parsing__filename_parser,
self.config.Filename_Parsing__remove_c2c,
self.config.Filename_Parsing__remove_fcbd,
self.config.Filename_Parsing__remove_publisher,
self.config.Filename_Parsing__split_words,
self.config.Filename_Parsing__allow_issue_start_with_letter,
self.config.Filename_Parsing__protofolius_issue_number_scheme,
)
file_md = GenericMetadata()
tags_used = []
for tag_id in tags_to_read:
if ca.has_tags(tag_id):
try:
t_md = ca.read_tags(tag_id)
if not t_md.is_empty:
file_md.overlay(
t_md,
self.config.Metadata_Options__tag_merge,
self.config.Metadata_Options__tag_merge_lists,
)
tags_used.append(tag_id)
except Exception as e:
logger.error("Failed to load metadata for %s: %s", ca.path, e)
filename_merge = merge.Mode.ADD_MISSING
if self.config.Auto_Tag__prefer_filename:
filename_merge = merge.Mode.OVERLAY
md.overlay(file_md, mode=merge.Mode.OVERLAY, merge_lists=False)
if not tags_only:
md.overlay(filename_md, mode=filename_merge, merge_lists=False)
# finally, use explicit stuff (always 'overlay' mode)
md.overlay(self.config.Auto_Tag__metadata, mode=merge.Mode.OVERLAY, merge_lists=True)
return (md, tags_used)
def print(self, ca: ComicArchive | None) -> Result:
md = None
if ca is None:
if not self.config.Auto_Tag__metadata.is_empty:
self.output("--------- CLI tags ---------")
self.output(self.config.Auto_Tag__metadata)
return Result(Action.print, Status.success, None, md=md) # type: ignore
if not self.config.Runtime_Options__tags_read:
page_count = ca.get_number_of_pages()
brief = ""
if self.batch_mode:
brief = f"{ca.path}: "
brief += ca.archiver.name() + " archive "
brief += f"({page_count: >3} pages)"
brief += " tags:[ "
tag_names = [tags[tag_id].name() for tag_id in tags if ca.has_tags(tag_id)]
brief += " ".join(tag_names)
brief += " ]"
self.output(brief)
if self.config.Runtime_Options__quiet:
return Result(Action.print, Status.success, ca.path)
self.output()
for tag_id, tag in tags.items():
if not self.config.Runtime_Options__tags_read or tag_id in self.config.Runtime_Options__tags_read:
if ca.has_tags(tag_id):
self.output(f"--------- {tag.name()} tags ---------")
try:
if self.config.Runtime_Options__raw:
self.output(ca.read_raw_tags(tag_id))
else:
md = ca.read_tags(tag_id)
self.output(md)
except Exception as e:
logger.error("Failed to read tags from %s: %s", ca.path, e)
if not self.config.Auto_Tag__metadata.is_empty and not self.config.Runtime_Options__raw:
try:
md, tags_read = self.create_local_metadata(
ca, self.config.Runtime_Options__tags_read or list(tags.keys())
)
tags_read_names = ", ".join(["CLI"] + [tags[t].name() for t in tags_read])
self.output(f"--------- Combined {tags_read_names} tags ---------")
self.output(md)
except Exception as e:
logger.error("Failed to read tags from %s: %s", ca.path, e)
return Result(Action.print, Status.success, ca.path, md=md)
def delete_tags(self, ca: ComicArchive, tag_id: str) -> Status:
tag_name = tags[tag_id].name()
if ca.has_tags(tag_id):
if not self.config.Runtime_Options__dryrun:
if ca.remove_tags(tag_id):
self.output(f"{ca.path}: Removed {tag_name} tags.")
return Status.success
else:
self.output(f"{ca.path}: Tag removal seemed to fail!")
return Status.write_failure
else:
self.output(f"{ca.path}: dry-run. {tag_name} tags not removed")
return Status.success
self.output(f"{ca.path}: This archive doesn't have {tag_name} tags to remove.")
return Status.success
def delete(self, ca: ComicArchive) -> Result:
res = Result(Action.delete, Status.success, ca.path)
for tag_id in self.config.Runtime_Options__tags_write:
status = self.delete_tags(ca, tag_id)
if status == Status.success:
res.tags_deleted.append(tag_id)
else:
res.status = status
return res
def _copy_tags(self, ca: ComicArchive, md: GenericMetadata, source_names: str, dst_tag_id: str) -> Status:
dst_tag_name = tags[dst_tag_id].name()
if self.config.Runtime_Options__skip_existing_tags and ca.has_tags(dst_tag_id):
self.output(f"{ca.path}: Already has {dst_tag_name} tags. Not overwriting.")
return Status.existing_tags
if len(self.config.Commands__copy) == 1 and dst_tag_id in self.config.Commands__copy:
self.output(f"{ca.path}: Destination and source are same: {dst_tag_name}. Nothing to do.")
return Status.existing_tags
if not self.config.Runtime_Options__dryrun:
if self.config.Metadata_Options__apply_transform_on_bulk_operation and dst_tag_id == "cbi":
md = CBLTransformer(md, self.config).apply()
if ca.write_tags(md, dst_tag_id):
self.output(f"{ca.path}: Copied {source_names} tags to {dst_tag_name}.")
else:
self.output(f"{ca.path}: Tag copy seemed to fail!")
return Status.write_failure
else:
self.output(f"{ca.path}: dry-run. {source_names} tags not copied")
return Status.success
def copy(self, ca: ComicArchive) -> Result:
res = Result(Action.copy, Status.success, ca.path)
src_tag_names = []
for src_tag_id in self.config.Commands__copy:
src_tag_names.append(tags[src_tag_id].name())
if ca.has_tags(src_tag_id):
res.tags_read.append(src_tag_id)
if not res.tags_read:
self.output(f"{ca.path}: This archive doesn't have any {', '.join(src_tag_names)} tags to copy.")
res.status = Status.read_failure
return res
try:
res.md, res.tags_read = self.create_local_metadata(ca, res.tags_read, tags_only=True)
except Exception as e:
logger.error("Failed to read tags from %s: %s", ca.path, e)
return res
for dst_tag_id in self.config.Runtime_Options__tags_write:
if dst_tag_id in self.config.Commands__copy:
continue
status = self._copy_tags(ca, res.md, ", ".join(src_tag_names), dst_tag_id)
if status == Status.success:
res.tags_written.append(dst_tag_id)
else:
res.status = status
return res
def try_quick_tag(self, ca: ComicArchive, md: GenericMetadata) -> GenericMetadata | None:
if not self.config.Runtime_Options__enable_quick_tag:
self.output("skipping quick tag")
return None
self.output("starting quick tag")
try:
qt = QuickTag(
self.config.Quick_Tag__url,
str(utils.parse_url(self.current_talker().website).host),
self.current_talker(),
self.config,
self.output,
)
ct_md = qt.id_comic(
ca,
md,
set(self.config.Quick_Tag__hash),
self.config.Quick_Tag__exact_only,
self.config.Runtime_Options__interactive,
self.config.Quick_Tag__aggressive_filtering,
self.config.Quick_Tag__max,
)
if ct_md is None:
ct_md = GenericMetadata()
return ct_md
except Exception:
logger.exception("Quick Tagging failed")
return None
def normal_tag(
self, ca: ComicArchive, tags_read: list[str], md: GenericMetadata, match_results: OnlineMatchResults
) -> tuple[GenericMetadata, list[IssueResult], Result | None, OnlineMatchResults]:
# ct_md, results, matches, match_results
if md is None or md.is_empty:
logger.error("No metadata given to search online with!")
res = Result(
Action.save,
status=Status.match_failure,
original_path=ca.path,
match_status=MatchStatus.no_match,
tags_written=self.config.Runtime_Options__tags_write,
tags_read=tags_read,
)
match_results.no_matches.append(res)
return GenericMetadata(), [], res, match_results
ii = IssueIdentifier(ca, self.config, self.current_talker())
ii.set_output_function(functools.partial(self.output, already_logged=True))
if not self.config.Auto_Tag__use_year_when_identifying:
md.year = None
if self.config.Auto_Tag__ignore_leading_numbers_in_filename and md.series is not None:
md.series = re.sub(r"^([\d.]+)(.*)", r"\2", md.series)
result, matches = ii.identify(ca, md)
found_match = False
choices = False
low_confidence = False
if result == IssueIdentifier.result_no_matches:
pass
elif result == IssueIdentifier.result_found_match_but_bad_cover_score:
low_confidence = True
found_match = True
elif result == IssueIdentifier.result_found_match_but_not_first_page:
found_match = True
elif result == IssueIdentifier.result_multiple_matches_with_bad_image_scores:
low_confidence = True
choices = True
elif result == IssueIdentifier.result_one_good_match:
found_match = True
elif result == IssueIdentifier.result_multiple_good_matches:
choices = True
if choices:
if low_confidence:
logger.error("Online search: Multiple low confidence matches. Save aborted")
res = Result(
Action.save,
status=Status.match_failure,
original_path=ca.path,
online_results=matches,
match_status=MatchStatus.low_confidence_match,
tags_written=self.config.Runtime_Options__tags_write,
tags_read=tags_read,
)
match_results.low_confidence_matches.append(res)
return GenericMetadata(), matches, res, match_results
logger.error("Online search: Multiple good matches. Save aborted")
res = Result(
Action.save,
status=Status.match_failure,
original_path=ca.path,
online_results=matches,
match_status=MatchStatus.multiple_match,
tags_written=self.config.Runtime_Options__tags_write,
tags_read=tags_read,
)
match_results.multiple_matches.append(res)
return GenericMetadata(), matches, res, match_results
if low_confidence and self.config.Runtime_Options__abort_on_low_confidence:
logger.error("Online search: Low confidence match. Save aborted")
res = Result(
Action.save,
status=Status.match_failure,
original_path=ca.path,
online_results=matches,
match_status=MatchStatus.low_confidence_match,
tags_written=self.config.Runtime_Options__tags_write,
tags_read=tags_read,
)
match_results.low_confidence_matches.append(res)
return GenericMetadata(), matches, res, match_results
if not found_match:
logger.error("Online search: No match found. Save aborted")
res = Result(
Action.save,
status=Status.match_failure,
original_path=ca.path,
online_results=matches,
match_status=MatchStatus.no_match,
tags_written=self.config.Runtime_Options__tags_write,
tags_read=tags_read,
)
match_results.no_matches.append(res)
return GenericMetadata(), matches, res, match_results
# we got here, so we have a single match
# now get the particular issue data
ct_md = self.fetch_metadata(matches[0].issue_id)
if ct_md.is_empty:
res = Result(
Action.save,
status=Status.fetch_data_failure,
original_path=ca.path,
online_results=matches,
match_status=MatchStatus.good_match,
tags_written=self.config.Runtime_Options__tags_write,
tags_read=tags_read,
)
match_results.fetch_data_failures.append(res)
return GenericMetadata(), matches, res, match_results
return ct_md, matches, None, match_results
def save(self, ca: ComicArchive, match_results: OnlineMatchResults) -> tuple[Result, OnlineMatchResults]:
if self.config.Runtime_Options__skip_existing_tags:
for tag_id in self.config.Runtime_Options__tags_write:
if ca.has_tags(tag_id):
self.output(f"{ca.path}: Already has {tags[tag_id].name()} tags. Not overwriting.")
return (
Result(
Action.save,
original_path=ca.path,
status=Status.existing_tags,
tags_written=self.config.Runtime_Options__tags_write,
),
match_results,
)
if self.batch_mode:
self.output(f"Processing {utils.path_to_short_str(ca.path)}...")
md, tags_read = self.create_local_metadata(ca, self.config.Runtime_Options__tags_read)
matches: list[IssueResult] = []
# now, search online
ct_md = GenericMetadata()
if self.config.Auto_Tag__online:
if self.config.Auto_Tag__issue_id is not None:
# we were given the actual issue ID to search with
try:
ct_md = self.current_talker().fetch_comic_data(self.config.Auto_Tag__issue_id)
except TalkerError as e:
logger.exception(f"Error retrieving issue details. Save aborted.\n{e}")
res = Result(
Action.save,
original_path=ca.path,
status=Status.fetch_data_failure,
tags_written=self.config.Runtime_Options__tags_write,
tags_read=tags_read,
)
match_results.fetch_data_failures.append(res)
return res, match_results
if ct_md is None or ct_md.is_empty:
logger.error("No match for ID %s was found.", self.config.Auto_Tag__issue_id)
res = Result(
Action.save,
status=Status.match_failure,
original_path=ca.path,
match_status=MatchStatus.no_match,
tags_written=self.config.Runtime_Options__tags_write,
tags_read=tags_read,
)
match_results.no_matches.append(res)
return res, match_results
else:
query_md = md.copy()
qt_md = self.try_quick_tag(ca, query_md)
if query_md.issue is None or query_md.issue == "":
if self.config.Auto_Tag__assume_issue_one:
query_md.issue = "1"
if qt_md is None or qt_md.is_empty:
if qt_md is not None:
self.output("Failed to find match via quick tag")
ct_md, matches, res, match_results = self.normal_tag(ca, tags_read, query_md, match_results) # type: ignore[assignment]
if res is not None:
return res, match_results
else:
self.output("Successfully matched via quick tag")
ct_md = qt_md
matches = [
IssueResult(
series=ct_md.series or "",
distance=-1,
issue_number=ct_md.issue or "",
issue_count=ct_md.issue_count,
url_image_hash=-1,
issue_title=ct_md.title or "",
issue_id=ct_md.issue_id or "",
series_id=ct_md.series_id or "",
month=ct_md.month,
year=ct_md.year,
publisher=None,
image_url=str(ct_md._cover_image or ""),
alt_image_urls=[],
description=ct_md.description or "",
)
]
res = Result(
Action.save,
status=Status.success,
original_path=ca.path,
online_results=matches,
match_status=MatchStatus.good_match,
md=prepare_metadata(md, ct_md, self.config),
tags_written=self.config.Runtime_Options__tags_write,
tags_read=tags_read,
)
assert res.md
# ok, done building our metadata. time to save
if self.write_tags(ca, res.md):
match_results.good_matches.append(res)
else:
res.status = Status.write_failure
match_results.write_failures.append(res)
return res, match_results
def rename(self, ca: ComicArchive) -> Result:
original_path = ca.path
msg_hdr = ""
if self.batch_mode:
msg_hdr = f"{ca.path}: "
md, tags_read = self.create_local_metadata(ca, self.config.Runtime_Options__tags_read)
if md.series is None:
logger.error(msg_hdr + "Can't rename without series name")
return Result(Action.rename, Status.read_failure, original_path)
new_ext = "" # default
if self.config.File_Rename__auto_extension:
new_ext = ca.extension()
renamer = FileRenamer(
None,
platform="universal" if self.config.File_Rename__strict_filenames else "auto",
replacements=self.config.File_Rename__replacements,
)
renamer.set_metadata(md, ca.path.name)
renamer.set_template(self.config.File_Rename__template)
renamer.set_issue_zero_padding(self.config.File_Rename__issue_number_padding)
renamer.set_smart_cleanup(self.config.File_Rename__use_smart_string_cleanup)
renamer.move = self.config.File_Rename__move
renamer.move_only = self.config.File_Rename__only_move
try:
new_name = renamer.determine_name(ext=new_ext)
except ValueError:
logger.exception(
msg_hdr
+ "Invalid format string!\n"
+ "Your rename template is invalid!\n\n"
+ "%s\n\n"
+ "Please consult the template help in the settings "
+ "and the documentation on the format at "
+ "https://docs.python.org/3/library/string.html#format-string-syntax",
self.config.File_Rename__template,
)
return Result(Action.rename, Status.rename_failure, original_path, md=md)
except Exception:
logger.exception("Formatter failure: %s metadata: %s", self.config.File_Rename__template, renamer.metadata)
return Result(Action.rename, Status.rename_failure, original_path, md=md)
folder = get_rename_dir(ca, self.config.File_Rename__dir if self.config.File_Rename__move else None)
full_path = folder / new_name
if full_path == ca.path:
self.output(msg_hdr + "Filename is already good!")
return Result(Action.rename, Status.success, original_path, full_path, md=md)
suffix = ""
if not self.config.Runtime_Options__dryrun:
# rename the file
try:
ca.rename(utils.unique_file(full_path))
except OSError:
logger.exception("Failed to rename comic archive: %s", ca.path)
return Result(Action.rename, Status.write_failure, original_path, full_path, md=md)
else:
suffix = " (dry-run, no change)"
self.output(f"renamed '{original_path.name}' -> '{new_name}' {suffix}")
return Result(Action.rename, Status.success, original_path, tags_read=tags_read, md=md)
def export(self, ca: ComicArchive) -> Result:
msg_hdr = ""
if self.batch_mode:
msg_hdr = f"{ca.path}: "
if ca.is_zip():
logger.error(msg_hdr + "Archive is already a zip file.")
return Result(Action.export, Status.success, ca.path)
filename_path = ca.path
new_file = filename_path.with_suffix(".cbz")
if self.config.Runtime_Options__abort_on_conflict and new_file.exists():
self.output(msg_hdr + f"{new_file.name} already exists in the that folder.")
return Result(Action.export, Status.write_failure, ca.path)
new_file = utils.unique_file(new_file)
delete_success = False
export_success = False
if not self.config.Runtime_Options__dryrun:
if ca.export_as_zip(new_file):
export_success = True
if self.config.Runtime_Options__delete_original:
try:
filename_path.unlink(missing_ok=True)
delete_success = True
except OSError:
logger.exception(msg_hdr + "Error deleting original archive after export")
else:
# last export failed, so remove the zip, if it exists
new_file.unlink(missing_ok=True)
else:
msg = msg_hdr + f"Dry-run: Would try to create {os.path.split(new_file)[1]}"
if self.config.Runtime_Options__delete_original:
msg += " and delete original."
self.output(msg)
return Result(Action.export, Status.success, ca.path, new_file)
msg = msg_hdr
if export_success:
msg += f"Archive exported successfully to: {os.path.split(new_file)[1]}"
if self.config.Runtime_Options__delete_original and delete_success:
msg += " (Original deleted) "
else:
msg += "Archive failed to export!"
self.output(msg)
return Result(Action.export, Status.success, ca.path, new_file)
def process_file_cli(
self, command: Action, filename: str, match_results: OnlineMatchResults
) -> tuple[Result, OnlineMatchResults]:
if not os.path.lexists(filename):
logger.error("Cannot find %s", filename)
return Result(command, Status.read_failure, pathlib.Path(filename)), match_results
ca = ComicArchive(
filename, str(graphics_path / "nocover.png"), hash_archive=self.config.Runtime_Options__preferred_hash
)
if not ca.seems_to_be_a_comic_archive():
logger.error("Sorry, but %s is not a comic archive!", filename)
return Result(command, Status.read_failure, ca.path), match_results
if not ca.is_writable() and (command in (Action.delete, Action.copy, Action.save, Action.rename)):
logger.error("This archive is not writable")
return Result(command, Status.write_permission_failure, ca.path), match_results
if command == Action.print:
return self.print(ca), match_results
elif command == Action.delete:
return self.delete(ca), match_results
elif command == Action.copy:
return self.copy(ca), match_results
elif command == Action.save:
return self.save(ca, match_results)
elif command == Action.rename:
return self.rename(ca), match_results
elif command == Action.export:
return self.export(ca), match_results
return Result(None, Status.read_failure, ca.path), match_results # type: ignore[arg-type]
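# Editor's illustrative sketch (not part of the original file): the same archive checks
# performed at the top of process_file_cli, runnable on their own. The file path is
# hypothetical.
def _example_open_archive(filename: str = "example.cbz") -> ComicArchive | None:
    if not os.path.lexists(filename):
        return None
    ca = ComicArchive(filename, str(graphics_path / "nocover.png"))
    return ca if ca.seems_to_be_a_comic_archive() else None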


@ -0,0 +1,308 @@
"""A PyQt6 widget to display cover images
Display cover images from either a local archive, or from comic source metadata.
TODO: This should be re-factored using subclasses!
"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import pathlib
from PyQt6 import QtCore, QtGui, QtWidgets, uic
from comicapi.comicarchive import ComicArchive
from comictaggerlib.imagefetcher import ImageFetcher
from comictaggerlib.imagepopup import ImagePopup
from comictaggerlib.pageloader import PageLoader
from comictaggerlib.ui import ui_path
from comictaggerlib.ui.qtutils import get_qimage_from_data
logger = logging.getLogger(__name__)
def clickable(widget: QtWidgets.QWidget) -> QtCore.pyqtBoundSignal:
"""Allow a label to be clickable"""
class Filter(QtCore.QObject):
dblclicked = QtCore.pyqtSignal()
def eventFilter(self, obj: QtCore.QObject, event: QtCore.QEvent) -> bool:
if obj == widget:
if event.type() == QtCore.QEvent.Type.MouseButtonDblClick:
self.dblclicked.emit()
return True
return False
flt = Filter(widget)
widget.installEventFilter(flt)
return flt.dblclicked
class CoverImageWidget(QtWidgets.QWidget):
ArchiveMode = 0
AltCoverMode = 1
URLMode = 1
DataMode = 3
image_fetch_complete = QtCore.pyqtSignal(str, QtCore.QByteArray)
def __init__(
self,
parent: QtWidgets.QWidget,
mode: int,
cache_folder: pathlib.Path | None,
blur: bool = False,
expand_on_click: bool = True,
) -> None:
super().__init__(parent)
if mode not in (self.AltCoverMode, self.URLMode) or cache_folder is None:
self.cover_fetcher = None
self.talker = None
else:
self.cover_fetcher = ImageFetcher(cache_folder)
self.talker = None
with (ui_path / "coverimagewidget.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.cache_folder = cache_folder
self.mode: int = mode
self.page_loader: PageLoader | None = None
self.showControls = True
self.blur = blur
self.scene = QtWidgets.QGraphicsScene(parent=self)
self.current_pixmap = QtGui.QPixmap()
self.comic_archive: ComicArchive | None = None
self.issue_id: str = ""
self.issue_url: str | None = None
self.url_list: list[str] = []
if self.page_loader is not None:
self.page_loader.abandoned = True
self.page_loader = None
self.imageIndex = -1
self.imageCount = 1
self.imageData = b""
self.btnLeft.setIcon(QtGui.QIcon(":/graphics/left.png"))
self.btnRight.setIcon(QtGui.QIcon(":/graphics/right.png"))
self.btnLeft.clicked.connect(self.decrement_image)
self.btnRight.clicked.connect(self.increment_image)
self.image_fetch_complete.connect(self.cover_remote_fetch_complete)
if expand_on_click:
clickable(self.graphicsView).connect(self.show_popup)
else:
self.graphicsView.setToolTip("")
self.graphicsView.setScene(self.scene)
self.update_content()
def reset_widget(self) -> None:
self.comic_archive = None
self.issue_id = ""
self.issue_url = None
self.url_list = []
if self.page_loader is not None:
self.page_loader.abandoned = True
self.page_loader = None
self.imageIndex = -1
self.imageCount = 1
self.imageData = b""
def clear(self) -> None:
self.reset_widget()
self.update_content()
def increment_image(self) -> None:
self.imageIndex += 1
if self.imageIndex == self.imageCount:
self.imageIndex = 0
self.update_content()
def decrement_image(self) -> None:
self.imageIndex -= 1
if self.imageIndex == -1:
self.imageIndex = self.imageCount - 1
self.update_content()
def set_archive(self, ca: ComicArchive, page: int = 0) -> None:
if self.mode == CoverImageWidget.ArchiveMode:
self.reset_widget()
self.comic_archive = ca
self.imageIndex = page
self.imageCount = ca.get_number_of_pages()
self.update_content()
def set_url(self, url: str) -> None:
if self.mode == CoverImageWidget.URLMode:
self.reset_widget()
self.update_content()
self.url_list = [url]
self.imageIndex = 0
self.imageCount = 1
self.update_content()
def set_issue_details(self, issue_id: str, url_list: list[str]) -> None:
if self.mode == CoverImageWidget.AltCoverMode:
self.reset_widget()
self.update_content()
self.issue_id = issue_id
self.set_url_list(url_list)
def set_image_data(self, image_data: bytes) -> None:
if self.mode == CoverImageWidget.DataMode:
self.reset_widget()
if image_data:
self.imageIndex = 0
self.imageData = image_data
else:
self.imageIndex = -1
self.update_content()
def set_url_list(self, url_list: list[str]) -> None:
self.url_list = url_list
self.imageIndex = 0
self.imageCount = len(self.url_list)
self.update_content()
self.update_controls()
def set_page(self, pagenum: int) -> None:
if self.mode == CoverImageWidget.ArchiveMode:
self.imageIndex = pagenum
self.update_content()
def update_content(self) -> None:
self.update_image()
self.update_controls()
def update_image(self) -> None:
if self.imageIndex == -1:
self.load_default()
elif self.mode in [CoverImageWidget.AltCoverMode, CoverImageWidget.URLMode]:
self.load_url()
elif self.mode == CoverImageWidget.DataMode:
self.cover_remote_fetch_complete("", self.imageData)
else:
self.load_page()
def update_controls(self) -> None:
if not self.showControls or self.mode == CoverImageWidget.DataMode:
self.btnLeft.hide()
self.btnRight.hide()
self.label.hide()
return
if self.imageIndex == -1 or self.imageCount == 1:
self.btnLeft.setEnabled(False)
self.btnRight.setEnabled(False)
self.btnLeft.hide()
self.btnRight.hide()
else:
self.btnLeft.setEnabled(True)
self.btnRight.setEnabled(True)
self.btnLeft.show()
self.btnRight.show()
if self.imageIndex == -1 or self.imageCount == 1:
self.label.setText("")
elif self.mode == CoverImageWidget.AltCoverMode:
self.label.setText(f"Cover {self.imageIndex + 1} (of {self.imageCount})")
else:
self.label.setText(f"Page {self.imageIndex + 1} (of {self.imageCount})")
def load_url(self) -> None:
assert isinstance(self.cache_folder, pathlib.Path)
self.load_default()
self.cover_fetcher = ImageFetcher(self.cache_folder)
ImageFetcher.image_fetch_complete = self.image_fetch_complete.emit
data = self.cover_fetcher.fetch(self.url_list[self.imageIndex])
if data:
self.cover_remote_fetch_complete(self.url_list[self.imageIndex], data)
# called when the image is done loading from internet
def cover_remote_fetch_complete(self, url: str, image_data: bytes) -> None:
if url and url not in self.url_list:
return
img = get_qimage_from_data(image_data)
self.current_pixmap = QtGui.QPixmap.fromImage(img)
self.set_display_pixmap()
def load_page(self) -> None:
if self.comic_archive is not None:
if self.page_loader is not None:
self.page_loader.abandoned = True
self.page_loader = PageLoader(self.comic_archive, self.imageIndex)
self.page_loader.loadComplete.connect(self.page_load_complete)
self.page_loader.start()
def page_load_complete(self, image_data: bytes) -> None:
img = get_qimage_from_data(image_data)
self.current_pixmap = QtGui.QPixmap.fromImage(img)
self.set_display_pixmap()
self.page_loader = None
def load_default(self) -> None:
self.current_pixmap = QtGui.QPixmap(":/graphics/nocover.png")
self.set_display_pixmap()
def resizeEvent(self, resize_event: QtGui.QResizeEvent) -> None:
if self.current_pixmap is not None:
self.set_display_pixmap()
def set_display_pixmap(self) -> None:
"""The deltas let us know what the new width and height of the label will be"""
new_w = self.frame.width()
new_h = self.frame.height()
frame_w = self.frame.width()
frame_h = self.frame.height()
new_h -= 8
new_w -= 8
new_h = max(new_h, 0)
new_w = max(new_w, 0)
# scale the pixmap to fit in the frame
scaled_pixmap = self.current_pixmap.scaled(
new_w, new_h, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.TransformationMode.SmoothTransformation
)
self.scene.clear()
qpix = self.scene.addPixmap(scaled_pixmap)
assert qpix
if self.blur:
blur = QtWidgets.QGraphicsBlurEffect(parent=self)
blur.setBlurHints(QtWidgets.QGraphicsBlurEffect.BlurHint.PerformanceHint)
blur.setBlurRadius(30)
qpix.setGraphicsEffect(blur)
# move and resize the label to be centered in the frame
img_w = scaled_pixmap.width()
img_h = scaled_pixmap.height()
self.scene.setSceneRect(0, 0, img_w, img_h)
self.graphicsView.resize(img_w + 2, img_h + 2)
self.graphicsView.move(int((frame_w - img_w) / 2), int((frame_h - img_h) / 2))
def show_popup(self) -> None:
ImagePopup(self, self.current_pixmap)
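# Editor's illustrative sketch (not part of the original file): minimal standalone use of
# CoverImageWidget in ArchiveMode. The archive path is hypothetical, and a display plus the
# packaged .ui and icon resources are assumed.
if __name__ == "__main__":
    app = QtWidgets.QApplication([])
    window = QtWidgets.QWidget()
    cover = CoverImageWidget(window, CoverImageWidget.ArchiveMode, cache_folder=None)
    cover.set_archive(ComicArchive("example.cbz", "nocover.png"))  # hypothetical archive
    window.show()
    app.exec()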


@ -0,0 +1,98 @@
"""A PyQT4 dialog to edit credits"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import operator
import natsort
from PyQt6 import QtCore, QtWidgets, uic
from comicapi import utils
from comicapi.genericmetadata import Credit
from comictaggerlib.ui import ui_path
logger = logging.getLogger(__name__)
class CreditEditorWindow(QtWidgets.QDialog):
ModeEdit = 0
ModeNew = 1
def __init__(self, parent: QtWidgets.QWidget, mode: int, credit: Credit) -> None:
super().__init__(parent)
with (ui_path / "crediteditorwindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.mode = mode
if self.mode == self.ModeEdit:
self.setWindowTitle("Edit Credit")
else:
self.setWindowTitle("New Credit")
# Add the entries to the role combobox
self.cbRole.addItem("")
self.cbRole.addItem("Artist")
self.cbRole.addItem("Colorist")
self.cbRole.addItem("Cover Artist")
self.cbRole.addItem("Editor")
self.cbRole.addItem("Inker")
self.cbRole.addItem("Letterer")
self.cbRole.addItem("Penciller")
self.cbRole.addItem("Plotter")
self.cbRole.addItem("Scripter")
self.cbRole.addItem("Translator")
self.cbRole.addItem("Writer")
self.cbRole.addItem("Other")
self.cbLanguage.addItem("", "")
for f in natsort.humansorted(utils.languages().items(), operator.itemgetter(1)):
self.cbLanguage.addItem(f[1], f[0])
self.leName.setText(credit.person)
if credit.role is not None and credit.role != "":
i = self.cbRole.findText(credit.role)
if i == -1:
self.cbRole.setEditText(credit.role)
else:
self.cbRole.setCurrentIndex(i)
if credit.language != "":
i = (
self.cbLanguage.findData(credit.language, QtCore.Qt.ItemDataRole.UserRole)
if self.cbLanguage.findData(credit.language, QtCore.Qt.ItemDataRole.UserRole) > -1
else self.cbLanguage.findText(credit.language)
)
if i == -1:
self.cbLanguage.setEditText(credit.language)
else:
self.cbLanguage.setCurrentIndex(i)
self.cbPrimary.setChecked(credit.primary)
def get_credit(self) -> Credit:
lang = self.cbLanguage.currentData() or self.cbLanguage.currentText()
return Credit(self.leName.text(), self.cbRole.currentText(), self.cbPrimary.isChecked(), lang)
def accept(self) -> None:
if self.leName.text() == "":
QtWidgets.QMessageBox.warning(self, "Whoops", "You need to enter a name for a credit.")
else:
QtWidgets.QDialog.accept(self)
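# Editor's illustrative sketch (not part of the original file): opening the dialog to add a
# new credit. The parent widget is hypothetical and a running QApplication is assumed.
def _example_new_credit(parent: QtWidgets.QWidget) -> Credit | None:
    dialog = CreditEditorWindow(parent, CreditEditorWindow.ModeNew, Credit())
    if dialog.exec() == QtWidgets.QDialog.DialogCode.Accepted:
        return dialog.get_credit()
    return None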


@ -0,0 +1,122 @@
from __future__ import annotations
import json
import logging
import pathlib
from enum import Enum
from typing import Any
import settngs
from comictaggerlib.ctsettings.commandline import (
initial_commandline_parser,
register_commandline_settings,
validate_commandline_settings,
)
from comictaggerlib.ctsettings.file import register_file_settings, validate_file_settings
from comictaggerlib.ctsettings.plugin import group_for_plugin, register_plugin_settings, validate_plugin_settings
from comictaggerlib.ctsettings.settngs_namespace import SettngsNS as ct_ns
from comictaggerlib.ctsettings.types import ComicTaggerPaths
from comictalker import ComicTalker
logger = logging.getLogger(__name__)
talkers: dict[str, ComicTalker] = {}
__all__ = [
"initial_commandline_parser",
"register_commandline_settings",
"register_file_settings",
"register_plugin_settings",
"validate_commandline_settings",
"validate_file_settings",
"validate_plugin_settings",
"ComicTaggerPaths",
"ct_ns",
"group_for_plugin",
]
class SettingsEncoder(json.JSONEncoder):
def default(self, obj: Any) -> Any:
if isinstance(obj, pathlib.Path):
return str(obj)
# Let the base class default method raise the TypeError
return json.JSONEncoder.default(self, obj)
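# Editor's illustrative sketch (not part of the original file): SettingsEncoder lets the
# settings dictionary hold pathlib.Path values and still serialize to JSON cleanly.
def _example_settings_encoder() -> str:
    return json.dumps({"cache_dir": pathlib.Path("/tmp/comictagger")}, cls=SettingsEncoder)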
def validate_types(config: settngs.Config[settngs.Values]) -> settngs.Config[settngs.Values]:
# Go through each setting
for group in config.definitions.values():
for setting in group.v.values():
# Get the value and if it is the default
value, default = settngs.get_option(config.values, setting)
if not default and setting.type is not None:
# If it is not the default and the type attribute is not None
# use it to convert the loaded string into the expected value
if (
isinstance(value, str)
or isinstance(default, Enum)
or (isinstance(setting.type, type) and issubclass(setting.type, Enum))
):
if isinstance(setting.type, type) and issubclass(setting.type, Enum) and isinstance(value, list):
config.values[setting.group][setting.dest] = [setting.type(x) for x in value]
else:
config.values[setting.group][setting.dest] = setting.type(value)
return config
def parse_config(
manager: settngs.Manager,
config_path: pathlib.Path,
args: list[str] | None = None,
) -> tuple[settngs.Config[settngs.Values], bool]:
"""
Function to parse options from a json file and pass the resulting Config object to settngs.parse_cmdline.
Args:
manager: settngs Manager object
config_path: A `pathlib.Path` object
args: Passed to argparse.ArgumentParser.parse_args
"""
file_options, success = settngs.parse_file(manager.definitions, config_path)
file_options = validate_types(file_options)
cmdline_options = settngs.parse_cmdline(
manager.definitions,
manager.description,
manager.epilog,
args,
file_options,
)
final_options = settngs.normalize_config(cmdline_options, file=True, cmdline=True)
return final_options, success
def save_file(
config: settngs.Config[settngs.T],
filename: pathlib.Path,
) -> bool:
"""
Helper function to save options from a json dictionary to a file
Args:
config: The options to save to a json dictionary
filename: A pathlib.Path object to save the json dictionary to
"""
file_options = settngs.clean_config(config, file=True)
if "Quick Tag" in file_options and "url" in file_options["Quick Tag"]:
file_options["Quick Tag"]["url"] = str(file_options["Quick Tag"]["url"])
try:
if not filename.exists():
filename.parent.mkdir(exist_ok=True, parents=True)
filename.touch()
json_str = json.dumps(file_options, cls=SettingsEncoder, indent=2)
filename.write_text(json_str + "\n", encoding="utf-8")
except Exception:
logger.exception("Failed to save config file: %s", filename)
return False
return True
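# Editor's illustrative sketch (not part of the original file): a round trip through
# parse_config and save_file using a single hypothetical setting and config path.
def _example_settings_round_trip(config_path: pathlib.Path) -> bool:
    manager = settngs.Manager(description="example", epilog="")
    manager.add_setting("--example-answer", default=42, type=int)
    config, _loaded_ok = parse_config(manager, config_path, args=["--example-answer", "7"])
    return save_file(config, config_path)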


@ -0,0 +1,382 @@
"""CLI settings for ComicTagger"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import argparse
import hashlib
import logging
import os
import platform
import shlex
import subprocess
import settngs
from comicapi import comicarchive, utils
from comicapi.comicarchive import tags
from comictaggerlib import ctversion, quick_tag
from comictaggerlib.ctsettings.settngs_namespace import SettngsNS as ct_ns
from comictaggerlib.ctsettings.types import ComicTaggerPaths, tag
from comictaggerlib.resulttypes import Action
logger = logging.getLogger(__name__)
def initial_commandline_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(add_help=False)
# Ensure this stays up to date with register_runtime
parser.add_argument(
"--config",
help="Config directory for ComicTagger to use.\ndefault: %(default)s\n\n",
type=ComicTaggerPaths,
default=ComicTaggerPaths(),
)
parser.add_argument(
"-v",
"--verbose",
action="count",
default=0,
help="Be noisy when doing what it does. Use a second time to enable debug logs.\nShort option cannot be combined with other options.",
)
parser.add_argument(
"--enable-quick-tag",
action=argparse.BooleanOptionalAction,
default=False,
help='Enable the experimental "quick tagger"',
)
return parser
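# Editor's illustrative sketch (not part of the original file): the bootstrap parser only
# knows the settings registered above, so it is run with parse_known_args and everything
# else is left for the full settngs parser. The argv values are hypothetical.
def _example_bootstrap_parse() -> tuple[argparse.Namespace, list[str]]:
    return initial_commandline_parser().parse_known_args(["-v", "--enable-quick-tag", "file.cbz"])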
def register_runtime(parser: settngs.Manager) -> None:
parser.add_setting(
"--config",
help="Config directory for ComicTagger to use.\ndefault: %(default)s\n\n",
type=ComicTaggerPaths,
default=ComicTaggerPaths(),
file=False,
)
parser.add_setting(
"-v",
"--verbose",
action="count",
default=0,
help="Be noisy when doing what it does. Use a second time to enable debug logs.\nShort option cannot be combined with other options.",
file=False,
)
parser.add_setting(
"--enable-quick-tag",
action=argparse.BooleanOptionalAction,
default=False,
help='Enable the experimental "quick tagger"',
file=False,
)
parser.add_setting(
"--enable-embedding-hashes",
action=argparse.BooleanOptionalAction,
default=False,
help="Enable embedding hashes in metadata (currently only CR/CIX has support)",
file=False,
)
parser.add_setting(
"--preferred-hash",
default="shake_256",
choices=hashlib.algorithms_available,
help="The type of embedded hash to save when --enable-embedding-hashes is set\n\n",
file=False,
)
parser.add_setting("-q", "--quiet", action="store_true", help="Don't say much (for print mode).", file=False)
parser.add_setting(
"-j",
"--json",
action="store_true",
help="Output json on stdout. Ignored in interactive mode.\n\n",
file=False,
)
parser.add_setting(
"--raw",
action="store_true",
help="""With -p, will print out the raw tag block(s) from the file.""",
file=False,
)
parser.add_setting(
"-i",
"--interactive",
action="store_true",
help="""Interactively query the user when there are\nmultiple matches for an online search. Disabled json output\n\n""",
file=False,
)
parser.add_setting(
"--abort",
dest="abort_on_low_confidence",
action=argparse.BooleanOptionalAction,
default=True,
help="""Abort save operation when online match is of low confidence.\ndefault: %(default)s""",
file=False,
)
parser.add_setting(
"-n",
"--dryrun",
action="store_true",
help="Don't actually modify file (only relevant for -d, -s, or -r).\n\n",
file=False,
)
parser.add_setting(
"--summary",
default=True,
action=argparse.BooleanOptionalAction,
help="Show the summary after a save operation.\ndefault: %(default)s",
file=False,
)
parser.add_setting(
"-R",
"--recursive",
action="store_true",
help="Recursively include files in sub-folders.",
file=False,
)
parser.add_setting("-g", "--glob", action="store_true", help="Windows only. Enable globbing", file=False)
parser.add_setting("--darkmode", action="store_true", help="Windows only. Force a dark pallet", file=False)
parser.add_setting("--no-gui", action="store_true", help="Do not open the GUI, force the commandline", file=False)
parser.add_setting(
"--abort-on-conflict",
action="store_true",
help="""Don't export to zip if intended new filename exists\n(otherwise, creates a new unique filename).\n\n""",
file=False,
)
parser.add_setting(
"--delete-original",
action="store_true",
help="""Delete original archive after successful export to Zip.\n(only relevant for -e)\n\n""",
file=False,
)
parser.add_setting(
"-t",
"--tags-read",
metavar=f"{{{','.join(tags).upper()}}}",
default=[],
type=tag,
help="""Specify the tags to read.\nUse commas for multiple tags.\nSee --list-plugins for the available tags.\nThe tags used will be 'overlaid' in order:\ne.g. '-t cbl,cr' with no CBL tags, CR will be used if they exist and CR will overwrite any shared CBL tags.\n\n""",
file=False,
)
parser.add_setting(
"--tags-write",
metavar=f"{{{','.join(tags).upper()}}}",
default=[],
type=tag,
help="""Specify the tags to write.\nUse commas for multiple tags.\nRead tags will be used if unspecified\nSee --list-plugins for the available tags.\n\n""",
file=False,
)
parser.add_setting(
"--skip-existing-tags",
action=argparse.BooleanOptionalAction,
default=False,
help="""Skip archives that already have tags specified with -t,\notherwise merges new tags with existing tags (relevant for -s or -c).\ndefault: %(default)s""",
file=False,
)
parser.add_setting("files", nargs="*", default=[], file=False)
def register_commands(parser: settngs.Manager) -> None:
parser.add_setting("--version", action="store_true", help="Display version.", file=False)
parser.add_setting(
"-p",
"--print",
dest="command",
action="store_const",
const=Action.print,
default=Action.gui,
help="""Print out tag info from file. Specify via -t to only print specific tags.\n\n""",
file=False,
)
parser.add_setting(
"-d",
"--delete",
dest="command",
action="store_const",
const=Action.delete,
help="Deletes the tags specified via -t.",
file=False,
)
parser.add_setting(
"-c",
"--copy",
type=tag,
default=[],
metavar=f"{{{','.join(tags).upper()}}}",
help="Copy the specified source tags to\ndestination tags specified via --tags-write\n(potentially lossy operation).\n\n",
file=False,
)
parser.add_setting(
"-s",
"--save",
dest="command",
action="store_const",
const=Action.save,
help="Save out tags as specified tags (via --tags-write).\nMust specify also at least -o, -f, or -m.\n\n",
file=False,
)
parser.add_setting(
"-r",
"--rename",
dest="command",
action="store_const",
const=Action.rename,
help="Rename the file based on specified tags.",
file=False,
)
parser.add_setting(
"-e",
"--export-to-zip",
dest="command",
action="store_const",
const=Action.export,
help="Export archive to Zip format.",
file=False,
)
parser.add_setting(
"--only-save-config",
dest="command",
action="store_const",
const=Action.save_config,
help="Only save the configuration (eg, Comic Vine API key) and quit.",
file=False,
)
parser.add_setting(
"--list-plugins",
dest="command",
action="store_const",
const=Action.list_plugins,
default=Action.gui,
help="List the available plugins.\n\n",
file=False,
)
def register_commandline_settings(parser: settngs.Manager, enable_quick_tag: bool) -> None:
parser.add_group("Commands", register_commands, True)
parser.add_persistent_group("Runtime Options", register_runtime)
if enable_quick_tag:
parser.add_group("Quick Tag", quick_tag.settings)
def validate_commandline_settings(config: settngs.Config[ct_ns], parser: settngs.Manager) -> settngs.Config[ct_ns]:
if config[0].Commands__version:
parser.exit(
status=1,
message=f"ComicTagger {ctversion.version}: Copyright (c) 2012-2022 ComicTagger Team\n"
+ "Distributed under Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0)\n",
)
enabled_tags = {tag for tag in comicarchive.tags if comicarchive.tags[tag].enabled}
if (
(not config[0].Metadata_Options__cr)
and "cr" in comicarchive.tags
and comicarchive.tags["cr"].enabled
and len(enabled_tags) > 1
):
comicarchive.tags["cr"].enabled = False
config[0].Runtime_Options__no_gui = any(
(config[0].Commands__command != Action.gui, config[0].Runtime_Options__no_gui, config[0].Commands__copy)
)
if platform.system() == "Windows" and config[0].Runtime_Options__glob:
# no globbing on windows shell, so do it for them
import glob
globs = config[0].Runtime_Options__files
config[0].Runtime_Options__files = []
for item in globs:
config[0].Runtime_Options__files.extend(glob.glob(item))
if config[0].Runtime_Options__json and config[0].Runtime_Options__interactive:
config[0].Runtime_Options__json = False
if config[0].Runtime_Options__tags_read and not config[0].Runtime_Options__tags_write:
config[0].Runtime_Options__tags_write = config[0].Runtime_Options__tags_read
disabled_tags = {tag for tag in comicarchive.tags if not comicarchive.tags[tag].enabled}
to_be_removed = (
set(config[0].Runtime_Options__tags_read)
.union(config[0].Runtime_Options__tags_write)
.intersection(disabled_tags)
)
if to_be_removed:
logger.debug("Removing disabled tags: %s", to_be_removed)
config[0].Runtime_Options__tags_read = [
tag for tag in config[0].Runtime_Options__tags_read if tag not in to_be_removed
]
config[0].Runtime_Options__tags_write = [
tag for tag in config[0].Runtime_Options__tags_write if tag not in to_be_removed
]
if (
config[0].Runtime_Options__no_gui
and not [tag.id for tag in tags.values() if tag.enabled]
and config[0].Commands__command != Action.list_plugins
):
parser.exit(status=1, message="There are no tags enabled see --list-plugins\n")
if config[0].Runtime_Options__no_gui and not config[0].Runtime_Options__files:
if config[0].Commands__command == Action.print and not config[0].Auto_Tag__metadata.is_empty:
... # allow printing the metadata provided on the commandline
elif config[0].Commands__command not in (Action.save_config, Action.list_plugins):
parser.exit(message="Command requires at least one filename!\n", status=1)
if config[0].Commands__command == Action.delete and not config[0].Runtime_Options__tags_write:
parser.exit(message="Please specify the tags to delete with --tags-write\n", status=1)
if config[0].Commands__command == Action.save and not config[0].Runtime_Options__tags_write:
parser.exit(message="Please specify the tags to save with --tags-write\n", status=1)
if config[0].Commands__copy:
config[0].Commands__command = Action.copy
if not config[0].Runtime_Options__tags_write:
parser.exit(message="Please specify the tags to copy to with --tags-write\n", status=1)
if config[0].Runtime_Options__recursive:
config[0].Runtime_Options__files = utils.os_sorted(
set(utils.get_recursive_filelist(config[0].Runtime_Options__files))
)
if not config[0].Runtime_Options__enable_embedding_hashes:
config[0].Runtime_Options__preferred_hash = ""
# take a crack at finding rar exe if it's not in the path
if not utils.which("rar"):
if platform.system() == "Windows":
letters = ["C"]
letters.extend({f"{d}" for d in "ABDEFGHIJKLMNOPQRSTUVWXYZ" if os.path.exists(f"{d}:\\")})
for letter in letters:
# look in some likely places for Windows machines
utils.add_to_path(rf"{letter}:\Program Files\WinRAR")
utils.add_to_path(rf"{letter}:\Program Files (x86)\WinRAR")
else:
if platform.system() == "Darwin":
result = subprocess.run(("/usr/libexec/path_helper", "-s"), capture_output=True)
for path in reversed(
shlex.split(result.stdout.decode("utf-8", errors="ignore"))[0]
.partition("=")[2]
.rstrip(";")
.split(os.pathsep)
):
utils.add_to_path(path)
utils.add_to_path("/opt/homebrew/bin")
return config
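# Editor's illustrative sketch (not part of the original file): the path_helper parsing
# above, applied to a typical line of `/usr/libexec/path_helper -s` output.
def _example_parse_path_helper(line: str = 'PATH="/opt/homebrew/bin:/usr/bin:/bin"; export PATH;') -> list[str]:
    return shlex.split(line)[0].partition("=")[2].rstrip(";").split(os.pathsep)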


@ -0,0 +1,398 @@
from __future__ import annotations
import argparse
import uuid
import settngs
from comicapi import merge, utils
from comicapi.genericmetadata import GenericMetadata
from comictaggerlib.ctsettings.settngs_namespace import SettngsNS as ct_ns
from comictaggerlib.ctsettings.types import parse_metadata_from_string
from comictaggerlib.defaults import DEFAULT_REPLACEMENTS, Replacement, Replacements
def general(parser: settngs.Manager) -> None:
# General Settings
parser.add_setting("check_for_new_version", default=False, cmdline=False)
parser.add_setting("blur", default=False, cmdline=False)
parser.add_setting(
"--prompt-on-save",
default=True,
action=argparse.BooleanOptionalAction,
help="Prompts the user to confirm saving tags when using the GUI.\ndefault: %(default)s",
)
def internal(parser: settngs.Manager) -> None:
# automatic settings
parser.add_setting("install_id", default=uuid.uuid4().hex, cmdline=False)
parser.add_setting("embedded_hash_type", default="shake_256", cmdline=False)
parser.add_setting("write_tags", default=["cr"], cmdline=False)
parser.add_setting("read_tags", default=["cr"], cmdline=False)
parser.add_setting("last_opened_folder", default="", cmdline=False)
parser.add_setting("window_width", default=0, cmdline=False)
parser.add_setting("window_height", default=0, cmdline=False)
parser.add_setting("window_x", default=0, cmdline=False)
parser.add_setting("window_y", default=0, cmdline=False)
parser.add_setting("form_width", default=-1, cmdline=False)
parser.add_setting("list_width", default=-1, cmdline=False)
parser.add_setting("sort_column", default=-1, cmdline=False)
parser.add_setting("sort_direction", default=0, cmdline=False)
parser.add_setting("remove_archive_after_successful_match", default=False, cmdline=False)
def identifier(parser: settngs.Manager) -> None:
parser.add_setting(
"--series-match-identify-thresh",
default=91,
type=int,
help="The minimum Series name similarity needed to auto-identify an issue default: %(default)s",
)
parser.add_setting(
"--series-match-search-thresh",
default=90,
type=int,
help="The minimum Series name similarity to return from a search result default: %(default)s",
)
parser.add_setting(
"-b",
"--border-crop-percent",
default=10,
type=int,
help="ComicTagger will automatically add an additional cover that has any black borders cropped.\nIf the difference in height is less than %(default)s%% the cover will not be cropped.\ndefault: %(default)s\n\n",
)
parser.add_setting(
"--sort-series-by-year",
default=True,
action=argparse.BooleanOptionalAction,
help="Sorts series by year default: %(default)s",
)
parser.add_setting(
"--exact-series-matches-first",
default=True,
action=argparse.BooleanOptionalAction,
help="Puts series that are an exact match at the top of the list default: %(default)s",
)
def dialog(parser: settngs.Manager) -> None:
parser.add_setting("show_disclaimer", default=True, cmdline=False)
parser.add_setting("dont_notify_about_this_version", default="", cmdline=False)
parser.add_setting("notify_plugin_changes", default=True, cmdline=False)
def filename(parser: settngs.Manager) -> None:
parser.add_setting(
"--filename-parser",
default=utils.Parser.ORIGINAL,
metavar=f"{{{','.join(utils.Parser)}}}",
type=utils.Parser,
choices=utils.Parser,
help="Select the filename parser.\ndefault: %(default)s",
)
parser.add_setting(
"--remove-c2c",
default=False,
action=argparse.BooleanOptionalAction,
help="Removes c2c from filenames.\nRequires --complicated-parser\ndefault: %(default)s\n\n",
)
parser.add_setting(
"--remove-fcbd",
default=False,
action=argparse.BooleanOptionalAction,
help="Removes FCBD/free comic book day from filenames.\nRequires --complicated-parser\ndefault: %(default)s\n\n",
)
parser.add_setting(
"--remove-publisher",
default=False,
action=argparse.BooleanOptionalAction,
help="Attempts to remove publisher names from filenames, currently limited to Marvel and DC.\nRequires --complicated-parser\ndefault: %(default)s\n\n",
)
parser.add_setting(
"--split-words",
action="store_true",
help="""Splits words before parsing the filename.\ne.g. 'judgedredd' to 'judge dredd'\ndefault: %(default)s\n\n""",
file=False,
)
parser.add_setting(
"--protofolius-issue-number-scheme",
default=False,
action=argparse.BooleanOptionalAction,
help="Use an issue number scheme devised by protofolius for encoding format information as a letter in front of an issue number.\nImplies --allow-issue-start-with-letter. Requires --complicated-parser\ndefault: %(default)s\n\n",
)
parser.add_setting(
"--allow-issue-start-with-letter",
default=False,
action=argparse.BooleanOptionalAction,
help="Allows an issue number to start with a single letter (e.g. '#X01').\nRequires --complicated-parser\ndefault: %(default)s\n\n",
)
def talker(parser: settngs.Manager) -> None:
parser.add_setting(
"--source",
default="comicvine",
help="Use a specified source by source ID (use --list-plugins to list all sources).\ndefault: %(default)s",
)
def md_options(parser: settngs.Manager) -> None:
# CBL Transform settings
parser.add_setting("--assume-lone-credit-is-primary", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-characters-to-tags", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-teams-to-tags", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-locations-to-tags", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-storyarcs-to-tags", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-notes-to-comments", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-weblink-to-comments", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--apply-transform-on-import", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--apply-transform-on-bulk-operation", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting(
"--remove-html-tables",
default=False,
action=argparse.BooleanOptionalAction,
display_name="Remove HTML tables",
help="Removes html tables instead of converting them to text",
)
parser.add_setting("use_short_tag_names", default=False, action=argparse.BooleanOptionalAction, cmdline=False)
parser.add_setting(
"--cr",
default=True,
action=argparse.BooleanOptionalAction,
help="Enable ComicRack tags. Turn off to only use CIX tags.\ndefault: %(default)s",
)
parser.add_setting(
"--tag-merge",
metavar=f"{{{','.join(merge.Mode)}}}",
default=merge.Mode.OVERLAY,
choices=merge.Mode,
type=merge.Mode,
help="How to merge fields when reading enabled tags (CR, CBL, etc.) See -t, --tags-read default: %(default)s",
)
parser.add_setting(
"--metadata-merge",
metavar=f"{{{','.join(merge.Mode)}}}",
default=merge.Mode.OVERLAY,
choices=merge.Mode,
type=merge.Mode,
help="How to merge fields when downloading new metadata (CV, Metron, GCD, etc.) default: %(default)s",
)
parser.add_setting(
"--tag-merge-lists",
action=argparse.BooleanOptionalAction,
default=True,
help="Merge lists when reading enabled tags (genres, characters, etc.) default: %(default)s",
)
parser.add_setting(
"--metadata-merge-lists",
action=argparse.BooleanOptionalAction,
default=True,
help="Merge lists when downloading new metadata (genres, characters, etc.) default: %(default)s",
)
def rename(parser: settngs.Manager) -> None:
parser.add_setting(
"--template",
default="{series} #{issue} ({year})",
help="The teplate to use when renaming.\ndefault: %(default)s",
)
parser.add_setting(
"--issue-number-padding",
default=3,
type=int,
help="The minimum number of digits to use for the issue number when renaming.\ndefault: %(default)s",
)
parser.add_setting(
"--use-smart-string-cleanup",
default=True,
action=argparse.BooleanOptionalAction,
help="Attempts to intelligently cleanup whitespace when renaming.\ndefault: %(default)s",
)
parser.add_setting(
"--auto-extension",
default=True,
action=argparse.BooleanOptionalAction,
help="Automatically sets the extension based on the archive type e.g. cbr for rar, cbz for zip.\ndefault: %(default)s",
)
parser.add_setting("--dir", default="", help="The directory to move renamed files to.")
parser.add_setting(
"--move",
default=False,
action=argparse.BooleanOptionalAction,
help="Enables moving renamed files to a separate directory.\ndefault: %(default)s",
)
parser.add_setting(
"--only-move",
default=False,
action=argparse.BooleanOptionalAction,
help="Ignores the filename when moving renamed files to a separate directory.\ndefault: %(default)s",
)
parser.add_setting(
"--strict-filenames",
default=False,
action=argparse.BooleanOptionalAction,
help="Ensures that filenames are valid for all OSs.\ndefault: %(default)s",
)
parser.add_setting("replacements", default=DEFAULT_REPLACEMENTS, cmdline=False)
def autotag(parser: settngs.Manager) -> None:
parser.add_setting(
"-o",
"--online",
action="store_true",
help="""Search online and attempt to identify file\nusing existing tags and images in archive.\nMay be used in conjunction with -f and -m.\n\n""",
file=False,
)
parser.add_setting(
"--save-on-low-confidence",
default=False,
action=argparse.BooleanOptionalAction,
help="Automatically save tags on low-confidence matches.\ndefault: %(default)s",
cmdline=False,
)
parser.add_setting(
"--use-year-when-identifying",
default=True,
action=argparse.BooleanOptionalAction,
help="Use the year metadata attribute when auto-tagging a comic.\ndefault: %(default)s",
)
parser.add_setting(
"-1",
"--assume-issue-one",
action=argparse.BooleanOptionalAction,
help="Assume issue number is 1 if not found (relevant for -s).\ndefault: %(default)s\n\n",
default=False,
)
parser.add_setting(
"--ignore-leading-numbers-in-filename",
default=False,
action=argparse.BooleanOptionalAction,
help="When searching ignore leading numbers in the filename.\ndefault: %(default)s",
)
parser.add_setting(
"-f",
"--parse-filename",
action="store_true",
help="""Parse the filename to get some info,\nspecifically series name, issue number,\nvolume, and publication year.\n\n""",
file=False,
)
parser.add_setting(
"--prefer-filename",
action="store_true",
help="""Prefer metadata parsed from the filename. CLI only.\n\n""",
file=False,
)
parser.add_setting(
"--id",
dest="issue_id",
type=str,
help="""Use the issue ID when searching online.\nOverrides all other metadata.\n\n""",
file=False,
)
parser.add_setting(
"-m",
"--metadata",
default=GenericMetadata(),
type=parse_metadata_from_string,
help="""Explicitly define some metadata to be used in YAML syntax. Use @file.yaml to read from a file. e.g.:\n"series: Plastic Man, publisher: Quality Comics, year: "\n"series: 'Kickers, Inc.', issue: '1', year: 1986"\nIf you want to erase a tag leave the value blank.\nSome names that can be used: series, issue, issue_count, year,\npublisher, title\n\n""",
file=False,
)
parser.add_setting(
"--clear-tags",
default=False,
action=argparse.BooleanOptionalAction,
help="Clears all existing tags during import, default is to merge tags.\nMay be used in conjunction with -o, -f and -m.\ndefault: %(default)s\n\n",
)
parser.add_setting(
"--publisher-filter",
default=["Panini Comics", "Abril", "Planeta DeAgostini", "Editorial Televisa", "Dino Comics"],
action="extend",
nargs="+",
help="When enabled, filters the listed publishers from all search results.\nEnding a publisher with a '-' removes a publisher from this list\ndefault: %(default)s\n\n",
)
parser.add_setting(
"--use-publisher-filter",
default=False,
action=argparse.BooleanOptionalAction,
help="Enables the publisher filter.\ndefault: %(default)s",
)
parser.add_setting(
"-a",
"--auto-imprint",
default=False,
action=argparse.BooleanOptionalAction,
help="Enables the auto imprint functionality.\ne.g. if the publisher is set to 'vertigo' it\nwill be updated to 'DC Comics' and the imprint\nproperty will be set to 'Vertigo'.\ndefault: %(default)s\n\n",
)
def parse_filter(config: settngs.Config[ct_ns]) -> settngs.Config[ct_ns]:
new_filter = []
remove = []
for x in config[0].Auto_Tag__publisher_filter:
x = x.strip()
if x: # ignore empty arguments
if x[-1] == "-": # this publisher needs to be removed. We remove after all publishers have been enumerated
remove.append(x.strip("-"))
else:
if x not in new_filter:
new_filter.append(x)
for x in remove: # remove publishers
if x in new_filter:
new_filter.remove(x)
config[0].Auto_Tag__publisher_filter = new_filter
return config
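# Editor's illustrative sketch (not part of the original file): the trailing '-' convention
# handled by parse_filter, shown on a hypothetical filter list.
def _example_publisher_filter(entries: tuple[str, ...] = ("Panini Comics", "Acme", "Acme-")) -> list[str]:
    keep: list[str] = []
    remove: list[str] = []
    for x in (e.strip() for e in entries):
        if not x:
            continue
        if x.endswith("-"):
            remove.append(x.strip("-"))  # a trailing '-' removes that publisher from the filter
        elif x not in keep:
            keep.append(x)
    return [x for x in keep if x not in remove]  # -> ["Panini Comics"]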
def migrate_settings(config: settngs.Config[ct_ns]) -> settngs.Config[ct_ns]:
original_types = ("cbi", "cr", "comet")
write_Tags = config[0].internal__write_tags
if not isinstance(write_Tags, list):
if isinstance(write_Tags, int) and write_Tags in (0, 1, 2):
config[0].internal__write_tags = [original_types[write_Tags]]
elif isinstance(write_Tags, str):
config[0].internal__write_tags = [write_Tags]
else:
config[0].internal__write_tags = ["cr"]
read_tags = config[0].internal__read_tags
if not isinstance(read_tags, list):
if isinstance(read_tags, int) and read_tags in (0, 1, 2):
config[0].internal__read_tags = [original_types[read_tags]]
elif isinstance(read_tags, str):
config[0].internal__read_tags = [read_tags]
else:
config[0].internal__read_tags = ["cr"]
return config
def validate_file_settings(config: settngs.Config[ct_ns]) -> settngs.Config[ct_ns]:
config = parse_filter(config)
config = migrate_settings(config)
if config[0].Filename_Parsing__protofolius_issue_number_scheme:
config[0].Filename_Parsing__allow_issue_start_with_letter = True
config[0].File_Rename__replacements = Replacements(
[Replacement(x[0], x[1], x[2]) for x in config[0].File_Rename__replacements[0]],
[Replacement(x[0], x[1], x[2]) for x in config[0].File_Rename__replacements[1]],
)
return config
def register_file_settings(parser: settngs.Manager) -> None:
parser.add_group("internal", internal, False)
parser.add_group("Issue Identifier", identifier, False)
parser.add_group("Filename Parsing", filename, False)
parser.add_group("Sources", talker, False)
parser.add_group("Metadata Options", md_options, False)
parser.add_group("File Rename", rename, False)
parser.add_group("Auto-Tag", autotag, False)
parser.add_group("General", general, False)
parser.add_group("Dialog Flags", dialog, False)

View File

@@ -0,0 +1,107 @@
from __future__ import annotations
import logging
import os
from typing import Any, cast
import settngs
import comicapi.comicarchive
import comicapi.utils
import comictaggerlib.ctsettings
from comicapi.comicarchive import Archiver
from comictaggerlib.ctsettings.settngs_namespace import SettngsNS as ct_ns
from comictalker.comictalker import ComicTalker
logger = logging.getLogger("comictagger")
def group_for_plugin(plugin: Archiver | ComicTalker | type[Archiver]) -> str:
if isinstance(plugin, ComicTalker):
return f"Source {plugin.id}"
if isinstance(plugin, Archiver) or plugin == Archiver:
return "Archive"
raise NotImplementedError(f"Invalid plugin received: {plugin=}")
def archiver(manager: settngs.Manager) -> None:
for archiver in comicapi.comicarchive.archivers:
if archiver.exe:
# add_setting will overwrite anything with the same name.
# So we only end up with one option even if multiple archivers use the same exe.
manager.add_setting(
f"--{settngs.sanitize_name(archiver.exe)}",
default=archiver.exe,
help="Path to the %(default)s executable",
)
def register_talker_settings(manager: settngs.Manager, talkers: dict[str, ComicTalker]) -> None:
for talker in talkers.values():
def api_options(manager: settngs.Manager) -> None:
# The default needs to be unset or None.
# This allows this setting to be unset with the empty string, allowing the default to change
manager.add_setting(
f"--{talker.id}-key",
display_name="API Key",
help=f"API Key for {talker.name} (default: {talker.default_api_key})",
)
manager.add_setting(
f"--{talker.id}-url",
display_name="URL",
help=f"URL for {talker.name} (default: {talker.default_api_url})",
)
try:
manager.add_persistent_group(group_for_plugin(talker), api_options, False)
if hasattr(talker, "register_settings"):
manager.add_persistent_group(group_for_plugin(talker), talker.register_settings, False)
except Exception:
logger.exception("Failed to register settings for %s", talker.id)
def validate_archive_settings(config: settngs.Config[ct_ns]) -> settngs.Config[ct_ns]:
cfg = settngs.normalize_config(config, file=True, cmdline=True, default=False)
for archiver in comicapi.comicarchive.archivers:
group = group_for_plugin(archiver())
exe_name = settngs.sanitize_name(archiver.exe)
if not exe_name:
continue
if exe_name in cfg[0][group] and cfg[0][group][exe_name]:
path = cfg[0][group][exe_name]
name = os.path.basename(path)
# If the path is not the basename then this is a relative or absolute path.
# Ensure it is absolute
if path != name:
path = os.path.abspath(path)
archiver.exe = path
return config
def validate_talker_settings(config: settngs.Config[ct_ns], talkers: dict[str, ComicTalker]) -> settngs.Config[ct_ns]:
# Apply talker settings from config file
cfg = cast(settngs.Config[dict[str, Any]], settngs.normalize_config(config, True, True))
for talker in list(talkers.values()):
try:
cfg[0][group_for_plugin(talker)] = talker.parse_settings(cfg[0][group_for_plugin(talker)])
except Exception as e:
# Remove talker as we failed to apply the settings
del comictaggerlib.ctsettings.talkers[talker.id]
logger.exception("Failed to initialize talker settings: %s", e)
return cast(settngs.Config[ct_ns], settngs.get_namespace(cfg, file=True, cmdline=True))
def validate_plugin_settings(config: settngs.Config[ct_ns], talkers: dict[str, ComicTalker]) -> settngs.Config[ct_ns]:
config = validate_archive_settings(config)
config = validate_talker_settings(config, talkers)
return config
def register_plugin_settings(manager: settngs.Manager, talkers: dict[str, ComicTalker]) -> None:
manager.add_persistent_group("Archive", archiver, False)
register_talker_settings(manager, talkers)
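
The settings groups above come from group_for_plugin: every talker gets its own "Source <id>" group while all archivers share a single "Archive" group, and the exe path and API key/URL options are registered under those names. A self-contained sketch of the same dispatch pattern, using stand-in classes rather than the real Archiver and ComicTalker types:

class StubArchiver:  # stand-in for comicapi.comicarchive.Archiver
    exe = "rar"

class StubTalker:  # stand-in for comictalker.comictalker.ComicTalker
    id = "comicvine"

def group_for(plugin: object) -> str:
    # Same shape as group_for_plugin above: talkers get "Source <id>",
    # every archiver (class or instance) falls into "Archive".
    if isinstance(plugin, StubTalker):
        return f"Source {plugin.id}"
    if isinstance(plugin, StubArchiver) or plugin is StubArchiver:
        return "Archive"
    raise NotImplementedError(f"Invalid plugin received: {plugin=}")

print(group_for(StubTalker()))  # Source comicvine
print(group_for(StubArchiver))  # Archive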

View File

@@ -0,0 +1,186 @@
"""Functions related to finding and loading plugins."""
# Lifted from flake8 https://github.com/PyCQA/flake8/blob/main/src/flake8/plugins/finder.py#L127
from __future__ import annotations
import importlib.util
import logging
import pathlib
import platform
import re
import sys
from collections.abc import Generator, Iterable, Sequence
from typing import Any, NamedTuple, TypeVar
if sys.version_info < (3, 10):
import importlib_metadata
else:
import importlib.metadata as importlib_metadata
logger = logging.getLogger(__name__)
NORMALIZE_PACKAGE_NAME_RE = re.compile(r"[-_.]+")
PLUGIN_GROUPS = frozenset(("comictagger.talker", "comicapi.archiver", "comicapi.tags"))
icu_available = importlib.util.find_spec("icu") is not None
def _custom_key(tup: Any) -> Any:
import natsort
lst = []
for x in natsort.os_sort_keygen()(tup):
ret = x
if isinstance(x, Sequence) and len(x) > 1 and isinstance(x[1], int) and isinstance(x[0], str) and x[0] == "":
ret = ("a", *x[1:])
lst.append(ret)
return tuple(lst)
T = TypeVar("T")
def os_sorted(lst: Iterable[T]) -> Iterable[T]:
import natsort
key = _custom_key
if icu_available or platform.system() == "Windows":
key = natsort.os_sort_keygen()
return sorted(lst, key=key)
class FailedToLoadPlugin(Exception):
"""Exception raised when a plugin fails to load."""
FORMAT = 'ComicTagger failed to load local plugin "{name}" due to {exc}.'
def __init__(self, plugin_name: str, exception: Exception) -> None:
"""Initialize our FailedToLoadPlugin exception."""
self.plugin_name = plugin_name
self.original_exception = exception
super().__init__(plugin_name, exception)
def __str__(self) -> str:
"""Format our exception message."""
return self.FORMAT.format(
name=self.plugin_name,
exc=self.original_exception,
)
def normalize_pypi_name(s: str) -> str:
"""Normalize a distribution name according to PEP 503."""
return NORMALIZE_PACKAGE_NAME_RE.sub("-", s).lower()
class Plugin(NamedTuple):
"""A plugin before loading."""
package: str
version: str
entry_point: importlib_metadata.EntryPoint
path: pathlib.Path
def load(self) -> LoadedPlugin:
return LoadedPlugin(self, self.entry_point.load())
class LoadedPlugin(NamedTuple):
"""Represents a plugin after being imported."""
plugin: Plugin
obj: Any
@property
def entry_name(self) -> str:
"""Return the name given in the packaging metadata."""
return self.plugin.entry_point.name
@property
def display_name(self) -> str:
"""Return the name for use in user-facing / error messages."""
return f"{self.plugin.package}[{self.entry_name}]"
class Plugins(NamedTuple):
"""Classified plugins."""
archivers: list[LoadedPlugin]
tags: list[LoadedPlugin]
talkers: list[LoadedPlugin]
def all_plugins(self) -> Generator[LoadedPlugin]:
"""Return an iterator over all :class:`LoadedPlugin`s."""
yield from self.archivers
yield from self.tags
yield from self.talkers
def versions_str(self) -> str:
"""Return a user-displayed list of plugin versions."""
return ", ".join(sorted({f"{plugin.plugin.package}: {plugin.plugin.version}" for plugin in self.all_plugins()}))
def _find_local_plugins(plugin_path: pathlib.Path) -> Generator[Plugin]:
logger.debug("Checking for distributions in %s", plugin_path)
for dist in importlib_metadata.distributions(path=[str(plugin_path)]):
logger.debug("found distribution %s", dist.name)
eps = dist.entry_points
for group in PLUGIN_GROUPS:
for ep in eps.select(group=group):
logger.debug("found EntryPoint group %s %s=%s", group, ep.name, ep.value)
yield Plugin(plugin_path.name, dist.version, ep, plugin_path)
def find_plugins(plugin_folder: pathlib.Path) -> Plugins:
"""Discovers all plugins (but does not load them)."""
ret: list[LoadedPlugin] = []
if not plugin_folder.is_dir():
return _classify_plugins(ret)
zips = [x for x in plugin_folder.iterdir() if x.is_file() and x.suffix in (".zip", ".whl")]
for plugin_path in os_sorted(zips):
logger.debug("looking for plugins in %s", plugin_path)
sys_path = sys.path.copy()
try:
sys.path.append(str(plugin_path))
for plugin in _find_local_plugins(plugin_path):
logger.debug("Attempting to load %s from %s", plugin.entry_point.name, plugin.path)
ret.append(plugin.load())
except Exception as err:
logger.exception(FailedToLoadPlugin(plugin_path.name, err))
finally:
sys.path = sys_path
for mod in list(sys.modules.values()):
if (
mod is not None
and hasattr(mod, "__spec__")
and mod.__spec__
and str(plugin_path) in (mod.__spec__.origin or "")
):
sys.modules.pop(mod.__name__)
return _classify_plugins(ret)
def _classify_plugins(plugins: list[LoadedPlugin]) -> Plugins:
archivers = []
tags = []
talkers = []
for p in plugins:
if p.plugin.entry_point.group == "comictagger.talker":
talkers.append(p)
elif p.plugin.entry_point.group == "comicapi.tags":
tags.append(p)
elif p.plugin.entry_point.group == "comicapi.archiver":
archivers.append(p)
else:
logger.warning(NotImplementedError(f"what plugin type? {p}"))
return Plugins(
tags=tags,
archivers=archivers,
talkers=talkers,
)
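
The _custom_key fallback above is used only when PyICU is unavailable on non-Windows platforms; it nudges natsort's os-style key so names with leading numbers sort consistently. Either way, the goal is natural ("os-style") ordering of the plugin archives. A quick illustration of what that buys over a plain sorted() call (assumes natsort is installed, as the module already requires it):

import natsort

names = ["plugin10.zip", "plugin2.zip", "Plugin1.whl"]
print(sorted(names))             # ['Plugin1.whl', 'plugin10.zip', 'plugin2.zip']
print(natsort.os_sorted(names))  # numeric parts compare as numbers, e.g. ['Plugin1.whl', 'plugin2.zip', 'plugin10.zip']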

View File

@@ -0,0 +1,304 @@
from __future__ import annotations
import typing
import settngs
import urllib3.util.url
import comicapi.genericmetadata
import comicapi.merge
import comicapi.utils
import comictaggerlib.ctsettings.types
import comictaggerlib.defaults
import comictaggerlib.quick_tag
import comictaggerlib.resulttypes
class SettngsNS(settngs.TypedNS):
Commands__version: bool
Commands__command: comictaggerlib.resulttypes.Action
Commands__copy: list[str]
Runtime_Options__config: comictaggerlib.ctsettings.types.ComicTaggerPaths
Runtime_Options__verbose: int
Runtime_Options__enable_quick_tag: bool
Runtime_Options__enable_embedding_hashes: bool
Runtime_Options__preferred_hash: str
Runtime_Options__quiet: bool
Runtime_Options__json: bool
Runtime_Options__raw: bool
Runtime_Options__interactive: bool
Runtime_Options__abort_on_low_confidence: bool
Runtime_Options__dryrun: bool
Runtime_Options__summary: bool
Runtime_Options__recursive: bool
Runtime_Options__glob: bool
Runtime_Options__darkmode: bool
Runtime_Options__no_gui: bool
Runtime_Options__abort_on_conflict: bool
Runtime_Options__delete_original: bool
Runtime_Options__tags_read: list[str]
Runtime_Options__tags_write: list[str]
Runtime_Options__skip_existing_tags: bool
Runtime_Options__files: list[str]
Quick_Tag__url: urllib3.util.url.Url
Quick_Tag__max: int
Quick_Tag__aggressive_filtering: bool
Quick_Tag__hash: list[comictaggerlib.quick_tag.HashType]
Quick_Tag__exact_only: bool
internal__install_id: str
internal__embedded_hash_type: str
internal__write_tags: list[str]
internal__read_tags: list[str]
internal__last_opened_folder: str
internal__window_width: int
internal__window_height: int
internal__window_x: int
internal__window_y: int
internal__form_width: int
internal__list_width: int
internal__sort_column: int
internal__sort_direction: int
internal__remove_archive_after_successful_match: bool
Issue_Identifier__series_match_identify_thresh: int
Issue_Identifier__series_match_search_thresh: int
Issue_Identifier__border_crop_percent: int
Issue_Identifier__sort_series_by_year: bool
Issue_Identifier__exact_series_matches_first: bool
Filename_Parsing__filename_parser: comicapi.utils.Parser
Filename_Parsing__remove_c2c: bool
Filename_Parsing__remove_fcbd: bool
Filename_Parsing__remove_publisher: bool
Filename_Parsing__split_words: bool
Filename_Parsing__protofolius_issue_number_scheme: bool
Filename_Parsing__allow_issue_start_with_letter: bool
Sources__source: str
Metadata_Options__assume_lone_credit_is_primary: bool
Metadata_Options__copy_characters_to_tags: bool
Metadata_Options__copy_teams_to_tags: bool
Metadata_Options__copy_locations_to_tags: bool
Metadata_Options__copy_storyarcs_to_tags: bool
Metadata_Options__copy_notes_to_comments: bool
Metadata_Options__copy_weblink_to_comments: bool
Metadata_Options__apply_transform_on_import: bool
Metadata_Options__apply_transform_on_bulk_operation: bool
Metadata_Options__remove_html_tables: bool
Metadata_Options__use_short_tag_names: bool
Metadata_Options__cr: bool
Metadata_Options__tag_merge: comicapi.merge.Mode
Metadata_Options__metadata_merge: comicapi.merge.Mode
Metadata_Options__tag_merge_lists: bool
Metadata_Options__metadata_merge_lists: bool
File_Rename__template: str
File_Rename__issue_number_padding: int
File_Rename__use_smart_string_cleanup: bool
File_Rename__auto_extension: bool
File_Rename__dir: str
File_Rename__move: bool
File_Rename__only_move: bool
File_Rename__strict_filenames: bool
File_Rename__replacements: comictaggerlib.defaults.Replacements
Auto_Tag__online: bool
Auto_Tag__save_on_low_confidence: bool
Auto_Tag__use_year_when_identifying: bool
Auto_Tag__assume_issue_one: bool
Auto_Tag__ignore_leading_numbers_in_filename: bool
Auto_Tag__parse_filename: bool
Auto_Tag__prefer_filename: bool
Auto_Tag__issue_id: str | None
Auto_Tag__metadata: comicapi.genericmetadata.GenericMetadata
Auto_Tag__clear_tags: bool
Auto_Tag__publisher_filter: list[str]
Auto_Tag__use_publisher_filter: bool
Auto_Tag__auto_imprint: bool
General__check_for_new_version: bool
General__blur: bool
General__prompt_on_save: bool
Dialog_Flags__show_disclaimer: bool
Dialog_Flags__dont_notify_about_this_version: str
Dialog_Flags__notify_plugin_changes: bool
Archive__rar: str
Source_comicvine__comicvine_key: str | None
Source_comicvine__comicvine_url: str | None
Source_comicvine__cv_use_series_start_as_volume: bool
Source_comicvine__comicvine_custom_parameters: str | None
class Commands(typing.TypedDict):
version: bool
command: comictaggerlib.resulttypes.Action
copy: list[str]
class Runtime_Options(typing.TypedDict):
config: comictaggerlib.ctsettings.types.ComicTaggerPaths
verbose: int
enable_quick_tag: bool
enable_embedding_hashes: bool
preferred_hash: str
quiet: bool
json: bool
raw: bool
interactive: bool
abort_on_low_confidence: bool
dryrun: bool
summary: bool
recursive: bool
glob: bool
darkmode: bool
no_gui: bool
abort_on_conflict: bool
delete_original: bool
tags_read: list[str]
tags_write: list[str]
skip_existing_tags: bool
files: list[str]
class Quick_Tag(typing.TypedDict):
url: urllib3.util.url.Url
max: int
aggressive_filtering: bool
hash: list[comictaggerlib.quick_tag.HashType]
exact_only: bool
class internal(typing.TypedDict):
install_id: str
embedded_hash_type: str
write_tags: list[str]
read_tags: list[str]
last_opened_folder: str
window_width: int
window_height: int
window_x: int
window_y: int
form_width: int
list_width: int
sort_column: int
sort_direction: int
remove_archive_after_successful_match: bool
class Issue_Identifier(typing.TypedDict):
series_match_identify_thresh: int
series_match_search_thresh: int
border_crop_percent: int
sort_series_by_year: bool
exact_series_matches_first: bool
class Filename_Parsing(typing.TypedDict):
filename_parser: comicapi.utils.Parser
remove_c2c: bool
remove_fcbd: bool
remove_publisher: bool
split_words: bool
protofolius_issue_number_scheme: bool
allow_issue_start_with_letter: bool
class Sources(typing.TypedDict):
source: str
class Metadata_Options(typing.TypedDict):
assume_lone_credit_is_primary: bool
copy_characters_to_tags: bool
copy_teams_to_tags: bool
copy_locations_to_tags: bool
copy_storyarcs_to_tags: bool
copy_notes_to_comments: bool
copy_weblink_to_comments: bool
apply_transform_on_import: bool
apply_transform_on_bulk_operation: bool
remove_html_tables: bool
use_short_tag_names: bool
cr: bool
tag_merge: comicapi.merge.Mode
metadata_merge: comicapi.merge.Mode
tag_merge_lists: bool
metadata_merge_lists: bool
class File_Rename(typing.TypedDict):
template: str
issue_number_padding: int
use_smart_string_cleanup: bool
auto_extension: bool
dir: str
move: bool
only_move: bool
strict_filenames: bool
replacements: comictaggerlib.defaults.Replacements
class Auto_Tag(typing.TypedDict):
online: bool
save_on_low_confidence: bool
use_year_when_identifying: bool
assume_issue_one: bool
ignore_leading_numbers_in_filename: bool
parse_filename: bool
prefer_filename: bool
issue_id: str | None
metadata: comicapi.genericmetadata.GenericMetadata
clear_tags: bool
publisher_filter: list[str]
use_publisher_filter: bool
auto_imprint: bool
class General(typing.TypedDict):
check_for_new_version: bool
blur: bool
prompt_on_save: bool
class Dialog_Flags(typing.TypedDict):
show_disclaimer: bool
dont_notify_about_this_version: str
notify_plugin_changes: bool
class Archive(typing.TypedDict):
rar: str
class Source_comicvine(typing.TypedDict):
comicvine_key: str | None
comicvine_url: str | None
cv_use_series_start_as_volume: bool
comicvine_custom_parameters: str | None
SettngsDict = typing.TypedDict(
"SettngsDict",
{
"Commands": Commands,
"Runtime Options": Runtime_Options,
"Quick Tag": Quick_Tag,
"internal": internal,
"Issue Identifier": Issue_Identifier,
"Filename Parsing": Filename_Parsing,
"Sources": Sources,
"Metadata Options": Metadata_Options,
"File Rename": File_Rename,
"Auto-Tag": Auto_Tag,
"General": General,
"Dialog Flags": Dialog_Flags,
"Archive": Archive,
"Source comicvine": Source_comicvine,
},
)
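
The attribute names in SettngsNS follow a predictable pattern: the group name and the setting name, with spaces and hyphens replaced by underscores, joined by a double underscore. A hypothetical helper (not part of settngs or this diff) that reproduces the convention:

def settings_attribute(group: str, setting: str) -> str:
    # "<group>__<setting>", with spaces and '-' normalized to '_'.
    def clean(s: str) -> str:
        return s.replace(" ", "_").replace("-", "_")

    return f"{clean(group)}__{clean(setting)}"

print(settings_attribute("Runtime Options", "no-gui"))      # Runtime_Options__no_gui
print(settings_attribute("Auto-Tag", "assume-issue-one"))   # Auto_Tag__assume_issue_one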

View File

@@ -0,0 +1,248 @@
from __future__ import annotations
import argparse
import logging
import pathlib
import sys
import types
import typing
from collections.abc import Collection, Mapping
from typing import Any
import yaml
from appdirs import AppDirs
from comicapi import utils
from comicapi.comicarchive import tags
from comicapi.genericmetadata import REMOVE, GenericMetadata
logger = logging.getLogger(__name__)
if sys.version_info < (3, 10):
@typing.no_type_check
def get_type_hints(obj, globalns=None, localns=None, include_extras=False):
if getattr(obj, "__no_type_check__", None):
return {}
# Classes require a special treatment.
if isinstance(obj, type):
hints = {}
for base in reversed(obj.__mro__):
if globalns is None:
base_globals = getattr(sys.modules.get(base.__module__, None), "__dict__", {})
else:
base_globals = globalns
ann = base.__dict__.get("__annotations__", {})
if isinstance(ann, types.GetSetDescriptorType):
ann = {}
base_locals = dict(vars(base)) if localns is None else localns
if localns is None and globalns is None:
# This is surprising, but required. Before Python 3.10,
# get_type_hints only evaluated the globalns of
# a class. To maintain backwards compatibility, we reverse
# the globalns and localns order so that eval() looks into
# *base_globals* first rather than *base_locals*.
# This only affects ForwardRefs.
base_globals, base_locals = base_locals, base_globals
for name, value in ann.items():
if value is None:
value = type(None)
if isinstance(value, str):
if "|" in value:
value = "Union[" + value.replace(" |", ",") + "]"
value = typing.ForwardRef(value, is_argument=False, is_class=True)
value = typing._eval_type(value, base_globals, base_locals)
hints[name] = value
return hints if include_extras else {k: typing._strip_annotations(t) for k, t in hints.items()}
if globalns is None:
if isinstance(obj, types.ModuleType):
globalns = obj.__dict__
else:
nsobj = obj
# Find globalns for the unwrapped object.
while hasattr(nsobj, "__wrapped__"):
nsobj = nsobj.__wrapped__
globalns = getattr(nsobj, "__globals__", {})
if localns is None:
localns = globalns
elif localns is None:
localns = globalns
hints = getattr(obj, "__annotations__", None)
if hints is None:
# Return empty annotations for something that _could_ have them.
if isinstance(obj, typing._allowed_types):
return {}
else:
raise TypeError("{!r} is not a module, class, method, " "or function.".format(obj))
hints = dict(hints)
for name, value in hints.items():
if value is None:
value = type(None)
if isinstance(value, str):
if "|" in value:
value = "Union[" + value.replace(" |", ",") + "]"
# class-level forward refs were handled above, this must be either
# a module-level annotation or a function argument annotation
value = typing.ForwardRef(
value,
is_argument=not isinstance(obj, types.ModuleType),
is_class=False,
)
hints[name] = typing._eval_type(value, globalns, localns)
return hints if include_extras else {k: typing._strip_annotations(t) for k, t in hints.items()}
else:
from typing import get_type_hints
class ComicTaggerPaths(AppDirs):
def __init__(self, config_path: pathlib.Path | str | None = None) -> None:
super().__init__("ComicTagger", None, None, False, False)
self.path: pathlib.Path | None = None
if config_path:
self.path = pathlib.Path(config_path).absolute()
@property
def user_data_dir(self) -> pathlib.Path:
if self.path:
return self.path
return pathlib.Path(super().user_data_dir)
@property
def user_config_dir(self) -> pathlib.Path:
if self.path:
return self.path
return pathlib.Path(super().user_config_dir)
@property
def user_cache_dir(self) -> pathlib.Path:
if self.path:
return self.path / "cache"
return pathlib.Path(super().user_cache_dir)
@property
def user_state_dir(self) -> pathlib.Path:
if self.path:
return self.path
return pathlib.Path(super().user_state_dir)
@property
def user_log_dir(self) -> pathlib.Path:
if self.path:
return self.path / "log"
return pathlib.Path(super().user_log_dir)
@property
def user_plugin_dir(self) -> pathlib.Path:
if self.path:
return self.path / "plugins"
return pathlib.Path(super().user_config_dir) / "plugins"
@property
def site_data_dir(self) -> pathlib.Path:
return pathlib.Path(super().site_data_dir)
@property
def site_config_dir(self) -> pathlib.Path:
return pathlib.Path(super().site_config_dir)
def __str__(self) -> str:
return f"logs: {self.user_log_dir}, config: {self.user_config_dir}, cache: {self.user_cache_dir}"
def tag(types: str) -> list[str]:
enabled_tags = [tag for tag in tags if tags[tag].enabled]
result = []
types = types.casefold()
for typ in utils.split(types, ","):
if typ not in enabled_tags:
choices = ", ".join(enabled_tags)
raise argparse.ArgumentTypeError(f"invalid choice: {typ} (choose from {choices.upper()})")
result.append(tags[typ].id)
return result
def parse_metadata_from_string(mdstr: str) -> GenericMetadata:
def get_type(key: str, tt: Any = get_type_hints(GenericMetadata)) -> Any:
t: Any = tt.get(key, None)
if t is None:
return None
if getattr(t, "__origin__", None) is typing.Union and len(t.__args__) == 2 and t.__args__[1] is type(None):
t = t.__args__[0]
elif isinstance(t, types.GenericAlias) and issubclass(t.mro()[0], Collection):
t = t.mro()[0], t.__args__[0]
if isinstance(t, tuple) and issubclass(t[1], dict):
return (t[0], dict)
if isinstance(t, type) and issubclass(t, dict):
return dict
return t
def convert_value(t: type, value: Any) -> Any:
if isinstance(value, t):
return value
try:
if isinstance(value, (Mapping)):
value = t(**value)
elif not isinstance(value, str) and isinstance(value, (Collection)):
value = t(*value)
else:
if t is utils.Url and isinstance(value, str):
value = utils.parse_url(value)
else:
value = t(value)
except (ValueError, TypeError):
raise argparse.ArgumentTypeError(f"Invalid syntax for tag {key!r}: {value!r}")
return value
md = GenericMetadata()
try:
if not mdstr:
return md
if mdstr[0] == "@":
p = pathlib.Path(mdstr[1:])
if not p.is_file():
raise argparse.ArgumentTypeError("Invalid filepath")
mdstr = p.read_text()
if mdstr[0] != "{":
mdstr = "{" + mdstr + "}"
md_dict = yaml.safe_load(mdstr)
empty = True
# Map the dict to the metadata object
for key, value in md_dict.items():
if hasattr(md, key):
t = get_type(key)
if value is None:
value = REMOVE
elif isinstance(t, tuple):
if value == "":
value = t[0]()
else:
if isinstance(value, str):
value = [value]
if not isinstance(value, Collection):
raise argparse.ArgumentTypeError(f"Invalid syntax for tag '{key}'")
values = list(value)
for idx, v in enumerate(values):
if not isinstance(v, t[1]):
values[idx] = convert_value(t[1], v)
value = t[0](values)
else:
value = convert_value(t, value)
empty = False
setattr(md, key, value)
else:
raise argparse.ArgumentTypeError(f"'{key}' is not a valid tag name")
md.is_empty = empty
except argparse.ArgumentTypeError as e:
raise e
except Exception as e:
logger.exception("Unable to read metadata from the commandline '%s'", mdstr)
raise Exception("Unable to read metadata from the commandline") from e
return md
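
Before any of the type coercion above happens, the -m string is simply wrapped in braces and handed to YAML as a flow mapping (unless it already starts with '{' or is an @file reference). A small sketch of just that first step, using one of the example strings from the --metadata help text:

import yaml

mdstr = "series: 'Kickers, Inc.', issue: '1', year: 1986"
print(yaml.safe_load("{" + mdstr + "}"))
# {'series': 'Kickers, Inc.', 'issue': '1', 'year': 1986}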

View File

@@ -0,0 +1,29 @@
from __future__ import annotations
from typing import NamedTuple
class Replacement(NamedTuple):
find: str
replce: str
strict_only: bool
class Replacements(NamedTuple):
literal_text: list[Replacement]
format_value: list[Replacement]
DEFAULT_REPLACEMENTS = Replacements(
literal_text=[
Replacement(": ", " - ", True),
Replacement(":", "-", True),
],
format_value=[
Replacement(": ", " - ", True),
Replacement(":", "-", True),
Replacement("/", "-", False),
Replacement("//", "--", False),
Replacement("\\", "-", True),
],
)
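
These tables are consumed by MetadataFormatter.handle_replacements later in this diff: strict-only entries apply only when the target platform has strict filename rules (Windows or "universal"); the rest always apply. A minimal sketch of that consumption, assuming comictaggerlib is importable:

from comictaggerlib.defaults import DEFAULT_REPLACEMENTS

def apply(text: str, replacements, strict: bool) -> str:
    # Same loop as handle_replacements: skip strict-only rules when the
    # target platform is not strict about filenames.
    for find, replace, strict_only in replacements:
        if strict or not strict_only:
            text = text.replace(find, replace)
    return text

print(apply("Spider-Man: Blue / 2002", DEFAULT_REPLACEMENTS.format_value, strict=True))
# Spider-Man - Blue - 2002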

View File

@@ -0,0 +1,62 @@
"""A PyQT4 dialog to confirm and set options for export to zip"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
from PyQt6 import QtCore, QtWidgets, uic
from comictaggerlib.ui import ui_path
logger = logging.getLogger(__name__)
class ExportConflictOpts:
dontCreate = 1
overwrite = 2
createUnique = 3
class ExportWindow(QtWidgets.QDialog):
def __init__(self, parent: QtWidgets.QWidget, msg: str) -> None:
super().__init__(parent)
with (ui_path / "exportwindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.label.setText(msg)
self.setWindowFlags(
QtCore.Qt.WindowType(self.windowFlags() & ~QtCore.Qt.WindowType.WindowContextHelpButtonHint)
)
self.cbxDeleteOriginal.setChecked(False)
self.cbxAddToList.setChecked(True)
self.radioDontCreate.setChecked(True)
self.deleteOriginal = False
self.addToList = True
self.fileConflictBehavior = ExportConflictOpts.dontCreate
def accept(self) -> None:
QtWidgets.QDialog.accept(self)
self.deleteOriginal = self.cbxDeleteOriginal.isChecked()
self.addToList = self.cbxAddToList.isChecked()
if self.radioDontCreate.isChecked():
self.fileConflictBehavior = ExportConflictOpts.dontCreate
elif self.radioCreateNew.isChecked():
self.fileConflictBehavior = ExportConflictOpts.createUnique

View File

@@ -0,0 +1,325 @@
"""Functions for renaming files based on metadata"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import calendar
import datetime
import logging
import os
import pathlib
import string
from collections.abc import Collection, Iterable, Mapping, Sequence, Sized
from typing import Any, cast
from pathvalidate import Platform, normalize_platform, sanitize_filename
from comicapi.comicarchive import ComicArchive
from comicapi.genericmetadata import GenericMetadata
from comicapi.issuestring import IssueString
from comictaggerlib.defaults import DEFAULT_REPLACEMENTS, Replacement, Replacements
logger = logging.getLogger(__name__)
def get_rename_dir(ca: ComicArchive, rename_dir: str | pathlib.Path | None) -> pathlib.Path:
folder = ca.path.parent.absolute()
if rename_dir is not None:
if isinstance(rename_dir, str):
rename_dir = pathlib.Path(rename_dir.strip())
folder = rename_dir.absolute()
return folder
def _isnamedtupleinstance(x: Any) -> bool: # pragma: no cover
t = type(x)
b = t.__bases__
if len(b) != 1 or b[0] != tuple:
return False
f = getattr(t, "_fields", None)
if not isinstance(f, tuple):
return False
return all(isinstance(n, str) for n in f)
class MetadataFormatter(string.Formatter):
def __init__(
self, smart_cleanup: bool = False, platform: str = "auto", replacements: Replacements = DEFAULT_REPLACEMENTS
) -> None:
super().__init__()
self.smart_cleanup = smart_cleanup
self.platform = normalize_platform(platform)
self.replacements = replacements
def format_field(self, value: Any, format_spec: str) -> str:
if value is None or value == "":
return ""
return cast(str, super().format_field(value, format_spec))
def convert_field(self, value: Any, conversion: str | None) -> str:
if value is None:
return ""
if isinstance(value, Iterable) and not isinstance(value, (str, tuple)):
if conversion == "C":
if isinstance(value, Sized):
return str(len(value))
return ""
if conversion and conversion.isdecimal():
if not isinstance(value, Collection):
return ""
i = int(conversion) - 1
if i < 0:
i = 0
if i < len(value):
try:
return sorted(value)[i]
except Exception:
...
return list(value)[i]
return ""
if conversion == "j":
conversion = "s"
try:
return ", ".join(list(self.convert_field(v, conversion) for v in sorted(value) if v is not None))
except Exception:
...
return ", ".join(list(self.convert_field(v, conversion) for v in value if v is not None))
if not conversion:
return cast(str, super().convert_field(value, conversion))
if conversion == "u":
return str(value).upper()
if conversion == "l":
return str(value).casefold()
if conversion == "c":
return str(value).capitalize()
if conversion == "S":
return str(value).swapcase()
if conversion == "t":
return str(value).title()
if conversion.isdecimal():
return ""
return cast(str, super().convert_field(value, conversion))
def handle_replacements(self, string: str, replacements: list[Replacement]) -> str:
for find, replace, strict_only in replacements:
if self.is_strict() or not strict_only:
string = string.replace(find, replace)
return string
def none_replacement(self, value: Any, replacement: str, r: str) -> Any:
if r == "-" and value is None or value == "":
return replacement
if r == "+" and value is not None:
return replacement
return value
def split_replacement(self, field_name: str) -> tuple[str, str, str]:
if "-" in field_name:
return field_name.rpartition("-")
if "+" in field_name:
return field_name.rpartition("+")
return field_name, "", ""
def is_strict(self) -> bool:
return self.platform in [Platform.UNIVERSAL, Platform.WINDOWS]
def _vformat(
self,
format_string: str,
args: Sequence[Any],
kwargs: Mapping[str, Any],
used_args: set[Any],
recursion_depth: int,
auto_arg_index: int = 0,
) -> tuple[str, int]:
if recursion_depth < 0:
raise ValueError("Max string recursion exceeded")
result = []
lstrip = False
for literal_text, field_name, format_spec, conversion in self.parse(format_string):
# output the literal text
if literal_text:
if lstrip:
literal_text = literal_text.lstrip("-_)}]#")
if self.smart_cleanup:
literal_text = self.handle_replacements(literal_text, self.replacements.literal_text)
lspace = literal_text[0].isspace() if literal_text else False
rspace = literal_text[-1].isspace() if literal_text else False
literal_text = " ".join(literal_text.split())
if literal_text == "":
literal_text = " "
else:
if lspace:
literal_text = " " + literal_text
if rspace:
literal_text += " "
result.append(literal_text)
lstrip = False
# if there's a field, output it
if field_name is not None and field_name != "":
field_name, r, replacement = self.split_replacement(field_name)
field_name = field_name.casefold()
# this is some markup, find the object and do the formatting
# handle arg indexing when digit field_names are given.
if field_name.isdigit():
raise ValueError("cannot use a number as a field name")
# given the field_name, find the object it references
# and the argument it came from
try:
obj, arg_used = self.get_field(field_name, args, kwargs)
used_args.add(arg_used)
except Exception:
obj = None
obj = self.none_replacement(obj, replacement, r)
# do any conversion on the resulting object
obj = self.convert_field(obj, conversion)
if r == "-":
obj = self.none_replacement(obj, replacement, r)
# expand the format spec, if needed
format_spec, _ = self._vformat(
cast(str, format_spec), args, kwargs, used_args, recursion_depth - 1, auto_arg_index=False
)
# format the object and append to the result
fmt_obj = self.format_field(obj, format_spec)
if fmt_obj == "" and result and self.smart_cleanup and literal_text:
if self.str_contains(result[-1], "({["):
lstrip = True
if result:
if " " in result[-1]:
result[-1], _, _ = result[-1].rstrip().rpartition(" ")
result[-1] = result[-1].rstrip("-_({[#")
if self.smart_cleanup:
# colons and slashes get special treatment
fmt_obj = self.handle_replacements(fmt_obj, self.replacements.format_value)
fmt_obj = " ".join(fmt_obj.split())
fmt_obj = str(sanitize_filename(fmt_obj, platform=self.platform))
result.append(fmt_obj)
return "".join(result), False
def str_contains(self, chars: str, string: str) -> bool:
for char in chars:
if char in string:
return True
return False
class FileRenamer:
def __init__(
self,
metadata: GenericMetadata | None,
platform: str = "auto",
replacements: Replacements = DEFAULT_REPLACEMENTS,
) -> None:
self.template = "{publisher}/{series}/{series} v{volume} #{issue} (of {issue_count}) ({year})"
self.smart_cleanup = True
self.issue_zero_padding = 3
self.metadata = metadata or GenericMetadata()
self.move = False
self.platform = platform
self.replacements = replacements
self.original_name = ""
self.move_only = False
def set_metadata(self, metadata: GenericMetadata, original_name: str) -> None:
self.metadata = metadata
self.original_name = original_name
def set_issue_zero_padding(self, count: int) -> None:
self.issue_zero_padding = count
def set_smart_cleanup(self, on: bool) -> None:
self.smart_cleanup = on
def set_template(self, template: str) -> None:
self.template = template
def determine_name(self, ext: str) -> str:
class Default(dict[str, Any]):
def __missing__(self, key: str) -> str:
return "{" + key + "}"
md = self.metadata
template = self.template
new_name = ""
fmt = MetadataFormatter(self.smart_cleanup, platform=self.platform, replacements=self.replacements)
md_dict = vars(md)
md_dict.update(
dict(
month_name=None,
month_abbr=None,
date=None,
genre=None,
story_arc=None,
series_group=None,
web_link=None,
character=None,
team=None,
location=None,
)
)
md_dict["issue"] = IssueString(md.issue).as_string(pad=self.issue_zero_padding)
for role in ["writer", "penciller", "inker", "colorist", "letterer", "cover artist", "editor", "translator"]:
md_dict[role] = md.get_primary_credit(role)
if (isinstance(md.month, int) or isinstance(md.month, str) and md.month.isdigit()) and 0 < int(md.month) < 13:
md_dict["month_name"] = calendar.month_name[int(md.month)]
md_dict["month_abbr"] = calendar.month_abbr[int(md.month)]
if md.year is not None and datetime.MINYEAR <= md.year <= datetime.MAXYEAR:
md_dict["date"] = datetime.datetime(year=md.year, month=md.month or 1, day=md.day or 1)
if md.genres:
md_dict["genre"] = sorted(md.genres)[0]
if md.story_arcs:
md_dict["story_arc"] = md.story_arcs[0]
if md.series_groups:
md_dict["series_group"] = md.series_groups[0]
if md.web_links:
md_dict["web_link"] = md.web_links[0]
if md.characters:
md_dict["character"] = sorted(md.characters)[0]
if md.teams:
md_dict["team"] = sorted(md.teams)[0]
if md.locations:
md_dict["location"] = sorted(md.locations)[0]
new_basename = ""
for component in pathlib.PureWindowsPath(template).parts:
new_basename = str(
sanitize_filename(fmt.vformat(component, args=[], kwargs=Default(md_dict)), platform=self.platform)
).strip()
new_name = os.path.join(new_name, new_basename)
if self.move_only:
new_folder = os.path.join(new_name, os.path.splitext(self.original_name)[0])
return new_folder + ext
if self.move:
return new_name.strip() + ext
return new_basename.strip() + ext
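
MetadataFormatter adds a handful of conversions on top of str.format: !u/!l/!c/!S/!t case transforms, !j to join a collection, a decimal conversion to pick the n-th element of a sorted collection, and a trailing '-<text>' in a field name to output fallback text when the value is missing ('+<text>' outputs the text only when a value is present). A short usage sketch, assuming comictaggerlib and its dependency pathvalidate are importable:

from comictaggerlib.filerenamer import MetadataFormatter

fmt = MetadataFormatter(smart_cleanup=True, platform="universal")
out = fmt.vformat(
    "{series!t} #{issue} [{genres!j}]{volume- (no volume)}",
    args=[],
    kwargs={"series": "plastic man", "issue": "001", "genres": {"Humor", "Super-Hero"}, "volume": None},
)
print(out)  # e.g. 'Plastic Man #001 [Humor, Super-Hero](no volume)'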

View File

@@ -0,0 +1,418 @@
"""A PyQt6 widget for managing list of comic archive files"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import os
import pathlib
import platform
from typing import Callable, cast
from PyQt6 import QtCore, QtGui, QtWidgets, uic
from comicapi import utils
from comicapi.comicarchive import ComicArchive
from comictaggerlib.ctsettings import ct_ns
from comictaggerlib.graphics import graphics_path
from comictaggerlib.optionalmsgdialog import OptionalMessageDialog
from comictaggerlib.settingswindow import linuxRarHelp, macRarHelp, windowsRarHelp
from comictaggerlib.ui import ui_path
from comictaggerlib.ui.qtutils import center_window_on_parent
logger = logging.getLogger(__name__)
class FileSelectionList(QtWidgets.QWidget):
selectionChanged = QtCore.pyqtSignal(QtCore.QVariant)
listCleared = QtCore.pyqtSignal()
fileColNum = 0
MDFlagColNum = 1
typeColNum = 2
readonlyColNum = 3
folderColNum = 4
dataColNum = fileColNum
def __init__(
self, parent: QtWidgets.QWidget, config: ct_ns, dirty_flag_verification: Callable[[str, str], bool]
) -> None:
super().__init__(parent)
with (ui_path / "fileselectionlist.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.config = config
self.twList.horizontalHeader().setMinimumSectionSize(50)
self.twList.currentItemChanged.connect(self.current_item_changed_cb)
self.currentItem = None
self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
self.dirty_flag = False
select_all_action = QtGui.QAction("Select All", self)
remove_action = QtGui.QAction("Remove Selected Items", self)
self.separator = QtGui.QAction("", self)
self.separator.setSeparator(True)
select_all_action.setShortcut("Ctrl+A")
remove_action.setShortcut("Backspace" if platform.system() == "Darwin" else "Delete")
select_all_action.triggered.connect(self.select_all)
remove_action.triggered.connect(self.remove_selection)
self.addAction(select_all_action)
self.addAction(remove_action)
self.addAction(self.separator)
self.loaded_paths: set[pathlib.Path] = set()
self.dirty_flag_verification = dirty_flag_verification
self.rar_ro_shown = False
def get_sorting(self) -> tuple[int, int]:
col = self.twList.horizontalHeader().sortIndicatorSection()
order = self.twList.horizontalHeader().sortIndicatorOrder().value
return int(col), int(order)
def set_sorting(self, col: int, order: QtCore.Qt.SortOrder) -> None:
self.twList.horizontalHeader().setSortIndicator(col, order)
def add_app_action(self, action: QtGui.QAction) -> None:
self.insertAction(QtGui.QAction(), action)
def set_modified_flag(self, modified: bool) -> None:
self.dirty_flag = modified
def select_all(self) -> None:
self.twList.setRangeSelected(
QtWidgets.QTableWidgetSelectionRange(0, 0, self.twList.rowCount() - 1, self.twList.columnCount() - 1), True
)
def deselect_all(self) -> None:
self.twList.setRangeSelected(
QtWidgets.QTableWidgetSelectionRange(0, 0, self.twList.rowCount() - 1, self.twList.columnCount() - 1), False
)
def remove_archive_list(self, ca_list: list[ComicArchive]) -> None:
self.twList.setSortingEnabled(False)
current_removed = False
for ca in ca_list:
for row in range(self.twList.rowCount()):
row_ca = self.get_archive_by_row(row)
if row_ca == ca:
if row == self.twList.currentRow():
current_removed = True
self.twList.removeRow(row)
self.loaded_paths -= {ca.path}
break
self.twList.setSortingEnabled(True)
if self.twList.rowCount() > 0 and current_removed:
# since on a removal, we select row 0, make sure callback occurs if
# we're already there
if self.twList.currentRow() == 0:
self.current_item_changed_cb(self.twList.currentItem(), None)
self.twList.selectRow(0)
elif self.twList.rowCount() <= 0:
self.listCleared.emit()
def get_archive_by_row(self, row: int) -> ComicArchive | None:
if row >= 0:
ca: ComicArchive = self.twList.item(row, FileSelectionList.dataColNum).data(QtCore.Qt.ItemDataRole.UserRole)
return ca
return None
def get_current_archive(self) -> ComicArchive | None:
return self.get_archive_by_row(self.twList.currentRow())
def remove_selection(self) -> None:
row_list = []
for item in self.twList.selectedItems():
if item.column() == 0:
row_list.append(item.row())
if len(row_list) == 0:
return
if self.twList.currentRow() in row_list:
if not self.dirty_flag_verification(
"Remove Archive", "If you close this archive, data in the form will be lost. Are you sure?"
):
return
row_list.sort()
row_list.reverse()
self.twList.currentItemChanged.disconnect(self.current_item_changed_cb)
self.twList.setSortingEnabled(False)
for i in row_list:
self.loaded_paths -= {self.get_archive_by_row(i).path} # type: ignore[union-attr]
self.twList.removeRow(i)
self.twList.setSortingEnabled(True)
self.twList.currentItemChanged.connect(self.current_item_changed_cb)
if self.twList.rowCount() > 0:
# since on a removal, we select row 0, make sure callback occurs if
# we're already there
if self.twList.currentRow() == 0:
self.current_item_changed_cb(self.twList.currentItem(), None)
self.twList.selectRow(0)
else:
self.listCleared.emit()
def add_path_list(self, pathlist: list[str]) -> None:
if not pathlist:
return
filelist = utils.get_recursive_filelist(pathlist)
# we now have a list of files to add
progdialog = None
if len(filelist) > 3:
# Prog dialog on Linux flakes out for small range, so scale up
progdialog = QtWidgets.QProgressDialog("", "Cancel", 0, len(filelist), parent=self)
progdialog.setWindowTitle("Adding Files")
progdialog.setWindowModality(QtCore.Qt.WindowModality.WindowModal)
progdialog.setMinimumDuration(300)
progdialog.show()
center_window_on_parent(progdialog)
first_added = None
rar_added_ro = False
self.twList.setSortingEnabled(False)
for idx, f in enumerate(filelist):
if idx % 10 == 0:
QtCore.QCoreApplication.processEvents()
if progdialog is not None:
if progdialog.wasCanceled():
break
progdialog.setValue(idx + 1)
progdialog.setLabelText(f)
row, ca = self.add_path_item(f)
if row is not None:
rar_added_ro = bool(ca and ca.archiver.name() == "RAR" and not ca.archiver.is_writable())
if first_added is None and row != -1:
first_added = row
if progdialog is not None:
progdialog.hide()
QtCore.QCoreApplication.processEvents()
if first_added is not None:
self.twList.selectRow(first_added)
else:
if len(pathlist) == 1 and os.path.isfile(pathlist[0]):
QtWidgets.QMessageBox.information(
self, "File Open", "Selected file doesn't seem to be a comic archive."
)
else:
QtWidgets.QMessageBox.information(self, "File/Folder Open", "No readable comic archives were found.")
if rar_added_ro:
self.rar_ro_message()
self.twList.setSortingEnabled(True)
# Adjust column size
self.twList.resizeColumnsToContents()
self.twList.setColumnWidth(FileSelectionList.MDFlagColNum, 35)
self.twList.setColumnWidth(FileSelectionList.readonlyColNum, 35)
self.twList.setColumnWidth(FileSelectionList.typeColNum, 45)
if self.twList.columnWidth(FileSelectionList.fileColNum) > 250:
self.twList.setColumnWidth(FileSelectionList.fileColNum, 250)
if self.twList.columnWidth(FileSelectionList.folderColNum) > 200:
self.twList.setColumnWidth(FileSelectionList.folderColNum, 200)
def rar_ro_message(self) -> None:
if not self.rar_ro_shown:
if platform.system() == "Windows":
rar_help = windowsRarHelp
elif platform.system() == "Darwin":
rar_help = macRarHelp
else:
rar_help = linuxRarHelp
OptionalMessageDialog.msg_no_checkbox(
self,
"RAR Files are Read-Only",
"It looks like you have opened a RAR/CBR archive,\n"
"however ComicTagger cannot write to them without the rar program and are marked read only!\n\n"
f"{rar_help}",
)
self.rar_ro_shown = True
def get_current_list_row(self, path: str) -> tuple[int, ComicArchive]:
pl = pathlib.Path(path)
if pl not in self.loaded_paths:
return -1, None # type: ignore[return-value]
for r in range(self.twList.rowCount()):
ca = cast(ComicArchive, self.get_archive_by_row(r))
if ca.path == pl:
return r, ca
return -1, None # type: ignore[return-value]
def add_path_item(self, path: str) -> tuple[int, ComicArchive]:
path = str(path)
path = os.path.abspath(path)
current_row, ca = self.get_current_list_row(path)
if current_row >= 0:
return current_row, ca
ca = ComicArchive(
path, str(graphics_path / "nocover.png"), hash_archive=self.config.Runtime_Options__preferred_hash
)
if ca.seems_to_be_a_comic_archive():
self.loaded_paths.add(ca.path)
row: int = self.twList.rowCount()
self.twList.insertRow(row)
filename_item = QtWidgets.QTableWidgetItem()
folder_item = QtWidgets.QTableWidgetItem()
md_item = QtWidgets.QTableWidgetItem()
readonly_item = QtWidgets.QTableWidgetItem()
type_item = QtWidgets.QTableWidgetItem()
item_text = os.path.split(ca.path)[1]
filename_item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
filename_item.setData(QtCore.Qt.ItemDataRole.UserRole, ca)
filename_item.setText(item_text)
filename_item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
self.twList.setItem(row, FileSelectionList.fileColNum, filename_item)
item_text = os.path.split(ca.path)[0]
folder_item.setText(item_text)
folder_item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
folder_item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, FileSelectionList.folderColNum, folder_item)
type_item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, FileSelectionList.typeColNum, type_item)
md_item.setText(", ".join(x for x in ca.get_supported_tags() if ca.has_tags(x)))
md_item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
md_item.setTextAlignment(QtCore.Qt.AlignmentFlag.AlignHCenter)
self.twList.setItem(row, FileSelectionList.MDFlagColNum, md_item)
if not ca.is_writable():
readonly_item.setCheckState(QtCore.Qt.CheckState.Checked)
readonly_item.setData(QtCore.Qt.ItemDataRole.UserRole, True)
readonly_item.setText(" ")
else:
readonly_item.setData(QtCore.Qt.ItemDataRole.UserRole, False)
readonly_item.setCheckState(QtCore.Qt.CheckState.Unchecked)
# This is a non-breaking space; it sorts after a regular space ' '
readonly_item.setText("\xa0")
readonly_item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
readonly_item.setTextAlignment(QtCore.Qt.AlignmentFlag.AlignHCenter)
self.twList.setItem(row, FileSelectionList.readonlyColNum, readonly_item)
return row, ca
return -1, None # type: ignore[return-value]
def update_row(self, row: int) -> None:
if row >= 0:
ca: ComicArchive = self.twList.item(row, FileSelectionList.dataColNum).data(QtCore.Qt.ItemDataRole.UserRole)
filename_item = self.twList.item(row, FileSelectionList.fileColNum)
folder_item = self.twList.item(row, FileSelectionList.folderColNum)
md_item = self.twList.item(row, FileSelectionList.MDFlagColNum)
type_item = self.twList.item(row, FileSelectionList.typeColNum)
readonly_item = self.twList.item(row, FileSelectionList.readonlyColNum)
item_text = os.path.split(ca.path)[1]
filename_item.setText(item_text)
filename_item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item_text = os.path.split(ca.path)[0]
folder_item.setText(item_text)
folder_item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item_text = ca.archiver.name()
type_item.setText(item_text)
type_item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
md_item.setText(", ".join(x for x in ca.get_supported_tags() if ca.has_tags(x)))
if not ca.is_writable():
readonly_item.setCheckState(QtCore.Qt.CheckState.Checked)
readonly_item.setData(QtCore.Qt.ItemDataRole.UserRole, True)
readonly_item.setText(" ")
else:
readonly_item.setData(QtCore.Qt.ItemDataRole.UserRole, False)
readonly_item.setCheckState(QtCore.Qt.CheckState.Unchecked)
# This is a non-breaking space; it sorts after a regular space ' '
readonly_item.setText("\xa0")
def get_selected_archive_list(self) -> list[ComicArchive]:
ca_list: list[ComicArchive] = []
for r in range(self.twList.rowCount()):
item = self.twList.item(r, FileSelectionList.dataColNum)
if item.isSelected():
ca: ComicArchive = item.data(QtCore.Qt.ItemDataRole.UserRole)
ca_list.append(ca)
return ca_list
def update_current_row(self) -> None:
self.update_row(self.twList.currentRow())
def update_selected_rows(self) -> None:
self.twList.setSortingEnabled(False)
for r in range(self.twList.rowCount()):
item = self.twList.item(r, FileSelectionList.dataColNum)
if item.isSelected():
self.update_row(r)
self.twList.setSortingEnabled(True)
def current_item_changed_cb(self, curr: QtCore.QModelIndex | None, prev: QtCore.QModelIndex | None) -> None:
if curr is not None:
new_idx = curr.row()
old_idx = -1
if prev is not None:
old_idx = prev.row()
if old_idx == new_idx:
return
# don't allow change if modified
if prev is not None and new_idx != old_idx:
if not self.dirty_flag_verification(
"Change Archive", "If you change archives now, data in the form will be lost. Are you sure?"
):
self.twList.currentItemChanged.disconnect(self.current_item_changed_cb)
self.twList.setCurrentItem(prev)
self.twList.currentItemChanged.connect(self.current_item_changed_cb)
# Need to defer this revert selection, for some reason
QtCore.QTimer.singleShot(1, self.revert_selection)
return
fi = self.twList.item(new_idx, FileSelectionList.dataColNum).data(QtCore.Qt.ItemDataRole.UserRole)
self.selectionChanged.emit(QtCore.QVariant(fi))
def revert_selection(self) -> None:
self.twList.selectRow(self.twList.currentRow())

View File

@@ -0,0 +1,5 @@
from __future__ import annotations
import importlib.resources
graphics_path = importlib.resources.files(__package__)

View File

[Image diffs not shown: three image files changed; before and after sizes 29 KiB, 56 KiB, and 16 KiB.]

Some files were not shown because too many files have changed in this diff.