Compare commits

...

984 Commits

Author SHA1 Message Date
de3a9352ea Allow reading cli metadata from a file 2024-04-12 14:10:21 -07:00
d104ae1e8e Update help message for the -m option 2024-04-11 15:46:29 -07:00
8bcd51f49b Improve commandline metadata override
Change parse_metadata_from_string to yaml syntax
Add a special value to remove existing values when metadata is overlayed
2024-04-06 12:03:01 -07:00
de084ffff9 Fix string value of GenericMetadata 2024-04-06 12:02:21 -07:00
eb6c2ed72b [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/asottile/pyupgrade: v3.15.1 → v3.15.2](https://github.com/asottile/pyupgrade/compare/v3.15.1...v3.15.2)
- [github.com/PyCQA/autoflake: v2.3.0 → v2.3.1](https://github.com/PyCQA/autoflake/compare/v2.3.0...v2.3.1)
- [github.com/psf/black: 24.2.0 → 24.3.0](https://github.com/psf/black/compare/24.2.0...24.3.0)
2024-03-25 17:15:40 +00:00
c99b691041 pre-commit 2024-03-17 14:03:05 -07:00
48fd1c2897 Force plain text on TextEdits 2024-03-16 11:52:14 -07:00
37c809db2a Fix crash when no comics are found in the IssueIdentifier 2024-03-16 11:52:14 -07:00
51db3e1249 Allow ignoring errors that happen in the GUI 2024-03-16 11:52:14 -08:00
c99f3fa083 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/pre-commit/mirrors-mypy: v1.8.0 → v1.9.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.8.0...v1.9.0)
2024-03-12 20:00:49 +00:00
6f3a5a8860 Set the shell to bash 2024-03-09 19:49:59 -08:00
ebd99cb144 Set PKG_CONFIG_PATH as actions/setup-python@v5 overrides it 2024-03-09 18:06:30 -08:00
b1a9b0b016 Only upgrade icu4c and pkg-config 2024-03-09 14:47:47 -08:00
0929a6678b Update icu4c paths and upgrade packages on macOS 2024-03-09 14:45:49 -08:00
69824412ce Update GH Actions 2024-03-09 14:07:11 -08:00
0d9756f8b0 Pin minimum version for comicinfoxml 2024-03-09 13:51:35 -08:00
244cd9101d Remove commented code 2024-03-09 13:46:51 -08:00
3df263858d Merge branch 'web-links' into develop 2024-03-09 13:42:29 -08:00
b45c39043b Merge branch 'comicfn2dict' into develop 2024-03-09 13:10:27 -08:00
9eae71fb62 Disable checkboxes when the complicated parser is not used 2024-03-09 13:07:49 -08:00
9a95adf47d Bump comicfn2dict 2024-03-09 13:02:02 -08:00
956c383e5f Fix py7zr 2024-03-05 15:13:03 -08:00
5155762711 Add comicfn2dict as an alternative filename parser 2024-03-03 21:47:31 -08:00
ea43eccd78 Merge branch 'ii-rework' into develop 2024-03-01 15:39:01 -08:00
ff2547e7f2 Disable buttons for add/remove weblink 2024-03-01 15:26:56 -08:00
163cf44751 Open the editor when adding a new web link 2024-02-26 19:04:33 -08:00
14ce8a759f Mark all QTextEdits as plain text only 2024-02-26 15:57:00 -08:00
22d92e1ded Move result determination out of _cover_matching 2024-02-26 15:38:13 -08:00
3c3700838b Select item on add and set the dirty flag on change 2024-02-25 08:26:29 -08:00
05423c8270 Use a QListWidget for web_links
Fix web_link in md_attributes
2024-02-24 22:31:45 -08:00
d277eb332b Add an option to disable prompt on save Fixes #422 2024-02-24 19:56:32 -08:00
dcad32ade0 Fix settngs generation 2024-02-24 19:55:28 -08:00
dd0b637566 Bump settngs 2024-02-24 19:01:10 -08:00
bad8b85874 Fix tests 2024-02-24 18:30:41 -08:00
938f760a37 Remove IssueIdentifier.search 2024-02-23 20:50:17 -08:00
f382c2f814 Update Tests 2024-02-23 20:47:22 -08:00
4e75731024 Re-write IssueIdentifier.search as IssueIdentifier.identify 2024-02-23 20:47:04 -08:00
920a0ed1af Implement better migration of changed settings should fix #609 2024-02-23 15:45:18 -08:00
9eb50da744 Fix setting rar info in the settings window Fixes #596
Look in all drive letters for rar executable
2024-02-23 15:45:18 -08:00
2e2d886cb2 Bump settngs 2024-02-22 14:52:26 -08:00
5738433c2b Fix fileselectionlist
Remove the custom widgetitem
Set a minimum size for the columns
Use a space " " and an nbsp "\xa0" for the check column to allow sorting
2024-02-22 14:30:15 -08:00
4a33dbde46 Fix PyInstaller packaging 2024-02-22 14:30:15 -08:00
10a48634bd Update talker dependencies 2024-02-19 12:29:36 -08:00
2492d96fb3 Merge branch 'pre-commit-ci-update-config' into develop 2024-02-19 12:08:43 -08:00
87248503b4 Allow 7z again 2024-02-19 11:57:30 -08:00
7705e7ea1f [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/asottile/pyupgrade: v3.15.0 → v3.15.1](https://github.com/asottile/pyupgrade/compare/v3.15.0...v3.15.1)
- [github.com/PyCQA/autoflake: v2.2.1 → v2.3.0](https://github.com/PyCQA/autoflake/compare/v2.2.1...v2.3.0)
- [github.com/psf/black: 24.1.1 → 24.2.0](https://github.com/psf/black/compare/24.1.1...24.2.0)
2024-02-19 17:19:25 +00:00
54b0630891 Allow 7z for rar decompression on Windows 2024-02-18 21:57:51 -08:00
27e70b966f Export translator_synonyms 2024-02-18 21:39:27 -08:00
ad8b92743c Remove unused variable 2024-02-18 18:01:51 -08:00
22b44c87ca Merge branch 'mizaki/autotag_source' into develop 2024-02-18 18:00:09 -08:00
2eca743f20 Fix #602
Tests were not written correctly to catch the change in 2c3a2566cc;
this has now been corrected
2024-02-18 17:31:00 -08:00
bb4be306cc Fix fileselectionlist columns 2024-02-18 17:28:55 -08:00
768ef0b6bc Fix rar exe handling 2024-02-18 01:40:49 -08:00
b2d3869488 Update filerenaming for web_links
Ensure the j specifier in MetadataFormatter converts to str before joining
Add a web_link variable to the filerenamer
2024-02-17 17:42:07 -08:00
44e9a47a8b Support multiple web_links 2024-02-17 17:42:07 -08:00
215587d9a4 Move path under progress bar 2024-02-17 18:38:51 +00:00
7430e59b64 Add attribution to auto tag window 2024-02-17 18:36:49 +00:00
09490b8ebf Merge branch 'lordwelch-local-plugins' into develop 2024-02-12 17:40:09 -08:00
1e4a3b2484 Merge branch 'mizaki-meta_multi' into develop 2024-02-12 17:29:45 -08:00
b9bf3be4b2 Add short metadata style names 2024-02-12 20:57:32 +00:00
a1e4cec94f Log file path to plugin when it fails to load and remove debug statements 2024-02-11 13:18:03 -08:00
65e857af8b Move cache reset and load outside of loop. continue if it's impossible to use metadata 2024-02-11 19:32:12 +00:00
8887d48b3e Save metadata styles with one result per archive 2024-02-11 13:57:34 +00:00
e14714e26b Fix the --list-plugins command 2024-02-10 21:25:57 -08:00
8ec16528ab Implement local plugins 2024-02-10 21:00:24 -08:00
e9e619c992 Use CheckableComboBox in ui file 2024-02-11 01:51:47 +00:00
a6b60a4317 Simplify enabled widget check and reset cache before loading, break on failed metadata writing 2024-02-11 00:53:40 +00:00
69615c6c07 Fix hash and test 2024-02-10 15:02:24 -08:00
da6b2b02f4 Implement a replaceWidget helper function 2024-02-10 14:42:47 -08:00
3dfdae4033 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-02-10 01:55:15 +00:00
23021ba632 Add support for saving multiple metadata styles in the GUI
Unwind credit color comprehension

Convert save style from a string setting to a list

Use lordwelch version of Checkable combobox

Improve readability, fix label alignment in taggerwindow.ui, better report removal of tags and clearer number meanings.

Unwind list comprehension for easier readability
2024-02-10 01:55:15 +00:00
bc335f1686 Forbid nested comprehensions 2024-02-06 18:01:26 -08:00
999d3eb497 Merge branch 'pre-commit-ci-update-config' into develop 2024-02-06 17:08:43 -08:00
bf67c6d270 Add E701 to flake8 ignores for new black version 2024-02-02 14:36:11 -08:00
df762746ec [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-01-29 17:14:26 +00:00
6687e5c6ca [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/psf/black: 23.12.1 → 24.1.1](https://github.com/psf/black/compare/23.12.1...24.1.1)
2024-01-29 17:14:04 +00:00
2becec0fb6 Update help for --overwrite 2024-01-22 17:01:40 -08:00
fbe56f4db9 Remove unnecessary dest arguments in settings 2024-01-22 17:00:59 -08:00
085543321a cbxClearFormBeforePopulating not working 2024-01-22 16:50:15 -08:00
f8c0ca195a Add cbxDisableCR, update cbxSplitWords and cbxClearFormBeforePopulating 2024-01-22 16:49:57 -08:00
dda0cb521a Add more credit synonyms 2024-01-21 15:06:34 -08:00
bb1a83b4ba Fix the rename command 2024-01-21 14:01:11 -08:00
f34e8200dd Fix add_to_path tests 2024-01-20 10:34:40 -08:00
539aac1307 Fix clearing lists via the '-m' option Fixes #587 2024-01-14 13:38:11 -08:00
f75ee58ac0 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/PyCQA/flake8: 6.1.0 → 7.0.0](https://github.com/PyCQA/flake8/compare/6.1.0...7.0.0)
2024-01-08 17:15:56 +00:00
d27621ccd7 Merge branch 'pre-commit-ci-update-config' into develop 2023-12-31 14:45:45 -08:00
1ca585a65c Fix #584 2023-12-31 14:33:27 -08:00
39407286b3 Fix tarfile 2023-12-25 22:59:57 -08:00
6e56872121 Fix running dmgbuild again 2023-12-25 22:50:11 -08:00
888c50d72a Fix running dmgbuild 2023-12-25 22:41:57 -08:00
231b600a0e Switch to tar.gz and dmg archives to reduce space 2023-12-25 22:16:18 -08:00
db00736f58 Fix filename parsing not respecting user settings 2023-12-25 21:57:31 -08:00
5a714e40d9 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/psf/black: 23.12.0 → 23.12.1](https://github.com/psf/black/compare/23.12.0...23.12.1)
- [github.com/pre-commit/mirrors-mypy: v1.7.1 → v1.8.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.7.1...v1.8.0)
2023-12-25 17:15:30 +00:00
230a4b6558 Update namespace 2023-12-24 18:32:52 -08:00
f7bd6ee4f3 Add cix support 2023-12-24 18:32:52 -08:00
1ef6e40c29 Allow the avif extension 2023-12-24 18:32:52 -08:00
7d1bf8525b Merge branch 'metadata-plugin' into develop 2023-12-24 18:32:42 -08:00
59694993ff Fix loading previous existing xml 2023-12-24 18:28:38 -08:00
109d8efc0b Update pyinstaller hook 2023-12-24 18:04:35 -08:00
c8507c08a9 Ensure ComicRack and CoMet metadata preserve unknown xml tags 2023-12-23 23:50:58 -08:00
28be4d9dd7 Improve errors when loading plugins 2023-12-23 23:47:44 -08:00
ceb3b30e5c Always apply the default page list when writing metadata 2023-12-20 21:24:12 -08:00
8dccedc229 Bump metron-talker minimum version 2023-12-19 09:05:56 -08:00
c3a8221d99 Return an empty object if an archive does not have the requested style 2023-12-18 16:59:31 -08:00
ed480720aa Update AUTHORS 2023-12-18 20:38:38 +00:00
f18f961dcd [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/pre-commit/pre-commit-hooks: v4.4.0 → v4.5.0](https://github.com/pre-commit/pre-commit-hooks/compare/v4.4.0...v4.5.0)
- [github.com/asottile/setup-cfg-fmt: v2.4.0 → v2.5.0](https://github.com/asottile/setup-cfg-fmt/compare/v2.4.0...v2.5.0)
- [github.com/asottile/pyupgrade: v3.10.1 → v3.15.0](https://github.com/asottile/pyupgrade/compare/v3.10.1...v3.15.0)
- [github.com/PyCQA/isort: 5.12.0 → 5.13.2](https://github.com/PyCQA/isort/compare/5.12.0...5.13.2)
- [github.com/psf/black: 23.7.0 → 23.12.0](https://github.com/psf/black/compare/23.7.0...23.12.0)
- [github.com/pre-commit/mirrors-mypy: v1.5.1 → v1.7.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.5.1...v1.7.1)
2023-12-18 17:17:28 +00:00
df781f67e3 Fix assigning black_and_white value 2023-12-18 02:46:53 -08:00
addddaf44e List metadata styles when listing plugins 2023-12-18 02:37:40 -08:00
4660b14453 Fixup metadata handling 2023-12-18 02:37:40 -08:00
9c231d7e11 Add better page info handling
Rename set_default_page_list to apply_default_page_list and apply
 during read_metadata
Add a filename attribute to the ImageMetadata class
Mark image_index as required
Always sort the page name list; a comic application will never need the
 unsorted list of names
Assign the first result from get_cover_page_index_list to coverImage in
 CoMet tags
Allow an Archiver to be passed to the ComicArchive constructor
2023-12-18 02:37:34 -08:00
989470772f Make widget disabling more consistent 2023-12-18 01:24:30 -08:00
8b7443945b Use ids for metadata type in file selection list
Removed unnecessary FileInfo class
2023-12-17 22:01:47 -08:00
da373764e0 Let the original ComicRack metadata be disabled
Ensure metadata styles can be overridden by other plugins
2023-12-17 21:47:44 -08:00
fd868d9596 Add supports_credit_role to metadata plugins 2023-12-17 21:47:44 -08:00
ae5e246180 Add plugin support for metadata 2023-12-17 21:47:43 -08:00
04b3b6b4ab Do not normalize series_name when a literal search is requested 2023-12-17 19:14:38 -08:00
564ce24988 Bump settngs to 0.9.2 2023-12-17 18:30:01 -08:00
3b2e763d7d Merge branch 'json-output' into develop 2023-12-17 18:28:53 -08:00
50859d07c4 Set the return code to 3 if any results are not successful 2023-12-17 18:17:19 -08:00
04bf7f484e Ensure IssueIdentifier output goes to the right place 2023-12-17 18:10:18 -08:00
4c1247f49c Print the summary even if quiet mode is enabled 2023-12-17 18:03:25 -08:00
17a8513efc Disable json output in interactive mode 2023-12-17 17:56:12 -08:00
7ada13bcc3 Remove unnecessary print statements 2023-12-17 17:35:21 -08:00
5b1c92e7b8 Fix a crash when fetching images during auto-tag in the gui 2023-12-17 16:25:21 -08:00
45643cc594 Add integration tests 2023-12-17 16:24:32 -08:00
ab6b970063 Create an Action tuple for determining the current command 2023-12-17 16:16:21 -08:00
9571020217 Upgrade settngs to 0.9.1 2023-12-17 16:15:26 -08:00
bb67ab009e Ensure that all output goes through a logger before output to the user
Adds an option to output json for CLI options
2023-12-17 15:51:43 -08:00
f3b235ae14 Move pyupgrade above autoflake to reduce runs of pre-commit required 2023-12-16 17:28:41 -08:00
0de95777b4 Handle multiple options sharing a dest 2023-12-16 17:06:27 -08:00
9d36ed0dc6 Update AUTHORS 2023-12-16 17:50:55 +00:00
e0eec002fa docs(contributor): contrib-readme-action has updated readme 2023-12-16 17:50:51 +00:00
79779b7a46 Merge branch 'DrMcCoy/fix_crash_shortcut_pagetype' into develop 2023-12-16 09:49:09 -08:00
df24ad0008 Fix crash when using shortcut to set page type
QListWidget has no rowCount() method; it has count() instead.
2023-12-16 17:16:31 +01:00
651c5aed37 Add packaging dependency 2023-12-13 09:53:41 -08:00
3c83dbd038 Merge branch 'mizaki/talkers_version_check' into develop 2023-12-13 09:52:20 -08:00
fc6e0c3db3 Parse ct version only once 2023-12-12 23:47:47 +00:00
c5cfd3ebdc Add a link to the log folder from the log window 2023-12-01 19:48:16 -08:00
cead69f8e3 Merge branch 'mizaki/settings_encoder' into develop 2023-12-01 19:43:18 -08:00
4d2b9e1157 Warn on bad min ct required version and use anyway. Use clearer log messages 2023-12-01 14:09:17 +00:00
f977e70562 Rename min ct required var. Use a minimum-version-only check instead of a full spec 2023-12-01 01:23:46 +00:00
12dd06c558 Add CT version check against talker requirements 2023-11-30 01:50:28 +00:00
70541cc9ee Encode pathlib.Path for the settings file. Validate types from the JSON settings file after loading. JSON.decoder is not used due to its limitations with context. 2023-11-28 23:21:04 +00:00
d37c7a680d Update dependencies 2023-11-28 15:08:26 -08:00
1ff6f1768b Use importlib.resources instead of __file__ 2023-11-25 12:32:50 -08:00
99325f40cf Merge branch 'mizaki/cleanup_html' into develop 2023-11-23 16:12:02 -08:00
65948cd9cd Merge branch 'bump-settngs' into develop 2023-11-23 16:06:01 -08:00
305eb1dec5 Enable stricter mypy configuration 2023-11-23 16:05:16 -08:00
9aad872ae6 Merge branch 'uigenerator' into develop 2023-11-23 15:19:20 -08:00
a478a35f66 Simplify setting values on Qt widgets
Add explanatory comments
2023-11-23 15:18:59 -08:00
128cab077c Replace pycountry with isocodes
isocodes is updated more often and doesn't depend on deprecated packages
2023-11-23 14:21:21 -08:00
9dc6f8914f Upgrade settings to 0.8.0 2023-11-19 23:14:40 -08:00
57873136b6 Use isinstance for type check 2023-11-14 15:18:48 -08:00
987f3fc564 cleanup_html improvements 2023-11-13 01:41:26 +00:00
10776dbb07 Fix flake8 issues 2023-11-09 18:23:57 -08:00
2d3f68167c Merge branch 'progress-dialog' into develop 2023-11-09 16:57:02 -08:00
770f64b746 Merge branch 'mizaki-talker_file_picker' into develop 2023-11-09 16:53:15 -08:00
235c12bd53 Convert types back to their declared types in talkeruigenerator 2023-11-09 16:52:41 -08:00
10b19606e0 Fix GenericMetadata __str__ 2023-11-05 21:36:29 -08:00
a7d1084a4d Remove flake8-warnings 2023-11-05 13:27:31 -08:00
21575a9fb8 Fix saving CBI when credits are empty 2023-11-05 13:27:14 -08:00
2258d70d7b Add file picker to talker options. Requires a type of pathlib.Path 2023-11-01 02:01:54 +00:00
b23c3195e3 Merge branch 'lexNumbers' into develop 2023-10-27 23:50:05 -07:00
bd9b3522d8 Improve edge cases
Lex `'` as a symbol
Lex multiple symbols as a single item
Prefer `$` at the start of a number
Simplify issue number parsing
2023-10-27 23:26:40 -07:00
78060dff61 Rework parse_series 2023-10-27 23:26:40 -07:00
4a29040c74 Add format to the filename parser result 2023-10-27 23:13:12 -07:00
496f3f0e75 fix reset after space 2023-10-23 22:05:42 -07:00
f03b2e58cf Improve lexing numbers
lex currency amounts as text
lex a '.' followed by a number as a number if there is a preceding space
2023-10-23 21:13:31 -07:00
29ddc3779a Ensure FilenameInfo is always filled out fixes #556 2023-10-23 21:08:55 -07:00
7842109ca2 Pin chardet version 2023-10-22 16:01:46 -07:00
7527dc4fd8 FIX: A hamming distance of 0 is a perfect match. Adjust to 100 for empty URLs 2023-10-12 22:34:16 +01:00
8dfd38a15c Merge branch 'rar-cwd' into develop 2023-10-12 01:31:57 -07:00
6227edb0a3 Set rar cwd to reduce errors 2023-10-12 01:30:32 -07:00
114a0bb615 Fix parsing '&' with the "complicated" filename parser 2023-10-12 01:26:31 -07:00
abfd97d915 Merge branch 'protofolius_issue_scheme' into develop 2023-10-11 17:05:27 -07:00
582b8cc57b Add more parseable filenames 2023-10-11 17:03:07 -07:00
97a24d8d52 Change dialog modality and only center dialog when it is created 2023-10-08 11:59:57 -07:00
edb087abde Handle errors when reading zip comments fixes #548 2023-10-07 11:49:57 -07:00
96c5c4aa28 Fix pyinstaller build
Fix exception when PyQt is not installed
2023-10-07 11:49:30 -07:00
4b93262d5f Merge branch 'mizaki-window_sorting' into develop 2023-10-06 20:14:35 -07:00
78a890f900 Fix parsing a month name in the series fixes #542 2023-10-06 20:06:39 -07:00
5bdbe7d181 Always update rows even if None 2023-10-05 22:14:45 +01:00
f250d2c5c3 Merge branch 'mizaki-gmd_list_set' into develop 2023-10-04 20:16:33 -07:00
b6d5fe7013 Improve rar error messages 2023-10-04 19:08:17 -07:00
80f3dd7ce4 Restore issue number sorting 2023-09-30 23:19:10 +01:00
0c63f77e53 Restore issue count and year sorting 2023-09-30 23:05:06 +01:00
5688cdea89 Merge branch 'mizaki-gentalker_password' into develop 2023-09-26 17:05:20 -07:00
2949626f6d Merge branch 'mizaki-remove_series_genres' into develop 2023-09-26 17:04:45 -07:00
319aa582e5 Remove ignoring default for setting generation combobox 2023-09-25 00:55:50 +01:00
058651cc29 Change metadata lists to sets. Changed CV talker to reflect and tidied 2023-09-24 14:33:57 +01:00
5874f3bcaf Remove genres from ComicSeries as it is no longer required with the new cache system 2023-09-22 23:15:04 +01:00
c6522865ab Use casefold 2023-09-21 16:05:13 +01:00
5684694055 Generate a password box for any settings dest name that ends in password 2023-09-21 01:47:08 +01:00
360a9e6308 Merge branch 'mizaki-talker_gen_combo' into develop 2023-09-17 16:39:33 -07:00
015959bd97 Merge branch 'mizaki-talker_setting_logo_blurb' into develop 2023-09-17 16:35:13 -07:00
8feade923a Don't capitalise, and therefore there is no need to use data on the combobox 2023-09-17 20:54:20 +01:00
df3e7912b3 Add talker information in setting window 2023-09-17 18:26:06 +01:00
919561099e Finish removing the script option 2023-09-17 08:36:00 -07:00
e7cc05679f Bump metron-talker version 2023-09-17 08:09:43 -07:00
99461c54f1 Fix a crash when setting the page type with no comic selected 2023-09-15 21:03:41 -07:00
56f172e7b5 Add combo box support to talker settings generator 2023-09-15 23:46:13 +01:00
ddd98ee86d Add metron-talker as an optional dependency 2023-09-15 15:13:14 -07:00
1d25179171 Allow unsetting metadata fields on the commandline fixes #528 2023-09-14 11:30:05 -07:00
7efef0bb44 Merge branch 'mizaki-on_change_windows' into develop 2023-09-14 11:20:01 -07:00
366e9cf6e8 Move update into its own function. Add missing title as a trigger for issue update. 2023-09-13 21:35:52 +01:00
57abe22515 Merge branch 'mizaki-fix_auto_id' into develop 2023-09-12 15:16:16 -07:00
c7a49b3643 Fix crash with series and issue window if the year is None. Closes #523 2023-09-10 13:42:17 +01:00
1125788bb7 Update series and issue rows after calling for more information. Closes #512 2023-09-10 13:31:20 +01:00
034a25a813 Fix auto-identify crash 2023-09-07 14:44:30 +01:00
f72c0c8224 Fix call to check_api 2023-09-06 04:56:30 -04:00
f6be7919d7 Implement support for protofolius's permission scheme 2023-09-06 04:50:05 -04:00
0a2340b6dc Remove the --script commandline option 2023-09-06 03:00:27 -04:00
bf2b4ab268 Rename check_api_key to check_status
Parameter is changed to a settings dict so that a Talker can retrieve any info it needs
Change issue_id type annotation to str
2023-09-06 02:59:59 -04:00
40bd3d5bb8 Fix generation and saving of talker settings fixes #515 #514 2023-09-05 14:43:17 -04:00
61d2a8b833 Fix issue padding validation fixes #513 2023-09-05 14:42:03 -04:00
b04dad8015 Stop deleting self.progialog in the series selection window 2023-09-05 14:41:07 -04:00
3ade47a7e0 Convert bytes to str when printing raw tags. Fixes #510 2023-09-05 04:05:20 -04:00
5bc44650d6 Change --only-set-cv-key to --only-save-config 2023-09-05 03:56:56 -04:00
8b1bcd93e6 Add a combobox to select a metadata source in the main window Fixes #508 2023-09-05 03:55:18 -04:00
d70a98ed29 Fix --darkmode 2023-09-05 03:55:18 -04:00
05e6eaf88e Update setting group names
Make group names presentable to users and add builtin plugins during namespace generation.
Revamp talkeruigenerator.py to use generated group and setting names and remove as many hard-coded strings as possible
Add a --list-plugins commandline option
2023-09-05 03:55:12 -04:00
90eb1c3980 Fix date display in the issue selection window 2023-09-05 03:14:55 -04:00
7a63474769 Fix cbr tests and update pre-commit 2023-09-04 19:56:18 -05:00
0f07fc3153 Use a dictionary instead of a list in the issue/series selection windows
List lookups were done by row number which became inaccurate if any sorting was done

Fixes #507
2023-09-03 15:18:56 -07:00
e832b19f2f Fix attribute names 2023-09-03 15:12:06 -07:00
9499aeae10 PyrateLimiter version 2 only for now. 2023-08-30 23:23:19 +01:00
f72ebdb149 Simplify ComicCacher to store a single binary data field and ID(s)
If the ComicCacher is to be a generic cache for talkers it must assume
 very little. Current assumptions:
 - There are issues that can be queried individually by an "Issue ID" and they have a relation to a single series
 - There are series that can be queried individually by a "Series ID" and they have a relation to zero or more issues
 - There are Searches that can be queried by the search term and they have a relation to zero or more series

Each series and issue have a boolean `complete` attribute which is up to the talker to decide what it means.
Data is returned as a tuple ([series, complete] or [issue, complete]) or a list of tuples
An issue consists of an ID, a series ID and a binary data attribute; it is up to the talker to determine what it means.
A series consists of an ID and a binary data attribute; it is up to the talker to determine what it means.

The data attribute is binary to allow for compression and efficient storage of binary data (e.g. pickle); it is suggested to store it as JSON or a similar text format encoded with UTF-8. If the talker is using a website API it is suggested to store the raw response from the server.

All caches automatically expire 7 days after insertion.
2023-08-05 03:02:12 -07:00
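To illustrate the cache contract sketched in the commit message above, here is a minimal sketch; the class and field names below are assumptions for illustration only, not the actual ComicCacher API.

```python
# Illustrative sketch of the cache assumptions above; names and shapes are
# invented for this example, not the real ComicCacher interface.
from __future__ import annotations

import json
from dataclasses import dataclass


@dataclass
class CachedSeries:
    id: str          # a "Series ID" the talker can query individually
    data: bytes      # opaque blob; utf-8 encoded JSON (or the raw API response) is suggested
    complete: bool   # meaning is left entirely to the talker


@dataclass
class CachedIssue:
    id: str          # an "Issue ID"
    series_id: str   # every issue relates to exactly one series
    data: bytes
    complete: bool


# A talker backed by a website API might cache the raw response body:
raw_response = {"name": "Example Series", "count_of_issues": 12}
entry = CachedSeries(
    id="series-1",
    data=json.dumps(raw_response).encode("utf-8"),
    complete=False,
)

# Retrieval hands back (series, complete)-style tuples; the talker decodes the blob:
print(json.loads(entry.data.decode("utf-8"))["name"], entry.complete)
```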
ea84031b87 Add more 4-digit issue number tests 2023-08-04 21:04:21 -07:00
611c40fe0b Add test for split 2023-08-03 01:06:10 -07:00
2c3a2566cc Convert ComicIssue into GenericMetadata
I could not find a good reason for ComicIssue to exist other than that
 it had more attributes than GenericMetadata, so it has been replaced.
New attributes for GenericMetadata:
  series_id:        a string uniquely identifying the series to tag_origin
  series_aliases:   alternate series names that are not the canonical name
  title_aliases:    alternate issue titles that are not the canonical name
  alternate_images: a list of urls to alternate cover images

Updated attributes for GenericMetadata:
  genre        -> genres:        str -> list[str]
  comments     -> description:   str -> str
  story_arc    -> story_arcs:    str -> list[str]
  series_group -> series_groups: str -> list[str]
  character    -> characters:    str -> list[str]
  team         -> teams:         str -> list[str]
  location     -> locations:     str -> list[str]
  tag_origin   -> tag_origin:    str -> TagOrigin (tuple[str, str])

ComicSeries has been relocated to the ComicAPI package; it currently has no
 usage within ComicAPI.
CreditMetadata has been renamed to Credit and has replaced Credit from
 ComicTalker.
fetch_series has been added to ComicTalker; this is currently only used
 in the GUI when a series is selected and does not already contain the
 needed fields. This function should always be cached.

A new split function has been added to ComicAPI; all uses of split on
 single characters have been updated to use it

cleanup_html and the corresponding setting are now only used in
 ComicTagger proper; for display we want any html directly from the
 upstream. When applying the metadata we then strip the description of
 any html.

A new conversion has been added to the MetadataFormatter:
  j: joins any lists into a string with ', '. Note this is a valid
     operation on strings as well; it will add ', ' between every
     character.

parse_settings now assigns the given ComicTaggerPaths object to the
 result ensuring that the correct path is always used.
2023-08-02 09:00:04 -07:00
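As a sketch of how a 'j' conversion like the one described above can be hooked into Python's string.Formatter (illustrative only, not the actual MetadataFormatter code); it also shows why applying it to a plain string inserts ', ' between every character.

```python
# Hedged sketch: a custom '!j' conversion via string.Formatter.convert_field.
import string


class JoinFormatter(string.Formatter):
    def convert_field(self, value, conversion):
        if conversion == "j":
            # ', '.join accepts any iterable of strings, including a str,
            # which is why a plain string gets ', ' between every character.
            return ", ".join(value)
        return super().convert_field(value, conversion)


fmt = JoinFormatter()
print(fmt.format("{genres!j}", genres=["action", "sci-fi"]))  # action, sci-fi
print(fmt.format("{title!j}", title="abc"))                   # a, b, c
```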
1b6307f9c2 Merge branch 'mizaki-tidy_ii' into develop 2023-07-30 16:24:13 -07:00
548ad4a816 Fix folder archiver
Implement supports_comment and is_writable
Fix function call in ComicArchive for supports_comment
Add a menu option to open a folder as an archive
2023-07-29 00:07:25 -07:00
27f71833b3 Generate settngs namespace before formatting 2023-07-28 23:29:39 -07:00
6c07fab985 Fix tests taking forever caused by f90f373d20 2023-07-28 23:25:12 -07:00
4151c0e113 Cleanup sqlite
Remove the import rename
Use sqlite3.Row, which allows retrieving values by name
2023-07-28 23:22:35 -07:00
3119d68ea2 Remove used issue id from get_issue_cover_match_score and fix test 2023-07-18 01:14:32 +01:00
f43f51aa2f Fix #396
Use a QWebEngineView if QtWebEngine is available.
If QtWebEngine is not available, replace figure tags with divs to allow
 the QTextEdit to render the rest of the html properly
2023-07-01 23:29:38 -07:00
19986b64d0 Upgrade pre-commit hooks 2023-07-01 23:12:41 -07:00
00200334fb Add filter to SeriesSelectionWindow and IssueSelectionWindow fixes #476 2023-07-01 18:57:33 -07:00
cde980b470 Add LICENSE file 2023-07-01 18:13:38 -07:00
f90f373d20 Merge branch 'mizaki-rate_limit_cv' into develop 2023-07-01 18:04:24 -07:00
c246b96845 Merge branch 'mizaki-vol_to_issue' into develop 2023-07-01 18:02:57 -07:00
053afaa75e Merge branch 'mizaki-phash' into develop 2023-07-01 18:01:26 -07:00
3848aaeda3 Merge branch 'mizaki-issue_count_sort' into develop 2023-07-01 17:56:55 -07:00
16b13a6fe0 Format year and count of issues to 4 digits and do a None check 2023-06-28 01:08:04 +01:00
3f180612d3 Return int instead of hex and revert hamming_distance etc. 2023-06-27 22:44:08 +01:00
37cc66cbae Use requests.status_codes.codes.TOO_MANY_REQUESTS 2023-06-27 17:48:38 +01:00
81b15a5877 Fixes sorting by year and issue count. Removed superfluous if for publisher. Fixes #475 2023-06-27 00:21:28 +01:00
14a4055040 Add Perceptual Hash computation to imagehasher mirroring https://github.com/JohannesBuchner/imagehash but in pure python 2023-06-26 01:54:26 +01:00
2e01672e68 Fix #485
As mentioned in the comment in comictaggerlib/main.py:186
The default value should be None, not the empty string.
We also check if the given value is the default or the empty string and
 the setting is unset so the default value is not saved in the settings
 file.
The default_api_url is shown in the GUI Settings Window; it is not
 currently shown in the cli help.
2023-06-23 17:48:18 -07:00
4a7aae4045 Add tests for fix_url 2023-06-23 17:10:40 -07:00
2187ddece8 Move volume from ComicSeries to ComicIssue 2023-06-23 22:38:15 +01:00
fba5518d06 Create two module limiters and assign the class limiter var accordingly. Add the limits of the default CV API key to the welcome message. 2023-06-23 21:25:02 +01:00
31cf687e2f Reduce startup time 2023-06-22 20:11:40 -07:00
526069dabf Use _guess_type from settngs for more robust type checking 2023-06-22 18:28:43 -07:00
635cb037f1 Merge branch 'mizaki-fix_add_fields' into develop 2023-06-22 17:51:26 -07:00
861584df3a Move rate limit check from defunct API status code 107 to HTTP code 429. Set a limit of 10 requests every 10 seconds except for the default API key, which is 1,2 (to be finalised). Remove wait on rate limit option. 2023-06-22 23:50:32 +01:00
a53fda9fec Update linux packages in GitHub Actions 2023-06-21 19:47:41 -07:00
af5a0e50e0 Remove wait on CV rate limit in autotag 2023-06-21 22:32:06 +01:00
7a91acb60c Add pyrate-limiter and apply CV suggested rate limit 2023-06-20 22:28:29 +01:00
3a287504ae Fix setting issue and alternate_number on GenericMetadata
IssueString.as_string always returns a string; this is a problem for
  GenericMetadata. When the overlay function is used it checks
  specifically for the value None, which allows the -m option to unset
  attributes. However, the issue attribute would get set to the empty
  string when loading ComicRack tags regardless of whether there was a value
  stored in the file. Fixes #465 and #480
2023-06-15 20:26:38 -07:00
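A simplified sketch of the None-vs-empty-string distinction described above; the overlay function here is a stand-in for illustration, not the GenericMetadata implementation.

```python
# Only None means "leave the existing value alone"; an empty string is an
# explicit value and overwrites. Stand-in overlay for illustration only.
def overlay(existing: dict, new: dict) -> dict:
    return {k: existing.get(k) if v is None else v for k, v in new.items()}


print(overlay({"issue": "5"}, {"issue": None}))  # {'issue': '5'}  -> preserved
print(overlay({"issue": "5"}, {"issue": ""}))    # {'issue': ''}   -> clobbered
```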
82a22d25ea Merge branch 'mizaki-auto_ident_message' into develop 2023-06-11 21:44:05 -07:00
783e10a9a1 Generate a namespace object for typing settngs 2023-06-09 16:20:00 -07:00
e8f13b1f9e fix quoting 2023-06-09 02:08:38 +01:00
4b415b376f Fix tests 2023-06-08 01:26:03 +01:00
122bdf7eb1 Change auto-identify message to point users to the auto-tag assume 1 option 2023-06-08 01:18:46 +01:00
2afb604ab3 Fix issue_count and add maturity rating 2023-06-08 00:52:24 +01:00
a912c7392b Merge branch 'mizaki-additional_comic_fields' into develop 2023-06-03 10:37:44 -07:00
3b92993ef6 Remove country name code 2023-06-03 00:11:40 +01:00
c3892082f5 Change data to metadata 2023-06-02 00:37:58 +01:00
92e2cb42e8 Replace instances of Comic Vine to use the talker's name 2023-06-01 22:05:14 +01:00
b8065e0f10 Fix #470 re-add notes when using --clear-metadata 2023-05-30 21:36:33 -07:00
a395e5541f Remove invalid comments 2023-05-25 15:00:53 +01:00
d191750231 Remove attempted validation of language and country plus minor changes 2023-05-25 01:32:52 +01:00
e72347656b Add format (1-shot, limited series, etc.) 2023-05-23 00:27:58 +01:00
8e2411a086 Add country functions to utils and try to convert a country name to ISO country name 2023-05-23 00:02:56 +01:00
97e64fa918 Add maturity_rating, language and country to ComicIssue and pass to metadata. 2023-05-18 02:02:21 +01:00
661d758315 Merge branch 'mizaki-talker_parse_key' into develop 2023-05-16 17:33:24 -07:00
364d870fe0 Merge branch 'mizaki-hide_api_token' into develop 2023-05-16 17:30:46 -07:00
2da64fd52d Remove password class from function 2023-05-16 15:20:45 +01:00
057725c5da Create generate_password_textbox 2023-05-16 00:25:12 +01:00
5996bd3588 Add show/hide icon to key field 2023-05-15 23:46:16 +01:00
fdf407898e Bump MacOS version for GitHub Actions 2023-05-15 10:59:23 -06:00
70d544b7bd Add attrib at the end of the CLI file run 2023-05-15 16:46:31 +01:00
c583f63c8c Attribution for metadata provider on command line 2023-05-14 23:39:23 +01:00
d65a120eb5 Add issue_count 2023-05-14 00:50:37 +01:00
60f47546c2 Hide the API key field as a password and add a show/show button 2023-05-13 23:12:29 +01:00
0b77078a93 Retrieve all fields instead of by (many) names 2023-05-12 23:46:34 +01:00
2598fc546a Use new xlate_int and xlate_float 2023-05-12 22:47:36 +01:00
ddf4407b77 Merge branch 'develop' into additional_comic_fields 2023-05-12 22:41:38 +01:00
6cf259191e Add volume and count_of_volumes to ComicSeries 2023-05-12 21:48:45 +01:00
30f1db1c73 Update requirements and Linux build dependencies 2023-04-26 14:46:18 -07:00
ff15bff94c Fix pypi upload 2023-04-25 16:26:05 -07:00
83aabfd9c3 Upgrade pre-commit 2023-04-25 16:11:19 -07:00
d3ff40c249 Only update the image in CoverImageWidget if the url matches the current url
This fixes an issue causing the first issue cover to show when using the auto-identify feature
Fixes #455
2023-04-25 16:00:08 -07:00
c07e1c4168 Add additional typing 2023-04-25 16:00:06 -07:00
1dc93c351d Update settngs to typed version fixes #453 2023-04-25 16:00:04 -07:00
f94c9ef857 Update appimage step
Fix platform case
Remove icu check from appimage step as ComicTagger is not installed
Add appimagetool to allowed commands
Fix appimage paths
2023-04-25 16:00:02 -07:00
14fa70e608 Separate xlate into separate functions based on return type fixes #454 2023-04-25 15:55:27 -07:00
ec65132cf2 Mark mypy as optional 2023-04-23 02:01:41 -07:00
941bbf545f Remove extraneous if 2023-04-23 01:52:56 -07:00
afdb08fa15 Fix package.yaml 2023-04-23 01:49:42 -07:00
c4b7411261 Use tox for building 2023-04-23 01:31:44 -07:00
5b3e9c9026 Switch to rarfile for rar/cbr support 2023-04-23 00:48:13 -07:00
e70c47d12a Make PyICU optional
Update README.md
2023-04-23 00:48:11 -07:00
c1aba269a9 Revert "Make PyICU optional"
This reverts commit bf55037690.
2023-04-22 21:28:14 -07:00
bf55037690 Make PyICU optional
Fix more locale issues
Update README.md
2023-04-18 21:03:50 -07:00
e2dfcc91ce Revert get_recursive_filelist Fixes #449 2023-04-13 20:58:30 -06:00
33796aa475 Fix #447 2023-04-06 10:48:40 -07:00
4218e3558b Add url 2023-03-05 18:58:06 +00:00
271bfac834 Do not fail when talker key is missing 2023-03-03 00:07:49 +00:00
9e86b5e331 Fix tests 2023-03-02 00:23:56 +00:00
c9638ba0d9 Format manga and rating 2023-03-02 00:10:52 +00:00
428879120a Merge branch 'mizaki-talkeruigen_fix' into develop 2023-02-28 11:49:27 -08:00
f0b9bc6c77 Missed name changes from options move 2023-02-28 15:37:52 +00:00
6133b886fb String widget fix-fix 2023-02-28 15:06:59 +00:00
dacd767162 String widget fix 2023-02-28 14:59:58 +00:00
4d90417ecf Update AUTHORS 2023-02-28 06:31:07 +00:00
c3e889279b Fix EOF 2023-02-27 22:30:31 -08:00
9bf998ca9e Remove check_api_url and fix docstrings 2023-02-27 22:29:23 -08:00
5b2a06870a Fix talker settings validation 2023-02-27 22:21:56 -08:00
fca5818874 Merge branch 'mizaki-talker_settings_generator' into develop 2023-02-27 22:20:53 -08:00
eaf0ef2f1b Fix Makefile dependencies
Remove dist/appimage before copy to prevent issues with 2nd run
Add dist/appimagetool target so that the appimage tool is downloaded once
2023-02-27 22:12:12 -08:00
09fb34c5ff Merge branch 'bmfrosty-feature/add-appimage-support' into develop 2023-02-27 22:01:13 -08:00
924467cc57 Add AppImage Support 2023-02-26 22:12:50 -08:00
2611c284b8 Revert "docs(contributor): contrib-readme-action has updated readme"
This reverts commit aba59bdbfe.
2023-02-24 13:23:29 +00:00
b4a3e8c2ee Add missing tool tips to labels
Change metadata select label
Use named tuple for talker tabs
Return a string and bool for api check
2023-02-24 00:06:48 +00:00
118429f84c Change source term to metadata
Generate API text field in their own function
API tests return string message of result
Add help to text field labels
2023-02-23 00:42:48 +00:00
8b9332e150 Fix linux build 2023-02-21 20:00:47 -08:00
5b5a483e25 Fix api key test button generation 2023-02-21 00:58:13 +00:00
33ea8da5bc Merge branch 'develop' into talker_settings_generator
# Conflicts:
#	comictaggerlib/settingswindow.py
#	comictalker/talkers/comicvine.py
2023-02-21 00:50:06 +00:00
aba59bdbfe docs(contributor): contrib-readme-action has updated readme 2023-02-21 00:43:46 +00:00
316bd52f21 Use currentData for combo box 2023-02-21 00:42:11 +00:00
59893b1d1c Fix option.type ifs 2023-02-21 00:38:13 +00:00
fb83863654 Update plugin settings
Make "runtime" a persistent group, allows normalizing without losing validation
Simplify archiver setting generation
Generate options for setting a url and key for all talkers
Return validated talker settings
Require that the talker id must match the entry point name
Add api_url and api_key as default attributes on talkers
Add default handling of api_url and api_key to register_settings
Update settngs to 0.6.2 to be able to add settings to a group and
  use the display_name attribute
Error if no talkers are loaded
Update talker entry point to comictagger.talker
2023-02-20 16:02:15 -08:00
f131c650fb Merge branch 'mizaki-talker_entry_points' into develop 2023-02-20 14:27:09 -08:00
f439797b03 Use new display_name from settngs. Add source combobox getting and setting and add to sources dict of widgets. 2023-02-20 18:45:39 +00:00
bd5e23f93f Add another test case for format_internal_name 2023-02-20 00:44:51 +00:00
fefb3ce6cd Remove general tab from talker tab and use base tab from settings window. Additional clean up. 2023-02-19 23:33:22 +00:00
a24bd1c719 Generate talker general tab programmatically. Move search options to search tab. 2023-02-18 17:16:56 +00:00
02fd8beda8 Use None as parent for api and url message boxes
Rename test_api_key and test_api_url to api_key_btn_connect and api_url_btn_connect
Make separate function to set form values, called in settings_to_form
Change isinstance to is
Call findChildren only once
2023-02-18 01:15:46 +00:00
628dd5e456 Fix actions failure when there are no new contributors 2023-02-17 13:43:41 -08:00
c437532622 Merge branch 'mizaki-cache_role_fix' into develop 2023-02-17 10:21:54 -08:00
0714b94ca1 Restrict contributions updates to only run on pushes to develop 2023-02-17 10:16:21 -08:00
5ecaf89d15 Update AUTHORS 2023-02-17 01:23:54 +00:00
2491999a33 Update copyright statements to ComicTagger Authors 2023-02-16 17:23:13 -08:00
9c7bf2e235 Update AUTHORS 2023-02-17 01:14:29 +00:00
0c1093d58e docs(contributor): contrib-readme-action has updated readme 2023-02-17 01:14:27 +00:00
a41c5a8af5 Automate contributions 2023-02-16 17:13:26 -08:00
b727b1288d Apply credit datatype to person data from cache 2023-02-15 17:05:14 +00:00
73738010b8 Add additional fields to ComicIssue and add a genre field to ComicSeries to allow for filtering of search results from the cache. 2023-02-15 16:48:07 +00:00
2fde11a704 Test for menu generator format_internal_name 2023-02-14 01:47:32 +00:00
6a6a3320cb Move talker settings menu generator to a separate file 2023-02-14 01:32:56 +00:00
83a8d5d5e1 Generate settings tabs for each talker 2023-02-11 01:18:56 +00:00
4b3b9d8691 Entry points for talkers 2023-02-10 21:16:35 +00:00
3422a1093d Merge branch 'mizaki-showcontrols' into develop 2023-02-10 00:31:24 -08:00
4eb9e008ce Update pre-commit 2023-02-10 00:25:20 -08:00
5e86605a46 Fix docstring typos 2023-02-10 00:25:18 -08:00
8146b0c90e Merge branch 'talker-cleanup' into develop 2023-02-10 00:24:48 -08:00
983937cdea Mark internal functions in ComicVineTalker 2023-02-10 00:23:02 -08:00
e5b15abf91 clean up talker 2023-02-10 00:23:00 -08:00
4a5d02119e Merge branch 'settings-consistency' into develop 2023-02-10 00:22:44 -08:00
4b6c9fd066 Fix comicarchive_test.py 2023-02-10 00:14:58 -08:00
79a6cef794 Hide invisible controls to prevent bottom margin on source logos. 2023-02-10 00:43:05 +00:00
43cb68b38b Fix 'Default Preferences' button in the settings window 2023-02-04 11:34:49 -08:00
ad68726e1d Use consistent naming for settings
config: always values
setting: always the definition/description not the value
2023-02-04 11:33:21 -08:00
ba4b779145 Remove legacy settings 2023-02-03 20:14:31 -08:00
d987a811e3 Consolidate plugin code 2023-02-03 20:13:58 -08:00
ee426e6473 Merge branch 'mizaki-talker_settings' into develop 2023-02-03 18:14:26 -08:00
9aa42c1ca7 Add series match threshold back into search_for_series as it is no longer available via the talker's own settings. 2023-02-03 21:38:17 +00:00
d12325b7f8 Simplify parse_settings. Prefix talker_ to group name. Add back setting CV key via commandline. Other small changes as requested. 2023-02-02 00:53:13 +00:00
ce5205902a After merge isort 2023-02-01 23:53:02 +00:00
94aabcdd40 Merge branch 'develop' into talker_settings
# Conflicts:
#	comictaggerlib/ctoptions/__init__.py
#	comictaggerlib/main.py
#	comictalker/talkers/comicvine.py
2023-02-01 23:38:13 +00:00
839a918330 typed talkers var 2023-02-01 23:22:04 +00:00
053295e028 Merge branch 'mizaki-source_logo_url' into develop 2023-02-01 08:03:16 -08:00
c6e3266f60 More verbose attrib string 2023-02-01 15:39:24 +00:00
7c4e5b775b Merge branch 'plugableArchivers' into develop 2023-01-31 19:44:07 -08:00
bc02a9a2a2 Use a persistent setting group for archiver settings 2023-01-31 19:41:19 -08:00
2c5d419ee9 Remove legacy rar settings 2023-01-31 00:32:19 -08:00
46899255c8 Generate settings for an archivers executable 2023-01-30 21:36:47 -08:00
6a650514fa Rename new settings talker methods. Move parse_settings for talkers to earlier and only pass talkers own settings. 2023-01-30 01:59:23 +00:00
0f10e6e848 Create simple dict of talkers with objects. Moved thresh setting back to talkers (general) as it is called outside of talker. 2023-01-26 00:52:02 +00:00
0d69ba3c49 Rename talkers_general to talkers. Move plugin option registration to its own file. Due to chicken and egg, first get talker classes, then create objects. 2023-01-25 19:10:58 +00:00
d0e3b487eb Mark label for external links. attrib str to be complete. 2023-01-22 17:16:33 +00:00
c80627575a Add docstrings to Archiver 2023-01-21 15:24:27 -08:00
92eb79df71 Fix console_scripts entry point 2023-01-21 00:27:39 -08:00
ad48ad757c Fix plugin order 2023-01-20 19:32:32 -08:00
2de241cdd5 Fix typing 2023-01-20 19:32:06 -08:00
5d66815765 Add attrib string for source. Add logo and URL to issues window. 2023-01-20 00:29:02 +00:00
100e0f2101 Load plugins in init. 2023-01-15 17:38:50 +00:00
55e3b7c7e0 Use name for URL display. Window sizes. 2023-01-13 21:27:40 +00:00
f6698f7f0a Call load_archive_plugins in ComicArchive __init__ 2023-01-12 17:00:11 -08:00
50614d52fc Update PyInstaller hook 2023-01-12 15:47:34 -08:00
712986ee69 Turn comicapi.archivers.* into plugins 2023-01-12 14:45:49 -08:00
2f7e3921ef Separate archivers into their own packages 2023-01-12 14:45:17 -08:00
80f42fdc3f Move log header to execute immediately after the log is configured 2023-01-12 14:43:12 -08:00
725b2c66d3 Use imageWidget for source logo and URL. 2023-01-12 16:58:50 +00:00
5394b9f667 Fix tests. Probably not the correct way to do this? 2023-01-12 15:10:39 +00:00
fad103a7ad Use setting option for talker selection 2023-01-07 00:29:12 +00:00
87cd106b28 Add source logo and URL to series window 2023-01-04 23:51:39 +00:00
2d8c47edca Use new settings system for plugin 2023-01-02 01:04:15 +00:00
0ac5b59a1e Merge branch 'mizaki-rename_namespace_fix' into develop 2022-12-31 20:49:45 -08:00
7c735b3555 Fix rename namespace 2023-01-01 02:07:42 +00:00
9d8cf41cd3 Fix try block parsing credits in ComicCacher 2022-12-31 12:36:32 -08:00
ee3a06db46 Merge branch 'crop-border' into develop 2022-12-31 12:35:29 -08:00
7df2e3fdc0 Automatically crop black borders from covers 2022-12-31 11:52:23 -08:00
20e7de5b5f Fix reference to the user cache directory 2022-12-31 02:26:44 -08:00
f83f72fa12 Improve issue number handling regarding the '#' 2022-12-31 02:15:17 -08:00
fb4786159d Handle issue numbers with more than 3 digits 2022-12-30 21:50:10 -08:00
734b83cade Switch comictalker TypedDicts to dataclasses 2022-12-23 01:58:10 -08:00
746c98ad1c Add temp to .gitignore 2022-12-23 00:09:46 -08:00
9f00af4bba Change issue id and series id to strings 2022-12-23 00:09:19 -08:00
92fa4a874b Improve typing in ComicVineTalker 2022-12-22 10:47:37 -08:00
a33b00d77e Update ComicTalker documentation 2022-12-22 10:47:35 -08:00
a7f6349aa4 Merge branch 'volume-to-series' into develop 2022-12-22 10:45:58 -08:00
d4b4544b2f Replace most instances of volume in ComicVineTalker with series
All remaining uses of the word volume are used directly by the api and
are documented that it refers to the series
2022-12-22 10:30:48 -08:00
521d5634f3 Fix tests 2022-12-22 10:16:32 -08:00
1d9840913a Change all references of volume to series 2022-12-22 10:16:05 -08:00
53a0b23230 Collapse formatting 2022-12-15 20:21:53 -08:00
9004ee1a6b Merge branch 'settings' into develop 2022-12-15 20:17:50 -08:00
440479da8c Update to settngs 0.3.0
Use the namespace instead of a dictionary
Cleanup setting names
2022-12-15 20:10:35 -08:00
e5c3692bb9 Fail if an error occurs when loading settings 2022-12-15 18:58:53 -08:00
103379e548 Split settings out into a separate package 2022-12-14 23:16:54 -08:00
eca421e0f2 Split out settings functions 2022-12-13 08:50:38 -08:00
18566a0592 Fix setting cmdline arguments 2022-12-13 08:50:08 -08:00
48c6372cf4 Fix --no-overwrite 2022-12-10 18:35:41 -08:00
f3917c6e4d Add comments to tests 2022-12-10 18:05:27 -08:00
9bb5225301 Restrict pillow version to <10 until PyQt6 is supported 2022-12-06 17:06:13 -08:00
e9cef87154 Move test cases to the testing package
Add comments to tests
2022-12-06 17:00:21 -08:00
da01dde2b9 Fix color space on CMYK images 2022-12-06 08:38:24 -08:00
53445759f7 Add tests 2022-12-06 00:22:51 -08:00
9aff3ae38e Generalize settings
Add comments and docstrings
Create parent directories when saving
Add merging to normalize_options
Change get_option to return if the value is the default value
2022-12-06 00:22:49 -08:00
0302511f5f Settings tests 2022-12-06 00:22:48 -08:00
028949f216 Make logs use the .log extension 2022-12-06 00:22:46 -08:00
af0d7b878b Set logging level on comictalker 2022-12-06 00:22:44 -08:00
460a5bc4f4 Cleanup 2022-12-06 00:20:29 -08:00
3f6f8540c4 Fix wait_and_retry_on_rate_limit 2022-12-06 00:20:27 -08:00
17d865b72f Refactor cli.py into a class 2022-12-06 00:20:26 -08:00
da21dc110d Update help 2022-12-06 00:20:24 -08:00
3870cd0f53 Update help for --config 2022-12-06 00:20:23 -08:00
ed1df400d8 Add replacement settings 2022-12-06 00:20:21 -08:00
82d737407f Simplify --only-set-cv-key 2022-12-06 00:20:20 -08:00
d0719e7201 Fix log dir 2022-12-06 00:20:18 -08:00
19112ac79b Update Settings 2022-12-06 00:20:01 -08:00
a64d753d77 Fix package selection 2022-12-01 19:54:55 -08:00
970752435c Merge branch 'mizaki-fixii_keys' into develop 2022-11-29 15:15:42 -08:00
b1436ee76e Merge branch 'resize-volume-columns' into develop 2022-11-29 14:28:32 -08:00
8eba44cce4 Increase default size of VolumeSelectionWindow 2022-11-29 14:28:08 -08:00
5fc5a14bd9 Wider catch of series and issue_number being empty 2022-11-29 16:59:05 +00:00
10f36e9868 Allow searching without a comic archive selected 2022-11-28 21:44:01 -08:00
aab7e37bb2 Use contentsRect().width() instead of width 2022-11-28 20:55:50 -08:00
2860093b6f Set the minimum row height to the default on VolumeSelectionWindow 2022-11-28 20:54:24 -08:00
ad7b270650 Automatically resize the row height on the VolumeSelectionWindow 2022-11-28 15:34:15 -08:00
70dcb9768a Better resize columns in the VolumeSelectionWindow 2022-11-28 15:28:47 -08:00
873d976662 Keys may be None if there is no comic archive. IssueString.as_string will convert None to an empty string, so use a None comparison before. 2022-11-28 00:56:19 +00:00
fc4eb4f002 Cleanup
Move most options passed in to ComicVineTalker to ComicTalker
Give ComicCacher and ComicTalker a version argument to remove all
  references to comictaggerlib
Update default arguments to reflect what is required to use these classes
2022-11-25 19:22:01 -08:00
129e19ac9d Remove cast from taggerwindow.py 2022-11-25 19:22:00 -08:00
0dede72692 Re-add --only-set-cv-key feature 2022-11-25 19:21:58 -08:00
83ac9f91b5 Make errors loading the ComicVineTalker object explicit 2022-11-25 19:21:57 -08:00
858bc303d8 Stop setting the notes field in map_comic_issue_to_metadata 2022-11-25 19:21:55 -08:00
005d7b72f4 Fix tests 2022-11-25 19:21:54 -08:00
91b863fcb1 Merge branch 'mizaki-infosources' into dev 2022-11-25 19:21:25 -08:00
e5f6a7d1d6 Add warning about settings 2022-11-25 17:09:22 -08:00
e7f937ecd2 Enable version checking 2022-11-25 17:08:26 -08:00
d75f39fe93 Remove logos dir 2022-11-24 23:58:24 +00:00
12d9befc25 Remove unneeded code from fetch_issue_data. 2022-11-24 23:56:12 +00:00
3e8ee864b7 Remove setting options and logo_url. 2022-11-24 23:35:35 +00:00
134c4a60e9 Add some docstrings. 2022-11-24 23:26:48 +00:00
3f9e5457f6 Fix make clean 2022-11-24 09:41:51 -08:00
cc2ef8593c Update pre-commit 2022-11-24 01:25:24 -08:00
c5a5fc8bdb Fix issue with combine_notes 2022-11-24 01:24:15 -08:00
1cbed64299 Fix an issue with normalizing the platform in filerenamer.py 2022-11-23 12:36:19 -08:00
c608ff80a1 Improve typing 2022-11-22 17:11:56 -08:00
52cc692b58 Remove some TODOs. 2022-11-23 00:22:48 +00:00
31894a66ec Remove repair_urls function, taken care of in format results functions. 2022-11-19 21:59:10 +00:00
aa11a47164 HTML table patch 2022-11-18 23:22:39 +00:00
093d20a52b Remove all the cool settings changes. 2022-11-18 23:18:41 +00:00
38c3014222 Use strip().splitlines() in cacher to prevent [''] return. Some clean up. 2022-11-17 15:55:38 +00:00
df87f81698 Remove volume only functions used for testing. 2022-11-13 23:25:08 +00:00
cf12e891b0 Fix CV API test. Fix sending last source details in settings for API test and website link. 2022-11-12 23:13:53 +00:00
76fb565d4e Merge branch 'mizaki-iiemptyurl' into develop 2022-11-11 17:09:45 -08:00
06ffd9f6be Add logo/text button to source tab that links to webpage. 2022-11-12 01:09:17 +00:00
dfef425af3 Better handle missing talkers and default to Comic Vine. 2022-11-10 17:03:39 +00:00
880b1be401 Return zero score if there is no image url. Fixes #392 2022-11-10 16:15:27 +00:00
04ad588a58 Use source name in tag notes. 2022-11-08 16:33:46 +00:00
6b4abcf061 Update current talker object with new settings. 2022-11-08 16:32:37 +00:00
629b28f258 Small fixes after merge. 2022-11-07 02:03:36 +00:00
c34902449f Merge branch 'develop' into infosources
# Conflicts:
#	comictaggerlib/cli.py
#	comictaggerlib/comicvinetalker.py
#	comictaggerlib/taggerwindow.py
#	tests/comicvinetalker_test.py
#	tests/conftest.py
2022-11-07 01:50:47 +00:00
63e6174cf2 Not all fields are required in ComicVolume and ComicIssue, but the cacher would fail if any optional field was missing. 2022-11-07 01:38:19 +00:00
9da14e0f95 Fix source switching. Use start year if cover date is missing. 2022-11-07 01:19:03 +00:00
c469fdb25e Make 7zip support optional 2022-11-06 08:27:45 -08:00
67be086638 Move map comic data to utils along with remove html. Alter tests. 2022-11-05 16:49:59 +00:00
a724fd8430 Compensate for a split empty string returning ['']. I don't see a way around this? 2022-11-05 01:21:51 +00:00
685ce014b6 Fix tests for comicvinetalker 2022-11-04 16:27:30 -07:00
62bf1d3808 Update macOS packaging 2022-11-04 16:16:19 -07:00
d55d75cd79 Append notes instead of overwriting them
Add issue_id to GenericMetadata
2022-11-04 15:39:40 -07:00
19e5f10a7b Revert "Revert passing only issue id to fetch_comic_data. Instead send issue id, volume id and issue number. This is because MU will not have the issue number from the API call. Now, if it has been parsed from the file name it will be available for use by the MU talker."
This reverts commit e5e9617052.
2022-11-04 16:16:07 +00:00
e5e9617052 Revert passing only issue id to fetch_comic_data. Instead send issue id, volume id and issue number. This is because MU will not have the issue number from the API call. Now, if it has been parsed from the file name it will be available for use by the MU talker. 2022-11-04 00:52:22 +00:00
b4f6820f56 remove_fetch_alternate_cover_urls.patch 2022-11-03 23:32:35 +00:00
b07aa03c5f Use xlate for all int conversion in CV talker and compare cache issues to expected number. 2022-11-03 22:35:46 +00:00
2f54b1b36b A few minor logging tweaks. 2022-11-03 15:39:13 +00:00
70293a0819 Require PyInstaller >= 5.6.2 2022-11-01 13:51:10 -07:00
8592fdee74 Revert "Install PyInstaller from git until >5.6.1 is available"
This reverts commit 79137a12f8.
2022-11-01 13:49:52 -07:00
075faaea5a Removed TODOs that were checked and/or fixed. 2022-11-01 16:13:46 +00:00
870dc5e9b6 Move issue_id to the first position of fetch_comic_data as it is the most used. 2022-10-30 17:52:55 +00:00
86402af8b1 Merge branch 'develop' into infosources
# Conflicts:
#	comictaggerlib/comicvinetalker.py
2022-10-30 11:39:01 +00:00
d7976cf9d2 Hack tests. 2022-10-30 11:16:03 +00:00
b67765d9aa Merge to develop. 2022-10-30 11:07:53 +00:00
618e15600f Fix retrieving issues from cache when volume is incomplete 2022-10-29 19:21:11 -07:00
8cac2c255f Merge branch 'develop' into infosources
# Conflicts:
#	comictaggerlib/comicvinetalker.py
#	comictaggerlib/coverimagewidget.py
#	comictaggerlib/main.py
#	comictaggerlib/pagebrowser.py
#	comictaggerlib/pagelisteditor.py
#	comictaggerlib/settings.py
#	comictaggerlib/settingswindow.py
2022-10-30 01:31:58 +01:00
4f42fef4fc Return issue id from series search and use issue id for API. 2022-10-30 00:15:05 +01:00
73dd33dc64 Fix tags in GitHub Actions checkout 2022-10-29 13:09:13 -07:00
3774ab0568 Force install PyInstaller from git until >5.6.1 is available 2022-10-29 11:04:46 -07:00
f8807675d6 Cache issue info 2022-10-29 11:02:21 -07:00
79137a12f8 Install PyInstaller from git until >5.6.1 is available 2022-10-29 10:10:37 -07:00
d33d274725 Fix fetching alternate cover urls (fixes #372) 2022-10-29 10:10:35 -07:00
26851475ea Clean up loading cover images. Probably more to do. 2022-10-29 16:41:34 +01:00
a06d88efc0 Fix up full issue cache types. 2022-10-29 01:33:42 +01:00
dcf853515c Tidy CV logger errors. 2022-10-28 22:32:33 +01:00
bf06b94284 Enable cache for full issue information. 2022-10-28 22:15:14 +01:00
561dc28044 Don't proxy talker (really this time). Remove talker custom logging. Move static_options and settings_options to root of class object. Temp hack to keep talker menu generation working until settings revamp. 2022-10-27 23:36:57 +01:00
43ec4848ef Update pre-commit 2022-10-25 21:49:47 -07:00
aad83c8c03 Update PyInstaller usage
Switch to rapidfuzz from thefuzz
Add associations to macOS app bundle
2022-10-25 21:48:01 -07:00
4514ae80d0 Switch to API data for alt images, remove unneeded functions and remove async as a new approach is needed. See comments about fetch_partial_volume_data 2022-10-26 00:29:30 +01:00
cab69a32be Remove proxying from ComicTalker. Add some checks for talkers. 2022-10-25 00:37:18 +01:00
c5ad75370f Work around having to scrape alt covers from CV. Use cache to get issue page url for scrape. 2022-10-24 16:30:58 +01:00
d23258f359 Change ComicVolume, ComicIssue to image_url and image_thumb_url. Add/change search/volume DB layout to remove duplication of data. Fix up some tests. 2022-10-23 22:40:15 +01:00
c9cd58fecb Remove fetch_issue_cover_urls and async_fetch_issue_cover_urls. Reduce API calls by using data already available with coverimagewidget. 2022-10-22 01:43:56 +01:00
58904a927f Set release name properly 2022-10-19 19:27:30 -07:00
fb1616aaa1 Remove CV parse date. Strings names. 2022-10-20 00:32:40 +01:00
4be12d857d Reuse CV test data in comic_issue_result data. Cover possible empty volume data in get_volume_issues_info. 2022-10-19 23:30:11 +01:00
e1ab72ec2a Rename super_url to image_url in comiccacher. Merge fetch_issue_data_by_issue_id into fetch_comic_data. Fill comic volume info in comiccacher:get_volume_issues_info 2022-10-19 19:33:51 +01:00
8a8dea8aa4 Fix autotagstartwindow.ui missed from merge. 2022-10-15 23:36:52 +01:00
43464724bd Convert all start_year to int. 2022-10-15 23:20:50 +01:00
34163fe9d7 Update the comicvine_api fixture in conftest.py to actually return the comicvinetalker. 2022-10-15 02:02:10 +01:00
9aa29f1445 Merge fetch_issue_data and fetch_volume_data to fetch_comic_data. 2022-10-14 01:10:46 +01:00
3ea44b7ca7 Remove fetch_issue_page_url from comictalker etc. 2022-10-12 23:08:47 +01:00
c1c8f4eb6e black 2022-10-12 00:11:57 +01:00
a14c24a78a Fix for issueidentifier_test 2022-10-11 16:52:41 +01:00
18d861a2be More test fixes that may need to be looked at further. 2022-10-09 23:43:52 +01:00
ac15a4dd72 More test fixes. 2022-10-06 01:14:03 +01:00
6a98afb89c After second merge. 2022-10-06 00:34:32 +01:00
21873d3830 Merge branch 'develop' into infosources
# Conflicts:
#	comictaggerlib/autotagstartwindow.py
#	comictaggerlib/cli.py
#	comictalker/talkers/comicvine.py
2022-10-05 01:58:46 +01:00
2daf9b3ed8 Style and typo fixes 2022-10-04 16:15:55 -07:00
a6d55cd21a Update MetadataFormatter
Several custom conversions (the s in {title!s}) have been created
u - str.upper()
l - str.casefold()
S - str.swapcase()
t - str.title()
c - str.capitalize()

A new syntax has been added '{title+str}' and '{title-str}':
The + indicates an alternate value.
The - indicates a default value.

If the title of a comic is not set then
'{title-str}' will output 'str'
and
'{title+str}' will output ''

If the title of a comic is 'hello' then
'{title+str}' will output 'str'
and
'{title-str}' will output 'hello'
2022-10-04 16:15:20 -07:00
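A minimal sketch of the conversions and the '+'/'-' syntax described above, built on Python's string.Formatter; it illustrates the idea only and is not the project's actual MetadataFormatter (the class name is made up):

    import string

    class SketchFormatter(string.Formatter):
        def convert_field(self, value, conversion):
            # Custom conversions from the commit message: u, l, S, t, c
            if conversion == "u":
                return str(value).upper()
            if conversion == "l":
                return str(value).casefold()
            if conversion == "S":
                return str(value).swapcase()
            if conversion == "t":
                return str(value).title()
            if conversion == "c":
                return str(value).capitalize()
            return super().convert_field(value, conversion)

        def get_field(self, field_name, args, kwargs):
            # '{title+alt}' -> 'alt' when title is set, '' otherwise
            # '{title-default}' -> the title when set, 'default' otherwise
            for sep in ("+", "-"):
                if sep in field_name:
                    name, _, extra = field_name.partition(sep)
                    value, key = super().get_field(name, args, kwargs)
                    if sep == "+":
                        return (extra if value else ""), key
                    return (value if value else extra), key
            return super().get_field(field_name, args, kwargs)

    fmt = SketchFormatter()
    print(fmt.format("{title!u}", title="hello"))        # HELLO
    print(fmt.format("{title-unknown}", title=""))       # unknown
    print(fmt.format("{title+has a title}", title="x"))  # has a title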
d37e4607ee After merge. Testing files still to update. 2022-10-04 23:50:55 +01:00
00e95178cd Initial support for multiple comic information sources 2022-10-04 01:08:14 +01:00
4034123e6d Fix rar tests again 2022-10-02 21:47:07 -07:00
5587bfac31 Fix rar tests 2022-10-02 21:13:26 -07:00
4b6d35fd3a Fix CBL tagging 2022-10-02 19:33:12 -07:00
3cf75cf2ec Update importlib_metadata usage and requirements 2022-09-19 22:54:48 -07:00
30dbe758d4 Fix windows tests 2022-09-19 22:52:45 -07:00
55384790f8 Forcefully raise an OSError on windows 2022-09-17 01:59:15 -07:00
acaf5ed510 Fix issues with renaming
Stop a crash when renaming
Properly handle replacements on linux/macos
2022-09-17 01:28:26 -07:00
d213db3129 Use correct syntax for pips --no-binary flag 2022-09-15 22:09:04 -07:00
6a717377df Automatically set release name from tag message 2022-09-10 22:35:30 -07:00
904561fb8e Merge branch 'pyicu' into develop 2022-09-10 21:48:04 -07:00
be6b71dec7 Put unix specific commands in OS specific blocks 2022-09-10 21:11:48 -07:00
63b654a173 Update ci to install pyicu 2022-09-10 19:51:26 -07:00
bc25acde9f Fix sorting
Switch natsort to use os_sorted
Remove directories when returning a list of files in a comic
Update tests to account for '!cover.jpg'
2022-09-10 19:48:50 -07:00
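For reference, a hedged usage sketch of natsort's os_sorted (available in natsort 7.1+), which this commit switches to; the file list below is invented for illustration:

    from natsort import os_sorted

    entries = ["page10.jpg", "page2.jpg", "!cover.jpg", "art/", "page1.jpg"]

    # Drop directory entries, then sort the way the local OS file browser would.
    pages = os_sorted(e for e in entries if not e.endswith("/"))
    print(pages)  # e.g. ['!cover.jpg', 'page1.jpg', 'page2.jpg', 'page10.jpg']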
03677ce4b8 Fix renaming
Make ComicArchive.path always absolute
Fix unique_file not preserving the extension
Fix incorrect output when renaming in CLI mode
Fix handling of platform when renaming
2022-08-19 20:20:37 -07:00
535afcb4c6 Fix replacements 2022-08-19 19:59:58 -07:00
06255f7848 Perform replacements on literal text and format values 2022-08-18 13:48:23 -07:00
00e649bb4c Move colon handling when renaming to the MetadataFormatter class
Fixes #356
2022-08-17 16:16:38 -07:00
078f569ec6 Fix codeblock in README.md 2022-08-14 10:51:08 -07:00
315cf7d920 Merge pull request #355 from Xav83/patch-1
Adds the Chocolatey package as a way to install ComicTagger
2022-08-14 10:47:24 -07:00
e9cc6a16a8 Note that @Xav83 is the maintainer of the chocolatey package
Co-authored-by: Xavier Jouvenot <x.jouvenot@gmail.com>
2022-08-14 10:45:51 -07:00
26eb6985fe Adds the Chocolatey package as a way to install ComicTagger
Adds the Chocolatey package in the list of possibilities to install ComicTagger
2022-08-13 11:52:09 +02:00
be983c61bc Fix #353
The two primary cases fixed are:
Ms. Marvel
spider-man/deadpool

The first issue removed 'Ms.' which is a problem as many comics have
series that the only difference in the title is the
designation/honorific.

The second issue is that the '/' was removed and not replaced with
anything causing a search for 'mandeadpool' which will not show useful
results.

Consequently all designations/honorifics are now untouched
All punctuation is replaced with a space
2022-08-12 07:10:36 -07:00
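A minimal sketch of the sanitising rule described in this message (punctuation becomes a space instead of being deleted, so honorific words survive); it is not the actual IssueIdentifier code:

    import re

    def sanitize_title(title: str) -> str:
        # Replace punctuation with a space rather than dropping it, so
        # 'Ms.' keeps its word and 'spider-man/deadpool' stays two names.
        cleaned = re.sub(r"[^\w\s]", " ", title)
        return re.sub(r"\s+", " ", cleaned).strip()

    print(sanitize_title("Ms. Marvel"))           # 'Ms Marvel'
    print(sanitize_title("spider-man/deadpool"))  # 'spider man deadpool'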
77a53a6834 Update dependencies
Includes changes from pyupgrade
2022-08-10 20:55:46 -07:00
860a3147d2 Construct URL correctly 2022-08-10 16:33:40 -07:00
8ecb87fa26 Install all optional dependencies in CI 2022-08-08 19:10:57 -07:00
f17f560705 Fix tests on windows
Make the speedup dependency for thefuzz optional; it requires a C compiler
2022-08-08 19:03:25 -07:00
aadeb07c49 Fix issues
Refactor add_to_path with tests
Fix type hints for titles_match
Use casefold in get_language
Fix using the recursive flag in cli mode
Add http status code to ComicVine exceptions
Fix parenthesis getting removed when renaming
Add more tests
2022-08-08 18:05:06 -07:00
e07fe9e8d1 Construct URLs more consistently 2022-07-29 22:05:22 -07:00
f2a68d6c8b Fix rename and add test 2022-07-29 22:05:03 -07:00
94be266e17 Handle the 'primary' key missing in get_primary_credit
Fixes #342
Add better exception handling for the formatter
2022-07-27 23:24:34 -07:00
5a19eaf9a0 Fix serializing of sets 2022-07-25 11:22:44 -07:00
28cbbbece7 Fix #334 2022-07-23 10:05:04 -07:00
40314367c9 Improve formatting and consistency 2022-07-18 12:17:13 -07:00
6e7660c3d9 Tests
Add tests for IssueIdentifier
Change tags to a set from a string
Add copy and replace convenience functions on GenericMetadata
Update deprecated resampling code for Pillow
Change comicvine test data to be the same as the test comic
Cleanup tests
2022-07-18 12:06:49 -07:00
99030fae6b Merge branch 'unicode_search' into develop 2022-07-13 23:16:59 -07:00
947dc81c74 use thefuzz
2022-07-13 23:11:17 -07:00
c0880c9afe Account for aliases field from CV 2022-07-13 23:11:14 -07:00
e6414fba96 Allow non-ascii in ComicVine searches 2022-07-13 22:45:45 -07:00
a00891f622 Add more tests 2022-07-13 22:27:31 -07:00
9ba8b2876c Ensure homebrew is in the path if it exists 2022-07-12 09:28:51 -07:00
46d3e99d48 Fix tests 2022-07-12 07:43:33 -07:00
d206f5f581 Fixing source_name position 2022-07-12 07:31:42 -07:00
ec83667d77 Adding source_name to add_issue_select_details. 2022-07-12 07:31:42 -07:00
0bbf417133 Tests
Add tests for ComicCacher and ComicVineTalker
Move fixtures to conftest.py
Move test data to testing module
2022-07-11 18:40:12 -07:00
a3e1153283 Improve rar executable handling
Show a message when a CBR/RAR archive is added and rar is not available
Ensure that an empty value for the rar executable becomes 'rar'
2022-07-10 15:21:15 -07:00
ccb461ae76 Improve rename
Implement rename on ComicArchive
Simplify unique_file with pathlib
Fix issues during renaming and simplify with pathlib
Allow exporting as zip to export 7-zip archives
2022-07-09 23:13:18 -07:00
d24b51f94e Apply black formatting and fix mypy issues 2022-07-09 22:56:52 -07:00
def2635ac2 Ignore aspect ratio on background image
Fixes #327
2022-07-07 16:10:12 -07:00
b72fcaa9a9 Add source field to cache DB.
Add source to cache db.

Rename comicvinecacher to comiccacher and update refs.

Fix comment spacing.

Move source_name to end to reduce changes.

Move source_name to end to reduce changes. Fixed.

Fix syntax.

Fix various issues with DB changes.

Move new source_name to bottom.

Remove source_name from CV_.

Revert id to volume_id
2022-07-05 11:29:10 -07:00
3ddfacd89e Fix #325
The aspect ratio mode was missed in b9af606
2022-07-04 18:03:18 -07:00
6eb5fa7ac7 Fix #324
Co-authored-by: Mizaki <jinxybob@hotmail.com>
2022-07-04 15:53:44 -07:00
68efcc74fb Updates
Use casefold in place of lower
Make lint job fail if errors are detected
Use join instead of utils.list_to_string
Simplify get_recursive_filelist with the glob library
Fix handling of un-parseable numbers in xlate
2022-07-01 16:22:01 -07:00
3d84af3746 Convert GenericMetadata to a dataclass
dataclasses allow for simple comparison and object creation

Add more tests
2022-07-01 16:15:43 -07:00
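As a small illustration of what the dataclass conversion buys (value equality and copy-with-changes via dataclasses.replace); the field names below are examples, not GenericMetadata's real schema:

    from dataclasses import dataclass, field, replace

    @dataclass
    class SketchMetadata:
        series: str = ""
        issue: str = ""
        tags: set[str] = field(default_factory=set)

    a = SketchMetadata(series="52", issue="1")
    b = replace(a, issue="2")                           # copy with one change
    print(a == SketchMetadata(series="52", issue="1"))  # True: equality for free
    print(b.issue)                                      # '2'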
cb5b321539 Update filerenamer
Remove space separated right partition of previous literal text
2022-06-26 01:53:40 -07:00
20ec8c38c2 Fixes
Add importlib_metadata to requirements.txt
Add comments stating origin of new parser
2022-06-23 22:59:09 -07:00
8bdf91ab96 Merge branch 'rating' into develop 2022-06-23 18:13:34 -07:00
fbbd36ab4d make tests and testing proper modules 2022-06-23 13:27:36 -07:00
95643fdace Fix community rating
The user rating control is replaced with critical rating which is now
represented as a float.
utils.xlate has been updated to have an is_float parameter
Metadata is reloaded on save so that changes can be seen
e.g. for CBL tags the critical rating field only stores integers
2022-06-23 13:18:42 -07:00
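A hypothetical sketch of an xlate-style helper with the is_float switch mentioned above; the real comicapi.utils.xlate signature and behaviour may differ:

    def xlate(value, is_int=False, is_float=False):
        # Hypothetical helper: None/empty stays None, bad numbers become None.
        if value is None or value == "":
            return None
        try:
            if is_float:
                return float(value)
            if is_int:
                # CBL tags only store whole numbers for the critical rating.
                return int(float(value))
        except ValueError:
            return None
        return str(value)

    print(xlate("4.5", is_float=True))  # 4.5
    print(xlate("4.5", is_int=True))    # 4
    print(xlate("n/a", is_float=True))  # None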
6c65c2ad56 Make importlib usage compatible with python 3.9 2022-06-23 13:05:27 -07:00
292a69a204 Allow pushes to run CI again 2022-06-10 16:32:21 -07:00
5c6e7d6f3e Allow multiple types to be specified using -t fixes #24 2022-06-10 16:20:58 -07:00
7e033857ba Replace pkg_resources with importlib.metadata 2022-06-10 16:18:58 -07:00
d9c02b0115 Allow changing the ComicVine URL fixes #104 2022-06-10 15:23:58 -07:00
b9af606f87 Improve filename parsing and cover image scaling
Cover image scaling now uses the smooth transformation option in Qt
Filename parsing now identifies a single number as a filename
e.g. '52.cbz' gets parsed as issue: 52 and series: 52
2022-06-09 12:31:57 -07:00
d3c29ae40a Ignore tags on the CI workflow 2022-06-08 09:06:46 -07:00
ff73cbf2f9 Fix small issues
Fix spelling errors
Remove Redundant exception types
Remove dead code
Change the forum link to point to GitHub discussions
2022-06-07 20:22:33 -07:00
3369a24343 Update GitHub Actions
Separate release/packaging and CI
Add an ignore for flake8 on ctversion.py as it is generated
Cleanup unused portions of the makefile
Use 'build' to generate PyPi distribution
Python venv on windows uses the Scripts directory
2022-06-07 19:39:01 -07:00
ce693b55f1 Fix file write semantics for Windows 2022-06-07 12:53:27 -07:00
db37ec7204 Add a literal search option 2022-06-07 12:16:23 -07:00
470b5c0a17 Fix adding files to GUI via running ComicTagger with more filenames
Add flake8-print to ensure all logging uses the logging package
2022-06-06 20:04:51 -07:00
04409a55c7 Handle more exceptions
Handle exceptions during metadata save fixes #309
Handle exceptions during metadata read fixes #126 and #309
2022-06-06 20:04:51 -07:00
bb7fbb4e38 Add pre-commit.ci config 2022-06-06 20:04:34 -07:00
5bb48cf816 fix rar test 2022-06-06 20:04:34 -07:00
b5e6e41043 Add a log window to see the current log 2022-06-06 20:04:34 -07:00
62d927a104 Fix #308
Add null check when loading community_rating
Use iterators instead of while loops
2022-06-05 15:23:20 -07:00
4c9fa4f716 Update template help and default template 2022-06-02 18:32:41 -07:00
e8fa51ad45 Ensure comicapi is as consistent as possible 2022-06-02 18:32:33 -07:00
fd4c453854 Apply pre-commit configuration 2022-06-02 18:32:16 -07:00
c19ed49e05 Move to argparse for argument parsing 2022-06-02 18:28:54 -07:00
36adf91744 Merge branch 'MichaelFitzurka-feature/301-double-page-modified' into develop 2022-05-24 11:45:08 -07:00
8b73a87360 Merge branch 'cleanup' into develop 2022-05-24 11:44:54 -07:00
8c591a8a3b Remove unused imports 2022-05-24 11:44:26 -07:00
c5772c75e5 Cleanup setCheckState
Fix word splitting when auto-tagging
Remove commented code
2022-05-24 11:38:10 -07:00
ff02d25eea Merge branch 'tests' into develop 2022-05-24 11:30:38 -07:00
98a7ee35ee Add tests 2022-05-24 11:30:25 -07:00
59d48619b1 Merge branch 'volume' into develop 2022-05-24 11:30:15 -07:00
10056c4229 Improve volume handling
Include changes by @gramster from #120
During filename parsing set the issue to the volume if there is no issue
2022-05-24 11:27:24 -07:00
7e772abda7 Toggled to Clicked 2022-05-24 10:25:44 -04:00
09ea531a90 Fixing double page always flagging as modified 2022-05-23 09:46:46 -04:00
710d9bf6a5 Fix packaging issues
Add wordninja datafile to pyinstaller
Add publishers.json to the correct package
2022-05-20 00:19:33 -07:00
bb81f921ff Fix Qt typing references to strings 2022-05-19 22:29:46 -07:00
1468b1932f Fix crash on startup
Add publishers.json to pip package
Add exception handling to prevent crash
2022-05-19 20:13:59 -07:00
74d95b6a50 Add typing_extensions 2022-05-19 18:17:22 -07:00
d33fb6ef31 Fix build errors
Add wordninja to requirements.txt
Fix typing to allow unrar-cffi to be optional
2022-05-19 18:08:05 -07:00
4201558483 Merge branch 'wordSplit' into develop 2022-05-19 17:58:45 -07:00
983b3d08f6 Merge branch 'clearMetadata' into develop 2022-05-19 13:39:41 -07:00
eec715551a Allow overwriting existing metadata 2022-05-19 13:28:36 -07:00
d3f552173e Merge branch 'AutoImprint' into develop 2022-05-19 13:28:18 -07:00
3e3dcb03f9 Typed 2022-05-19 13:19:19 -07:00
44b0e70399 Merge branch 'fixComicremoval' into develop 2022-05-16 15:23:15 -07:00
38aedac101 Ensure that comics are properly removed when using remove_archive_list 2022-05-16 15:21:59 -07:00
9a9d97f3bb Fix #291
ComicTagger now accounts for any single unicode numeric value
2022-05-14 01:59:44 -07:00
a4cb8b51a6 Restore test cbz
Add test to ensure that metadata is read correctly
Add tests for IssueString
2022-05-14 01:59:39 -07:00
1bbdebff42 Merge branch 'filenameParser' into develop 2022-05-06 00:33:36 -07:00
783c4e1c5b Merge branch 'uiCleanup' into develop 2022-05-06 00:33:30 -07:00
eb5360a38b Merge branch 'renameFix' into develop 2022-05-06 00:33:24 -07:00
205d337751 Add new filename parser
I created a new, mostly overcomplicated, filename parser
The new parser works well in many cases and will collect more data than
the original parser, but will sometimes give odd results because of how
complicated it has been made, e.g.
'100 page giant' will cause issues, however '100-page giant' will not

Remove the parse scan info setting as it was not respected in many cases
2022-05-06 00:30:33 -07:00
d469ee82d8 Cleanup ui files
Qt Designer has new defaults since these were originally generated
2022-05-04 00:06:32 -07:00
c464283962 Merge branch 'removeIndent' into develop 2022-04-30 00:01:53 -07:00
48467b14b5 Remove utils.indent, python 3.9 provides a similar function 2022-04-30 00:01:00 -07:00
70df9d0682 Update filerenamer
Fixes an out of range exception during smart cleanup
Enforces field names to be present in format templates
Instead of removing the previous text when a replacement is empty, only
strip the characters "-_({[#" off the right of the string
2022-04-29 23:45:28 -07:00
049971a78a Merge branch 'removeRenamer' into develop 2022-04-29 23:29:24 -07:00
052e95e53b Remove old file renamer
Use PureWindowsPath objects in templates and tests, this allows both
path separators to be used and compared regardless of platform
2022-04-29 23:27:58 -07:00
fa0c193730 Merge branch 'MichaelFitzurka-feature-258/community-rating' into develop 2022-04-29 23:22:58 -07:00
a98eb2f81b Merge branch 'buildFix' into develop 2022-04-29 23:14:46 -07:00
ae4de0b3e6 Update build settings
Update excluded folders for flake8
Ensure pip install -e is used in both cases to install ComicTagger
Set required python version to 3.9
2022-04-29 23:06:57 -07:00
84b762877f Changes as per comments 2022-04-27 10:15:53 -04:00
2bb7aaeddf Merge branch 'MichaelFitzurka-feature-278/remove-empty-tags' into develop 2022-04-26 04:25:51 -07:00
08434a703e Remove empty versus clearing. 2022-04-22 09:48:47 -04:00
552a319298 Adding CommunityRating. fitxes #258 2022-04-22 09:39:32 -04:00
b9e72bf7a1 Merge branch 'cleanup' into develop 2022-04-20 13:15:44 -07:00
135544c0db Code cleanup 2022-04-20 13:13:03 -07:00
c297fd7fe7 Merge branch 'removeEnum' into develop 2022-04-20 11:44:42 -07:00
168f24b139 Partial revert of 'e616aa8373688fe0ee7394ddad5b409653354271'
Changing PageType to an Enum creates too many issues
2022-04-20 11:41:42 -07:00
89ddea7e9b Update documentation
Add CONTRIBUTING.md
Update install instructions in README
Update Build badge in README
2022-04-19 21:55:34 -07:00
bfe005cb63 Merge branch 'fixSerialization' into develop 2022-04-19 14:55:50 -07:00
48c2e91f7e Fix pip reference 2022-04-19 14:49:14 -07:00
02f365b93f Fix Makefile
make check now uses a venv
make CI uses the environment
Fix rar test
2022-04-19 14:45:36 -07:00
d78c3e3039 Fix serialization errors
Add tests to ensure issue is fixed
Add make check
Add pytest to make CI
2022-04-19 13:16:33 -07:00
f18513fd0e Fix Template help 2022-04-19 00:44:29 -07:00
caa94c4e28 Merge branch 'Renaming' into develop 2022-04-18 22:56:49 -07:00
7037877a77 Add a strict mode to file renaming
Strict renaming removes all reserved names and characters regardless
 of the operating system; without strict mode, only those for the
 current operating system are removed
Add more edge cases to smart cleanup
Add more tests for file renaming
2022-04-18 22:55:13 -07:00
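One way to express the strict/non-strict split with pathvalidate (which the renamer already uses); the real FileRenamer code may do this differently:

    from pathvalidate import sanitize_filename

    def clean_name(name: str, strict: bool) -> str:
        # 'universal' strips names/characters reserved on any platform,
        # 'auto' only those reserved on the platform we are running on.
        return sanitize_filename(name, platform="universal" if strict else "auto")

    print(clean_name('Spider-Man: "Blue" #1.cbz', strict=True))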
6cccf22d54 Allow switching between old and new rename templates
Show a message dialog explaining that there is a new template format
Add a dynamic label to show the effect of a rename
Add tests for FileRenamer
Remove the filename parameter from the determine_name function
2022-04-18 20:12:20 -07:00
ceb2b2861e Merge branch 'filename_tests' into develop 2022-04-18 20:11:06 -07:00
298f50cb45 Merge branch 'configDir' into develop 2022-04-18 20:10:50 -07:00
e616aa8373 Merge branch 'CodeCleanup' into develop 2022-04-18 20:10:08 -07:00
0fe881df59 Code cleanup 2022-04-18 19:40:04 -07:00
f3f48ea958 Add the ability to specify a config directory 2022-04-18 19:08:38 -07:00
9a9d36dc65 Add more tests for parsing filenames 2022-04-18 19:06:09 -07:00
028b728d82 Improve file renaming
Moves to Python format strings for renaming, handles directory
structures, moving of files to a destination directory, sanitizes
file paths with pathvalidate and takes a different approach to
smart filename cleanup using the Python string.Formatter class

Moving to Python format strings means we can point to python
documentation for syntax and all we have to do is document the
properties and types that are attached to the GenericMetadata class.

Switching to pathvalidate allows comictagger to more simply handle both
directories and symbols in filenames.

The only changes to the string.Formatter class are:
1. format_field returns
an empty string if the value is None or an empty string regardless of
the format specifier.
2. _vformat drops the previous literal text if the field value
is an empty string and lstrips the following literal text of closing
special characters.
2022-04-18 18:52:53 -07:00
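A sketch of change (1) described above: a format_field override that returns an empty string for missing values so a format spec cannot fail on None. Change (2) touches _vformat internals and is not reproduced here; the class name is illustrative.

    import string

    class BlankingFormatter(string.Formatter):
        def format_field(self, value, format_spec):
            # Empty/None values render as '' no matter what the spec says.
            if value is None or value == "":
                return ""
            return super().format_field(value, format_spec)

    fmt = BlankingFormatter()
    print(fmt.format("{series} #{issue:0>3}", series="52", issue="1"))   # 52 #001
    print(fmt.format("{series} #{issue:0>3}", series="52", issue=None))  # 52 #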
23f323f52d Add filename tests 2022-04-15 02:46:57 -07:00
49210e67c5 Fix rar_support variable 2022-04-14 16:25:25 -07:00
e519bf79be Merge branch 'MichaelFitzurka-feature/263-pages-keyboard' into develop 2022-04-14 16:23:51 -07:00
4f08610a28 Fix CI 2022-04-14 13:16:51 -07:00
544bdcb4e3 Using shortcuts and actions. 2022-04-14 12:22:53 -04:00
f3095144f5 Merge branch 'feature/149-add-tests' into develop 2022-04-12 15:20:58 -07:00
75f31c7cb2 Merge branch 'fileEncoding' into develop 2022-04-11 18:02:26 -07:00
f7f4e41c95 Catch exception when displaying raw tags 2022-04-11 17:16:07 -07:00
6da177471b Fix #242
Fix file encoding inconsistencies; Windows defaults to cp1252, which is
not Unicode compatible.
Add logging for all exceptions in the comicapi package
Ensure that all exceptions are logged and shown to the user
2022-04-11 14:52:41 -07:00
8a74e5b02b Keyboard commands for the Pages tab to make editing easier. 2022-04-10 18:10:09 -04:00
5658f261b0 Merge branch 'MichaelFitzurka-feature/m-age-rating' into develop 2022-04-10 11:05:06 -07:00
6da3bf764e Merge branch 'feature/m-age-rating' of https://github.com/MichaelFitzurka/comictagger into MichaelFitzurka-feature/m-age-rating 2022-04-10 11:04:48 -07:00
5e06d35057 Merge branch 'feature/253-recalc-page-dims' of https://github.com/MichaelFitzurka/comictagger into MichaelFitzurka-feature/253-recalc-page-dims 2022-04-10 11:00:10 -07:00
82bcc876b3 Merge branch 'MichaelFitzurka-feature/183-comment-html-fix' into develop 2022-04-10 10:59:40 -07:00
d7a6882577 Merge branch 'feature/183-comment-html-fix' of https://github.com/MichaelFitzurka/comictagger into MichaelFitzurka-feature/183-comment-html-fix 2022-04-10 10:59:00 -07:00
5e7e1b1513 Merge branch 'MichaelFitzurka-feature/246-dbl-page' into develop 2022-04-10 10:57:46 -07:00
cd9a02c255 Merge branch 'feature/246-dbl-page' of https://github.com/MichaelFitzurka/comictagger into MichaelFitzurka-feature/246-dbl-page 2022-04-10 10:54:49 -07:00
b47f816dd5 Merge branch 'abuchanan920-develop' into develop 2022-04-10 10:50:41 -07:00
d1a649c0ba Adding "M" age rating for 2.0 schema 2022-04-06 11:49:54 -04:00
b7759506fe Menu command to clear out page size, height, width on demand, to then recalculate on save. 2022-04-05 16:23:26 -04:00
97777d61d2 Fixing some HTML to comment translations. 2022-04-05 16:16:27 -04:00
e622b56dae Adding attribs to ImageMetadata class. 2022-04-05 11:23:18 -04:00
a24251e5b4 Merge branch 'comictagger:develop' into develop 2022-04-05 10:38:36 -04:00
d4470a2015 Use more idiomatic regular expression string
Co-authored-by: Timmy Welch <timmy@narnian.us>
2022-04-05 10:37:33 -04:00
d37022b71f Merge branch 'comictagger:develop' into feature/246-dbl-page 2022-04-05 09:59:20 -04:00
5f38241bcb Double Page functionality. 2022-04-05 09:52:59 -04:00
4fb9461491 Stop a crash when the logs folder already exists 2022-04-05 00:58:19 -07:00
c9b5bd625f Fix parsing of filenames that end with an ID such as [__######__] 2022-04-04 22:34:31 -04:00
558072a330 Create the logs folder before attempting to use it 2022-04-04 19:28:38 -07:00
26fa7eeabb Merge branch 'logging' into develop 2022-04-04 19:16:54 -07:00
c50cef568e Add basic logging 2022-04-04 19:10:22 -07:00
2db80399a6 Merge branch 'MichaelFitzurka-feature/247-empty-tags' into develop 2022-04-04 14:16:29 -07:00
4936c31c18 black changed some single quotes to double quotes. 2022-04-04 16:36:46 -04:00
ada88d719f Empty metadata should not assign an empty tag. 2022-04-03 16:50:27 -04:00
1b28623fe3 Bookmark functionality. Fixes #212. 2022-04-03 15:44:20 -04:00
593f568ea7 method renamed to match new changes. 2022-04-03 15:39:03 -04:00
7b4dba35b5 Ensure that tags are overwritten when saving metadata 2022-04-02 15:41:50 -07:00
c95e700025 Merge branch 'CodeCleanup' into develop 2022-04-02 15:36:03 -07:00
e10f7dd7a7 Code cleanup
Remove no longer used google scripts
Remove convenience files from comictaggerlib and import comicapi directly
Add type-hints to facilitate auto-complete tools
Make PyQt5 code more compatible with PyQt6

Implement automatic tooling
isort and black for code formatting
Line length has been set to 120
flake8 for code standards with exceptions:
E203 - Whitespace before ':'  - format compatibility with black
E501 - Line too long          - flake8 line limit cannot be set
E722 - Do not use bare except - fixing bare except statements is a
                                lot of overhead and there are already
                                many in the codebase

These changes, along with some manual fixes, create much more readable code.
See examples below:

diff --git a/comicapi/comet.py b/comicapi/comet.py
index d1741c5..52dc195 100644
--- a/comicapi/comet.py
+++ b/comicapi/comet.py
@@ -166,7 +166,2 @@ class CoMet:

-            if credit['role'].lower() in set(self.editor_synonyms):
-                ET.SubElement(
-                    root,
-                    'editor').text = "{0}".format(
-                    credit['person'])

@@ -174,2 +169,4 @@ class CoMet:
         self.indent(root)
+            if credit["role"].lower() in set(self.editor_synonyms):
+                ET.SubElement(root, "editor").text = str(credit["person"])

diff --git a/comictaggerlib/autotagmatchwindow.py b/comictaggerlib/autotagmatchwindow.py
index 4338176..9219f01 100644
--- a/comictaggerlib/autotagmatchwindow.py
+++ b/comictaggerlib/autotagmatchwindow.py
@@ -63,4 +63,3 @@ class AutoTagMatchWindow(QtWidgets.QDialog):
             self.skipButton, QtWidgets.QDialogButtonBox.ActionRole)
-        self.buttonBox.button(QtWidgets.QDialogButtonBox.Ok).setText(
-            "Accept and Write Tags")
+        self.buttonBox.button(QtWidgets.QDialogButtonBox.StandardButton.Ok).setText("Accept and Write Tags")

diff --git a/comictaggerlib/cli.py b/comictaggerlib/cli.py
index 688907d..dbd0c2e 100644
--- a/comictaggerlib/cli.py
+++ b/comictaggerlib/cli.py
@@ -293,7 +293,3 @@ def process_file_cli(filename, opts, settings, match_results):
                 if opts.raw:
-                    print((
-                        "{0}".format(
-                            str(
-                                ca.readRawCIX(),
-                                errors='ignore'))))
+                    print(ca.read_raw_cix())
                 else:
2022-04-02 14:21:37 -07:00
84dc148cff Merge branch 'MichaelFitzurka-feature/239-add-web-btn' into develop 2022-04-02 12:57:14 -07:00
14c9609efe Merge branch 'MichaelFitzurka-feature/232-inv-page-type' into develop 2022-04-02 12:57:04 -07:00
2a3620ea21 Replacing requests validation with urlparse. 2022-04-01 09:48:53 -04:00
8c5d4869f9 Updates to comments. 2022-03-31 13:34:40 -04:00
c0aa665347 Adding web link convenience button to open a valid url value in a browser window. 2022-03-31 12:40:43 -04:00
6900368251 Displaying the invalid value with an Error indicator, that way the user can see what the invalid value is and has the option to leave it or change it. 2022-03-31 10:25:00 -04:00
ac1bdf2f9c Merge branch 'abuchanan920-develop' into develop 2022-03-29 22:29:48 -07:00
c840724c9c Merge branch 'rhaussmann-natsort_fix' into develop 2022-03-29 22:23:00 -07:00
220606a046 Merge branch 'comictagger:develop' into natsort_fix 2022-03-29 09:28:38 -06:00
223269cc2e update requirements 2022-03-29 09:23:05 -06:00
31b96fdbb9 Merge branch 'feature/179-7zip' into develop 2022-03-28 23:29:02 -07:00
908a500e7e One more. 2022-03-26 12:45:33 -04:00
ae20a2eec8 Updates as requested. 2022-03-26 12:42:33 -04:00
287c5f39c1 Merge branch 'comictagger:develop' into feature/179-7zip 2022-03-26 12:27:34 -04:00
cfd2489228 Merge branch 'feature-227-data-src-alt-covers' into develop 2022-03-21 17:52:22 -07:00
86a83021a6 Update to look for images in data-src as well as src. 2022-03-21 15:29:31 -04:00
d7595f5ca1 Merge branch 'comictagger:develop' into feature/179-7zip 2022-03-21 09:27:47 -04:00
5a2bb66d5b Merge branch 'unicodeFix' into develop 2022-03-20 10:43:02 -07:00
5de2ce65a4 Remove print statements
Fixes #223
2022-03-20 10:40:30 -07:00
95d167561d Fix locale for macOS 2022-03-20 02:10:11 -07:00
7d2702c3b6 Update pyinstaller 2022-03-20 02:09:47 -07:00
d0f96b6511 Ensure XML is UTF-8 encoded 2022-03-19 18:17:38 -07:00
ba71e61d87 Added 7zip support thru py7zr.
Tweaked save of archive file and images in comicarchive.
2022-03-18 15:14:42 -04:00
191d72554c Explicitly specify unsigned integer sort to fix comic page order 2022-03-14 13:27:03 -04:00
628251c75b Merge branch 'metadataEdit' into develop 2022-02-21 20:22:28 -08:00
71499c3d7c Merge branch 'bugFixes' into develop
Closes #65,#59,#154,#180,#187,#209
2022-02-21 20:06:44 -08:00
03b8bf4671 Bug fixes
Closes #65,#59,#154,#180,#187,#209
2022-02-21 20:05:07 -08:00
773735bf6e Merge pull request #213 from lordwelch/series_sort
Cleanup settings from #200
2022-01-22 17:29:26 -08:00
b62e291749 Cleanup settings from #200
Rename blacklist to filter to be more accurate
2022-01-22 15:00:22 -08:00
a66b5ea0e3 Series sorting filtering (#200)
Because additional series results are now returned due to #143 the series selection window can end up with a large number of results that are not usually sorted in a useful way.

I've created 3 settings that can help find the correct series quickly

use the publisher blacklist - can be toggled from the series selection screen, as well as a setting for its default behaviour
a setting to make the results initially sorted by start year instead of the default number of issues
a setting to initially put exact and near matches at the top of the list
2022-01-22 14:40:45 -08:00
615650f822 Update xml instead of overwrite 2022-01-05 22:01:00 -08:00
ed16199940 Merge pull request #132 from lordwelch/FixLanguageSort
Sort language correctly
2021-12-15 23:41:40 -08:00
7005bd296e Merge pull request #131 from lordwelch/PageListEditorExtendedSelection
Allow extended selection in the page list editor
2021-12-15 23:40:08 -08:00
cdeca34791 Add experimental word splitting to the filename parser
Adds a global setting as well as a setting that is only in effect
during auto-tagging
2021-12-15 10:58:34 -08:00
aefe778b36 Add publisher and imprint handling
Imprint handling has been added to utils and uses a subclassed dict to
return a tuple for imprint matching, this may not be the best idea but
it works for now.

Add settings option auto_imprint
Add cli flag -a, --auto-imprint
2021-12-15 10:54:16 -08:00
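A hypothetical shape for the 'subclassed dict returning a tuple' idea mentioned above; the tuple layout and data here are made up and not the real comicapi implementation:

    class ImprintDict(dict):
        """Maps a lower-cased imprint name to its parent publisher."""

        def __getitem__(self, key):
            # Return (found, imprint, publisher) so a miss does not raise.
            publisher = super().get(key.casefold())
            if publisher is None:
                return (False, "", key)
            return (True, key, publisher)

    imprints = ImprintDict({"vertigo": "DC Comics"})
    print(imprints["Vertigo"])  # (True, 'Vertigo', 'DC Comics')
    print(imprints["Marvel"])   # (False, '', 'Marvel')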
c6e1dc87dc Allow extended selection in the page list editor 2021-12-15 10:53:01 -08:00
ef37158e57 Sort language correctly 2021-12-15 10:52:25 -08:00
444e67100c Merge pull request #207 from jpcranford/patch-1
Fixed typo
2021-12-15 08:49:15 -08:00
82d054fd05 Fixed typo 2021-12-14 16:52:48 -07:00
f82c024f8d Merge pull request #206 from lordwelch/rarOptionalFix
Fix rarfile import as by default it is optional
2021-12-12 18:49:05 -08:00
da4daa6a8a Fix rarfile import as by default it is optional 2021-12-12 18:46:28 -08:00
6e1e8959c9 Merge pull request #204 from lordwelch/buildSystem
Update build
2021-12-12 18:15:58 -08:00
aedc5bedb4 Update build
Separate dependencies into files and add optional dependencies
Update natsort usage to be compliant with the latest version (#203)
Set PyQt5 to 5.15.3, 5.15.4 has issues with pyinstaller
Add pyproject.toml with setuptools, isort and black configuration
Add optional dependencies (#191)
Update README (#174)
2021-10-23 21:39:58 -07:00
93f5061c8f Add GitHub Actions yaml file (#201)
Upload artifacts; this allows easy testing of macOS and Windows binaries
Update unrar-cffi for Python 3.9 wheels
2021-09-29 01:17:04 -07:00
d46e171bd6 Merge pull request #199 from lordwelch/seriesSearch
Improve issue identification
2021-09-26 17:09:54 -07:00
e7fe520660 Improve issue identification
Move title sanitizing code to utils module
Update issue identifier to compare sanitized names
2021-09-26 17:06:30 -07:00
91f288e8f4 Update travis
hold windows to 3.7.9 as unrar-cffi only has windows wheels for 3.7
switch to using builtin python for macOS
remove ssl dlls from comictagger.spec
require pyinstaller=4.3 to allow macOS codesigning
Update python usage
restrict builds to tags and pull requests
2021-09-26 12:51:17 -07:00
d7bd3bb94b Merge pull request #198 from lordwelch/143-regression
Fix regression of #143
2021-09-25 23:01:38 -07:00
9e0b0ac01c Fix regression of #143 2021-09-25 22:59:59 -07:00
03a8d906ea Merge pull request #189 from lordwelch/seriesSearch
Series search
2021-09-21 19:59:26 -07:00
fff28cf6ae Improve searchForSeries
Refactor removearticles to only remove articles
Add normalization on the search string and the series name results

Searching now only compares ASCII a-z and 0-9, and all other characters
are replaced with a single space; this is done to both the search string
and the result. This fixes an issue with names that are separated by a
hyphen (-) in the filename but in the Comic Vine name are separated by a
slash (/) and other similar issues.
2021-08-29 17:35:34 -07:00
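A minimal sketch of the normalisation rule described here, applied to both the search string and the result; not the project's actual searchForSeries code:

    import re

    def normalize(name: str) -> str:
        # Keep only ASCII letters and digits; every other character run
        # becomes a single space.
        return re.sub(r"[^a-z0-9]+", " ", name.casefold()).strip()

    # A hyphen in the filename and a slash on Comic Vine now produce the
    # same search key.
    print(normalize("spider-man deadpool"))  # 'spider man deadpool'
    print(normalize("Spider-Man/Deadpool"))  # 'spider man deadpool'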
9ee95b8d5e Merge pull request #192 from lordwelch/fixes
Fix errors
2021-08-16 17:37:19 -07:00
11bf5a9709 Move to python requests module
Add requests to requirements.txt
Requests is much simpler and fixes all SSL errors.
Comic Vine now requires a unique user-agent string
2021-08-11 20:13:53 -07:00
af4b3af14e Cleanup metadata handling
Mainly corrects for consistency in most situations
CoMet is not touched as there is no support in the GUI and it has odd requirements on attributes
2021-08-07 21:54:29 -07:00
9bb7fbbc9e Fix errors
Libraries updated and these are no longer needed
2021-08-05 17:21:21 -07:00
beb7c57a6b fix: change accidental overwrite of reserved __dir__ 2019-10-20 00:36:13 +02:00
ce48730bd5 fix: choco install multiple packages breaks with version 2019-10-20 00:25:52 +02:00
806b65db24 freeze windows python version to 3.7.5 2019-10-20 00:20:57 +02:00
cdf9a40227 fix: add setup.py install before testing 2019-10-20 00:08:11 +02:00
0adac47968 add pytest run to travis ci 2019-10-20 00:02:03 +02:00
096a89eab4 add pytest 2019-10-19 23:57:49 +02:00
f877d620af allow for alpha releases in travis 2019-10-06 16:25:31 +02:00
c175e46b15 Increase comicvine search results per request to max (#164) 2019-10-06 07:14:11 -07:00
f0bc669d40 PyPI release (#163) 2019-10-06 07:01:33 -07:00
db3db48e5c Better console handling on Windows (#162) 2019-10-06 05:15:18 -07:00
cec585f8e0 Changed: use unrar-cffi for cbr handling (#151) 2019-10-05 23:59:52 +02:00
d71a48d8d4 Better support for CLI mode on windows (#158) 2019-10-05 23:55:34 +02:00
9e4a560911 Better support for macOS dark mode (#159) 2019-10-05 23:53:56 +02:00
f244255386 update urls to new github comictagger organization 2019-10-05 16:31:12 +02:00
254e2c25ee Brand new README file (#156) 2019-10-05 16:09:04 +02:00
7455cf17c8 fix broken drag & drop on macOS (#142) 2019-09-29 23:02:44 +01:00
d93cb50896 add version info to mac info_plist (#146) 2019-09-29 22:11:42 +01:00
3316cab775 fix travis regex 2019-09-28 17:05:15 +02:00
c01f00f6c3 multi platform build on travis (#145) 2019-09-28 17:01:05 +02:00
06ff25550e use setuptools_scm to handle version 2019-09-28 14:59:36 +02:00
1f7ef44556 remove obsolete download_url (https://git.io/JeZrE) 2019-09-28 14:57:09 +02:00
fabf2b4df6 Merge tag '1.2.0+2' into develop
1.2.0+2
2019-09-25 01:55:29 +02:00
0fbaeb861e Merge branch 'release/1.2.0+2' 2019-09-25 01:55:15 +02:00
3dcc04a318 try to fix appveyor deployment 2019-09-25 01:55:03 +02:00
933e053df3 Merge tag '1.2.0+1' into develop
1.2.0+1
2019-09-25 01:30:32 +02:00
5f22a583e8 Merge branch 'release/1.2.0+1' 2019-09-25 01:30:03 +02:00
3174b49d94 bump version to force appveyor deploy 2019-09-25 01:29:50 +02:00
93ce311359 Release 1.2.0 2019-09-25 00:51:28 +02:00
bc43c5e329 Release 1.2.0 2019-09-25 00:50:50 +02:00
9bf7aa20fb bump version to 1.2.0 2019-09-25 00:49:52 +02:00
5416bb15c3 Appveyor GitHub release (#139) 2019-09-24 23:36:08 +01:00
562a659195 Travis build for macOS build (#100) 2019-09-24 23:30:23 +01:00
1d3d6e2741 bump version 1.1.32-rc1 2019-09-22 12:47:19 +01:00
c9724527b5 Fixed TLS version for the Comic Vine (#135)
* Fixed TLS version for the comicvine

* Fixed TLS version for the Comic Vine - Auto-Identify and Auto-Tag functions
2019-09-22 12:40:59 +01:00
2891209b4e bump version 2019-02-04 20:27:37 +01:00
5b87e19d3e Limit Comic Vine search result queries (#119)
* Tweaked search string based on new comic vine search behavior
Placated Beautiful Soup by passing the parser

* Limit search results fetching after recent Comic Vine changes.
Also, minor debug comment tweaks.
2019-02-04 20:16:44 +01:00
674e24fc41 Enable Zip64 (#96) 2018-09-20 00:09:24 +02:00
91f82fd6d3 Python3 and QT5 upgrade (#109)
* Tweaked search string based on new comic vine search behavior
Placated Beautiful Soup by passing the parser

* First cut at porting to Python 3 and PyQt5

* remove debug print

* tweaked progress dialog handling for issues on ubuntu gui

* Handle bad key more gracefully

* More integration of unrarlib into settings and rest of app

* Better handling of "personal" unrar lib setting

* PEP 440-compliant version string

* Tuned linux rar help strings

* Got setup working again
* Attempts to build unrar on install
* Some minimal desktop integration on various platforms

* Fix wrong shortfile

* More setup.py enhancements
* Use proper temp file
* Added comment block at top

* Comment out desktop integration attempt for now

* Updated some links and info

* Fixed the html a bit

* Repaired some images that caused libpng to complain

* update readme re:  py3qt5 branch changes

* another note

* #108 feat: try to simplify windows build using only pip and python3

* #108 feat: fix python location on appveyor (try 1)

* #108 feat: use venv (try 2)

* #108 feat: use venv (try 3)

* #108 feat: update to latest pyinstaller develop branch

* #108 feat: update to latest pyinstaller develop branch (again)

* #108: add ssl libraries for windows packaging

* #108: refresh env in win build to pick the right mingw

* #108: change order of win build script operations

* #113: fix subprocess usage in pyinstaller package

* bump version
2018-09-19 22:05:39 +02:00
cf43513d52 feat: add appveyor configuration 2018-01-17 13:35:10 -08:00
a7288a94cc #98 Multiplatform pyinstaller dist (#99)
Multiplatform pyinstaller dist (#98)
2018-01-14 16:41:27 +01:00
d0918c92e4 #87 Update comic vine url and ssl config (#93)
* #87 fix ssl comicvine communication

* handle missing libunrar. update macos makefile. remove version check window. bump version.

* update release notes

* #87 fix ssl context in several places. update comicvine api url.

* fix drag and drop issues on macOS

* bump version to 1.1.16-beta-rc2

* use PNG conversion for Windows build
2017-12-21 15:19:45 +01:00
4ff2061568 Merge pull request #74 from Alkpone/master
Bugs in move2folder.py script
2015-03-22 10:49:21 +01:00
08c402149b Prevent error when no file has been detected
Script raised an unhandled exception:  local variable 'fmt_str' referenced before assignment
Traceback (most recent call last):
  File "/volume1/@appstore/comictagger/comictaggerlib/options.py", line 233, in launch_script
    script.main()
  File "/volume1/@appstore/comictagger/scripts/move2folder.py", line 90, in main
    print >> sys.stderr, fmt_str.format("")
UnboundLocalError: local variable 'fmt_str' referenced before assignment
2015-03-21 14:32:55 +01:00
184dbf0684 Prevent error when running the script
Script raised an unhandled exception:  coercing to Unicode: need string or buffer, NoneType found
Traceback (most recent call last):
  File "/root/comictagger/comictaggerlib/options.py", line 233, in launch_script
    script.main()
  File "scripts/move2folder.py", line 80, in main
    ca = ComicArchive(filename, settings.rar_exe_path)
  File "/root/comictagger/comicapi/comicarchive.py", line 648, in __init__
    with open(fname, 'rb') as fd:
TypeError: coercing to Unicode: need string or buffer, NoneType found
2015-03-21 14:17:05 +01:00
ed0050ba05 fixed typo 2015-03-06 11:26:47 +01:00
68030a1024 updated to unrar 0.3 2015-03-01 16:14:01 +01:00
983ad1fcf4 Merge branch 'fcanc-master' 2015-03-01 15:44:11 +01:00
d959ac0401 Huge code cleanup
- `autopep8 -aa` for general cleanup;
- Changed order of imports; they should be ordered into 3 groups:
1. standard library imports;
2. 3rd party packages;
3. project imports.
- I commented various imports that were reported as unused by my IDE.
If everything goes fine we can consider deleting them;
- The Apache license disclaimers are now comments since triple-quotes
should be used only for docstrings;
- Fix - `utils.centerWindowOnParent` did not resolve, changed to
`centerWindowOnParent`
2015-02-22 03:30:32 +01:00
2a550db02a Merge pull request #1 from davide-romanini/master
Merge davide-romanini commits
2015-02-18 20:44:28 +01:00
6369fa5fda updated readme 2015-02-16 16:34:38 +01:00
d5a13a4206 various fixes after merging comicstream-integr 2015-02-16 16:19:38 +01:00
b2532ce03a Merge branch 'comicstream-integr' 2015-02-16 16:18:00 +01:00
79a67d8c29 Merge pull request #71 from branch 'fcanc-master' 2015-02-16 14:51:57 +01:00
d9bd38674c added new dependencies to requirements.txt. with new unrar needs UNRAR_LIB_PATH to be set to start 2015-02-16 14:27:13 +01:00
a0154aaaae Merge commit '17f74cf2968a4e0aa01d7309afe7e1407b8abef2' into comicstream-integr 2015-02-16 14:09:21 +01:00
17f74cf296 Squashed 'comicapi/' changes from b7d2458..18f87d3
18f87d3 using comicapi subtree classes

git-subtree-dir: comicapi
git-subtree-split: 18f87d35b1b2cf5e135fad353419eda11209a6be
2015-02-16 14:09:21 +01:00
3f112cd578 Merge commit 'f6439049d8d8b5a4709f1b78afbfd289d00e8c25' as 'comicapi' 2015-02-16 13:27:21 +01:00
f6439049d8 Squashed 'comicapi/' content from commit b7d2458
git-subtree-dir: comicapi
git-subtree-split: b7d2458b80467a47be1d1d58b31ffcac62c2743c
2015-02-16 13:27:21 +01:00
2fe818872c removed split comicapi 2015-02-16 13:25:35 +01:00
a419969b85 autopep8 -aa
--aggressive, level 2
2015-02-15 12:55:04 +01:00
ee52448f17 autopep8 -a
--aggressive, level 1
2015-02-15 12:44:09 +01:00
79103990fa autopep8
automatically formats Python code to conform to the PEP 8 style guide —
default usage (whitespace changes only)
2015-02-15 11:44:00 +01:00
22dbafbc00 Code cleanup, round 1
Some formatting cleanup, plus print modernization, & typos correction.
2015-02-14 00:08:07 +01:00
0df283778c Indentation
Replaced tabs with spaces, and removed some trailing spaces.
2015-02-12 23:57:46 +01:00
a6282b5449 Move2folder script
Added a script to organize comics in a folder tree by Publisher/Series
(Volume).
2015-02-12 19:15:17 +01:00
5574280ad6 Filename parser tweaks
Fixes the Scan Info tag being left blank when the filename doesn’t
provide an issue number.
2015-02-12 19:09:33 +01:00
19b907b742 refactor (continue) 2015-02-11 19:45:45 +01:00
a9ff8f37b0 refactor core comicarchive classes in its own package comicapi 2015-02-11 19:45:02 +01:00
0769111f8c #70 added support for the day field on the gui 2015-02-09 21:50:02 +01:00
cf6ae8b5ae aligned with comicstreamer updates
refactor qt specific functions in utils.py in new ui.qtutils module
2015-02-02 17:20:48 +01:00
1d6846ced3 gitignore
changed to README.md for github.
2015-01-23 17:42:22 +01:00
d516d80093 Removed unused FileTableWidget, and explicitly set the column count. This fixes a problem on ArchLinux systems
git-svn-id: http://comictagger.googlecode.com/svn/trunk@744 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-07-06 18:19:50 +00:00
bf9ab71fd9 release notes update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@737 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-06-14 03:56:46 +00:00
33b00ad323 Text tweaks
git-svn-id: http://comictagger.googlecode.com/svn/trunk@736 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-06-14 03:56:32 +00:00
301ff084f1 fixes for webp, api key handling, and CV rate limit
git-svn-id: http://comictagger.googlecode.com/svn/trunk@734 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-06-13 06:26:44 +00:00
0c146bb245 minor fix
git-svn-id: http://comictagger.googlecode.com/svn/trunk@733 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-06-13 06:26:13 +00:00
08cc4a1acb Use pip-installed pyinstaller
git-svn-id: http://comictagger.googlecode.com/svn/trunk@732 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-06-13 06:25:35 +00:00
f97a1653d9 dos-ified release_notes file
git-svn-id: http://comictagger.googlecode.com/svn/trunk@728 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-18 15:44:38 +00:00
d9dbab301a prep for release
git-svn-id: http://comictagger.googlecode.com/svn/trunk@727 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-18 15:42:05 +00:00
3d93197101 Added a warning when a rar file is loaded and the unrar tool isn't known
Fixed a bug where an erroneous message is shown when a file is reloaded

git-svn-id: http://comictagger.googlecode.com/svn/trunk@726 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-12 06:08:07 +00:00
752a1d8923 actual version bump
git-svn-id: http://comictagger.googlecode.com/svn/trunk@714 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 04:04:40 +00:00
68002daffa bumped version number
git-svn-id: http://comictagger.googlecode.com/svn/trunk@713 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 04:02:13 +00:00
ad5062c582 Persist some auto-tag options
git-svn-id: http://comictagger.googlecode.com/svn/trunk@712 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 03:21:24 +00:00
2680468f34 New CBL transform to copy story arcs to generic tags
git-svn-id: http://comictagger.googlecode.com/svn/trunk@711 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 02:06:44 +00:00
6156fc296a Added settings option to auto-clear form when importing from CV
added settings option to remove html tables from CV summary

git-svn-id: http://comictagger.googlecode.com/svn/trunk@710 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 01:52:14 +00:00
0feed294d4 Avoid an exception condition
git-svn-id: http://comictagger.googlecode.com/svn/trunk@709 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-09 01:50:40 +00:00
e57736b955 Decouple comicarchive from settings
Enforce single instance of GUI app

git-svn-id: http://comictagger.googlecode.com/svn/trunk@708 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:13:04 +00:00
70fcdc0129 Decouple comicarchive from settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@707 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:12:05 +00:00
9a64195ebd Decouple comicarchive from settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@706 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:10:18 +00:00
b0f229f851 Decouple comicarchive from settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@705 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:09:03 +00:00
877a5ccd85 Decouple comicarchive from settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@704 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:08:22 +00:00
c0f2e2f771 Decouple comicarchive from settings
git-svn-id: http://comictagger.googlecode.com/svn/trunk@703 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-08 07:07:39 +00:00
0adfc9beb3 properly decode the user settings path
git-svn-id: http://comictagger.googlecode.com/svn/trunk@702 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-06 19:46:56 +00:00
d0bc41d7ee Allow user to specify the GUI start up tag style on the command line
git-svn-id: http://comictagger.googlecode.com/svn/trunk@701 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-06 19:44:47 +00:00
fa46a065a4 fixed some spelling errors
git-svn-id: http://comictagger.googlecode.com/svn/trunk@700 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-06 19:43:21 +00:00
8fcd5ba7d6 try to parse table HTML in the comment field
git-svn-id: http://comictagger.googlecode.com/svn/trunk@699 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-06 19:42:11 +00:00
759cdc6b40 use the requirements in the setup
git-svn-id: http://comictagger.googlecode.com/svn/trunk@698 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-04-06 19:40:22 +00:00
1405d9ff0e more process tweaks
git-svn-id: http://comictagger.googlecode.com/svn/trunk@692 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 22:28:50 +00:00
d8fcbbad0a Upload the zip package to pypi index site also
git-svn-id: http://comictagger.googlecode.com/svn/trunk@691 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 22:28:03 +00:00
3eca25db34 changed build checklist
git-svn-id: http://comictagger.googlecode.com/svn/trunk@688 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 21:39:16 +00:00
c8a5a89369 changed download URL to point at google drive site
git-svn-id: http://comictagger.googlecode.com/svn/trunk@687 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 21:38:55 +00:00
ff578ea819 bumped version to 1.1.12
git-svn-id: http://comictagger.googlecode.com/svn/trunk@686 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 21:38:22 +00:00
1c730c25d5 removed auto-upload to google code site
git-svn-id: http://comictagger.googlecode.com/svn/trunk@685 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 21:38:02 +00:00
35b7b39b86 Don't choke when the version string server fails.
git-svn-id: http://comictagger.googlecode.com/svn/trunk@683 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 20:59:35 +00:00
719c711484 Language tweak
git-svn-id: http://comictagger.googlecode.com/svn/trunk@668 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 18:03:08 +00:00
afbbc9d00c git-svn-id: http://comictagger.googlecode.com/svn/trunk@667 6c5673fe-1810-88d6-992b-cd32ca31540c 2014-03-23 17:48:59 +00:00
b8e0a45fc8 bumped version and release notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@665 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 17:31:00 +00:00
b7360dd33e Updated copyright dates
git-svn-id: http://comictagger.googlecode.com/svn/trunk@664 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 17:30:23 +00:00
d9f1956426 handle a crash bug when file starts with --
git-svn-id: http://comictagger.googlecode.com/svn/trunk@663 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-23 16:56:04 +00:00
b5c7f36410 New pyunrar version to handle rar tools 5.x
git-svn-id: http://comictagger.googlecode.com/svn/trunk@662 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:43:03 +00:00
0b0663d935 Update copyright date
git-svn-id: http://comictagger.googlecode.com/svn/trunk@661 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:42:07 +00:00
eee1f65436 handle corner case of non-numeric issue ending in "."
git-svn-id: http://comictagger.googlecode.com/svn/trunk@660 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:41:38 +00:00
9a8d4149f2 fixed spelling error
git-svn-id: http://comictagger.googlecode.com/svn/trunk@659 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:40:01 +00:00
b02a205668 Make sure all error print outs are unicode
Catch error when zipfile list fails

git-svn-id: http://comictagger.googlecode.com/svn/trunk@658 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:38:36 +00:00
57284dfbed fixed typo in makefile
git-svn-id: http://comictagger.googlecode.com/svn/trunk@657 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-03-22 21:37:19 +00:00
afcbde7fc6 update todo
git-svn-id: http://comictagger.googlecode.com/svn/trunk@651 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:45:15 +00:00
151fac5bf1 updated release notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@650 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:45:06 +00:00
57c1efdab9 makefile TAGGER_BASE can be set in the environment
git-svn-id: http://comictagger.googlecode.com/svn/trunk@649 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:43:22 +00:00
6b272cef87 When searching for a title, convert the string to a list of words separated by "ANDS", and then back to a string
git-svn-id: http://comictagger.googlecode.com/svn/trunk@648 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:40:58 +00:00
1cdc732739 Added a message when not able to open selected folder or file
git-svn-id: http://comictagger.googlecode.com/svn/trunk@647 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:39:47 +00:00
d1b00d162d Allow any size archive to be considered a comic
git-svn-id: http://comictagger.googlecode.com/svn/trunk@646 6c5673fe-1810-88d6-992b-cd32ca31540c
2014-01-31 04:37:13 +00:00
3dd3980bc1 update todo file
git-svn-id: http://comictagger.googlecode.com/svn/trunk@645 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-08-18 18:01:01 +00:00
cbf475eb26 removed filtering out of period (".")
git-svn-id: http://comictagger.googlecode.com/svn/trunk@644 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-08-18 18:00:04 +00:00
ac8b575659 bumped version number
git-svn-id: http://comictagger.googlecode.com/svn/trunk@643 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-08-18 17:56:38 +00:00
ac8ef286a4 Perform the rar test first, since some rars can be falsely identified as zips, somehow...
git-svn-id: http://comictagger.googlecode.com/svn/trunk@641 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-07-23 17:06:35 +00:00
f567dc37be Handle case of None value credit tags in XML
git-svn-id: http://comictagger.googlecode.com/svn/trunk@640 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-07-08 23:32:24 +00:00
15c5fc5258 release notes update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@637 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-09 01:31:26 +00:00
cc985b52a5 Do the limited series check/elimination after cover matching
git-svn-id: http://comictagger.googlecode.com/svn/trunk@636 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-08 02:39:06 +00:00
910b0386be Remove tooltip if not expandable
git-svn-id: http://comictagger.googlecode.com/svn/trunk@635 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-08 02:38:06 +00:00
0fece23405 Allow rename w/smart cleanup to have "--"
git-svn-id: http://comictagger.googlecode.com/svn/trunk@634 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 22:30:32 +00:00
eee320e0c7 bumped version number
git-svn-id: http://comictagger.googlecode.com/svn/trunk@632 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 21:07:00 +00:00
accabf8e21 Added keyboard shortcut for form clear
git-svn-id: http://comictagger.googlecode.com/svn/trunk@631 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 21:06:48 +00:00
acc253d35c todo update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@630 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 18:36:30 +00:00
ede0154efe issueCount now gets passed to issueidentifier.
a possible technique for eliminating potential volumes is coded, but commented out for now

git-svn-id: http://comictagger.googlecode.com/svn/trunk@629 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 18:24:57 +00:00
5b805b1428 auto-tag progress window now uses coverimagewidget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@628 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 18:22:14 +00:00
2e6b2a89db Added a raw image data mode for the coverimagewidget
git-svn-id: http://comictagger.googlecode.com/svn/trunk@627 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-06 18:21:34 +00:00
c028bb4ddc Make sure to catch all non-numeric characters after a # for the issue number
git-svn-id: http://comictagger.googlecode.com/svn/trunk@626 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-04 01:48:42 +00:00
b70beb5684 more file name parser enhancements
git-svn-id: http://comictagger.googlecode.com/svn/trunk@625 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-04 01:22:39 +00:00
128af4521b better filename parsing
git-svn-id: http://comictagger.googlecode.com/svn/trunk@623 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-02 16:31:50 +00:00
43cf7a80c8 remove print
git-svn-id: http://comictagger.googlecode.com/svn/trunk@622 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-01 22:33:05 +00:00
3223ed190c Make sure form is updated when removing top item from list
git-svn-id: http://comictagger.googlecode.com/svn/trunk@621 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-01 22:32:20 +00:00
9e2817c037 deal with CV bug (wrong result set count) when not specifying page=1
git-svn-id: http://comictagger.googlecode.com/svn/trunk@620 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-01 22:31:25 +00:00
6e7bd10fb9 deal with pagination bug on comicvine side reporting wrong result set size when not specifying page=1
git-svn-id: http://comictagger.googlecode.com/svn/trunk@619 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-05-01 22:30:30 +00:00
c099205779 Reworked the issue string parsing
git-svn-id: http://comictagger.googlecode.com/svn/trunk@618 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-30 18:05:10 +00:00
47d8da0e80 removed extra line
git-svn-id: http://comictagger.googlecode.com/svn/trunk@615 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-22 02:37:13 +00:00
0f7e88e58c bump to 1.1.8-beta
git-svn-id: http://comictagger.googlecode.com/svn/trunk@614 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-22 00:49:20 +00:00
65902a15b1 add-on script for renaming files based on transform list
git-svn-id: http://comictagger.googlecode.com/svn/trunk@613 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-21 06:55:32 +00:00
a68b2babeb some reworking so scripts get passed all options after scriptname
git-svn-id: http://comictagger.googlecode.com/svn/trunk@612 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-21 06:53:44 +00:00
4098802e43 sleep 1 sec before retrying after http 500 error
git-svn-id: http://comictagger.googlecode.com/svn/trunk@611 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-21 06:51:43 +00:00
9c14258e9f verify need to check version in GUI
git-svn-id: http://comictagger.googlecode.com/svn/trunk@610 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-17 18:10:08 +00:00
33bdbe8be8 verify need to check version in CLI
git-svn-id: http://comictagger.googlecode.com/svn/trunk@609 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-17 18:09:43 +00:00
a76864c109 be a little smarter in colon replacement in renaming
git-svn-id: http://comictagger.googlecode.com/svn/trunk@608 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-17 18:09:10 +00:00
cb68d07751 Added special handling of HTTP 500 error that Comic Vine seems to give occasionally.
git-svn-id: http://comictagger.googlecode.com/svn/trunk@607 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-17 18:08:39 +00:00
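A generic sketch of the sleep-and-retry behaviour described in this commit and the 'sleep 1 sec before retrying' commit a few entries up; the original code predates the later switch to the requests library, so this is only an illustration:

    import time
    import requests

    def get_with_retry(url: str, retries: int = 3) -> requests.Response:
        resp = requests.get(url, timeout=30)
        for _ in range(retries):
            if resp.status_code != 500:
                break
            time.sleep(1)  # Comic Vine occasionally returns a transient 500
            resp = requests.get(url, timeout=30)
        return resp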
8e9fccdbbc removed line feed from prints
git-svn-id: http://comictagger.googlecode.com/svn/trunk@600 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-13 05:30:53 +00:00
39990fc2b4 Updated todo and release notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@599 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 17:56:15 +00:00
e8c315d834 parse scan info by default
git-svn-id: http://comictagger.googlecode.com/svn/trunk@598 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 17:55:38 +00:00
f8a06a8746 Make sure there is a default image URL if none exists
git-svn-id: http://comictagger.googlecode.com/svn/trunk@597 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 17:53:35 +00:00
9415087da7 removed debug print
git-svn-id: http://comictagger.googlecode.com/svn/trunk@596 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 17:52:43 +00:00
9aee5c32eb Made the description font a little smaller
git-svn-id: http://comictagger.googlecode.com/svn/trunk@595 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 17:52:23 +00:00
fcdb4a3889 cli option to assume issue number 1 if not found/parsed
git-svn-id: http://comictagger.googlecode.com/svn/trunk@594 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 06:11:25 +00:00
534a326258 Remember filelist sorting
git-svn-id: http://comictagger.googlecode.com/svn/trunk@593 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 06:10:50 +00:00
0390ff5919 Added option to parse scan info from filename
git-svn-id: http://comictagger.googlecode.com/svn/trunk@592 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 04:49:08 +00:00
b800ae1751 Added issue description to the match and issue selection dialogs
git-svn-id: http://comictagger.googlecode.com/svn/trunk@591 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 01:56:24 +00:00
a2c17982d3 Fixed the resizing with the splitter
git-svn-id: http://comictagger.googlecode.com/svn/trunk@590 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 01:55:59 +00:00
0347befae6 bumped version number
git-svn-id: http://comictagger.googlecode.com/svn/trunk@589 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-12 01:54:59 +00:00
af54b79790 Added cover date to issue selection dialog
git-svn-id: http://comictagger.googlecode.com/svn/trunk@588 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-11 01:57:19 +00:00
dd04ae98a0 Remove optimization for eliminating one-shots from consideration (not needed with new CV search method)
git-svn-id: http://comictagger.googlecode.com/svn/trunk@587 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-11 01:32:07 +00:00
31b76fba92 Make sure output data is set in the case of pages that don't need to be resized
git-svn-id: http://comictagger.googlecode.com/svn/trunk@586 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-10 20:46:59 +00:00
9f4a4b0eb0 More version checking stuff
git-svn-id: http://comictagger.googlecode.com/svn/trunk@585 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-06 19:31:00 +00:00
575a23c6bf More version checking stuff
git-svn-id: http://comictagger.googlecode.com/svn/trunk@584 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-06 19:30:01 +00:00
5d84f09359 Check online for new version
Use non-deprecated "read_file" for configparser

git-svn-id: http://comictagger.googlecode.com/svn/trunk@583 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-05 19:48:49 +00:00
3072583482 Normalize issue number for search
git-svn-id: http://comictagger.googlecode.com/svn/trunk@582 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-05 19:43:45 +00:00
8d867cf78a This file will be checked by the app to see if it should update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@581 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-04 19:42:10 +00:00
36c79b5a2a Twitter and facebook buttons
git-svn-id: http://comictagger.googlecode.com/svn/trunk@580 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-04 19:18:55 +00:00
dfdaf731b4 updated release notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@576 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-03 17:37:23 +00:00
67bff8586c Make sure start_year test is with all ints
git-svn-id: http://comictagger.googlecode.com/svn/trunk@575 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-03 00:34:55 +00:00
9e4cbea6e4 Made sure some prints are unicode
git-svn-id: http://comictagger.googlecode.com/svn/trunk@574 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-03 00:33:03 +00:00
d150b2ce54 made Auto-ID use the info already fetched from the 'issues' query for the image and page URLs (rather than use the cache or fetch again)
git-svn-id: http://comictagger.googlecode.com/svn/trunk@573 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 22:37:28 +00:00
a20949cc4d got rid of debug print
git-svn-id: http://comictagger.googlecode.com/svn/trunk@572 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 22:33:13 +00:00
e3fceb20a2 merged all the cover_date parsing into one function in CV talker
git-svn-id: http://comictagger.googlecode.com/svn/trunk@571 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 20:47:18 +00:00
f4e00d9ef3 bumped version
git-svn-id: http://comictagger.googlecode.com/svn/trunk@570 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 19:59:35 +00:00
1980bd5988 Added search across issues by volume id, issue number, and date for much faster matching
git-svn-id: http://comictagger.googlecode.com/svn/trunk@569 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 19:58:23 +00:00
db54affc74 Handle None cover_date
git-svn-id: http://comictagger.googlecode.com/svn/trunk@568 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 19:57:50 +00:00
0edb9444ef Nice twitter button for code page
git-svn-id: http://comictagger.googlecode.com/svn/trunk@567 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 16:42:49 +00:00
b22c25f53f Remove parsing of title. We're back to how it was before, except now we get 'none' instead of empty string.
git-svn-id: http://comictagger.googlecode.com/svn/trunk@566 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-04-02 14:11:00 +00:00
76e6666a79 Tweaks for dealing with unicode issue "number"
Updated release_notes


git-svn-id: http://comictagger.googlecode.com/svn/trunk@563 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-30 16:31:56 +00:00
a804a10e0e use unicode in case of weird things like "1/2" symbol
git-svn-id: http://comictagger.googlecode.com/svn/trunk@562 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-30 06:26:41 +00:00
fe413b12c1 Use issues filtered query to get issue list instead of deprecated volume.issues
git-svn-id: http://comictagger.googlecode.com/svn/trunk@561 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-30 06:25:04 +00:00
e38dc2f063 CV API changes: use cover_date instead of publish_month/year for issues, roles are now a list
bumped version

git-svn-id: http://comictagger.googlecode.com/svn/trunk@560 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-29 23:09:41 +00:00
5e5418090b Added resource types for comicvine requests
git-svn-id: http://comictagger.googlecode.com/svn/trunk@557 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-28 19:04:30 +00:00
56c1f8582a todo update
git-svn-id: http://comictagger.googlecode.com/svn/trunk@554 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 19:35:41 +00:00
00f8c0a280 removed typo
git-svn-id: http://comictagger.googlecode.com/svn/trunk@553 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 19:25:42 +00:00
1d915eb155 make sure issue number comparisons are case-normalized in case of alpha appendage
git-svn-id: http://comictagger.googlecode.com/svn/trunk@552 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 19:21:20 +00:00
b7b8060ef2 Fixed filename parsing to find "AU" issues
git-svn-id: http://comictagger.googlecode.com/svn/trunk@551 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 19:20:10 +00:00
2d190b076a Bumped version and notes
git-svn-id: http://comictagger.googlecode.com/svn/trunk@550 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 18:17:20 +00:00
cd92b1afea cleanup
git-svn-id: http://comictagger.googlecode.com/svn/trunk@549 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 17:58:05 +00:00
4d21a001d6 Fix the way sorting is done by issues
git-svn-id: http://comictagger.googlecode.com/svn/trunk@548 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 17:57:05 +00:00
4af59d2315 Handle changes to the ComicVine API and result sets
git-svn-id: http://comictagger.googlecode.com/svn/trunk@547 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 17:56:30 +00:00
c9c98b6c11 Handle if volume description is None
git-svn-id: http://comictagger.googlecode.com/svn/trunk@546 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-27 17:55:02 +00:00
1ff43db2ce Add-on for reducing page sizes in comics
git-svn-id: http://comictagger.googlecode.com/svn/trunk@545 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-24 17:45:10 +00:00
822f6b4729 0.1 issue gets special consideration as a "first" issue.
git-svn-id: http://comictagger.googlecode.com/svn/trunk@544 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-11 23:19:50 +00:00
44a8dc6815 Fixed flawed RE assumption when parsing issue number with # in front. Now properly handles issues with a decimal point
git-svn-id: http://comictagger.googlecode.com/svn/trunk@543 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-11 23:18:07 +00:00
a35576895c Removed warning about writing CBI to RAR since CBL supports it now. Yay!
git-svn-id: http://comictagger.googlecode.com/svn/trunk@542 6c5673fe-1810-88d6-992b-cd32ca31540c
2013-03-11 23:16:22 +00:00
198 changed files with 26353 additions and 14566 deletions

95
.github/workflows/build.yaml vendored Normal file

@ -0,0 +1,95 @@
name: CI
env:
LC_COLLATE: en_US.UTF-8
on:
pull_request:
push:
branches:
- '**'
tags-ignore:
- '**'
jobs:
lint:
permissions:
checks: write
contents: read
pull-requests: write
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.9]
os: [ubuntu-latest]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install build dependencies
run: |
python -m pip install flake8
- uses: reviewdog/action-setup@v1
with:
reviewdog_version: nightly
- run: flake8 | reviewdog -f=flake8 -reporter=github-pr-review -tee -level=error -fail-on-error
env:
REVIEWDOG_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
build-and-test:
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.9]
os: [ubuntu-latest, macos-11, windows-latest]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install tox
run: |
python -m pip install --upgrade --upgrade-strategy eager tox
- name: Install macos dependencies
run: |
brew upgrade icu4c pkg-config || brew install icu4c pkg-config
if: runner.os == 'macOS'
- name: Install linux dependencies
run: |
sudo apt-get update && sudo apt-get upgrade && sudo apt-get install pkg-config libicu-dev libqt5gui5 libfuse2
if: runner.os == 'Linux'
- name: Build and install PyPi packages
run: |
export PKG_CONFIG_PATH="/usr/local/opt/icu4c/lib/pkgconfig:/opt/homebrew/opt/icu4c/lib/pkgconfig${PKG_CONFIG_PATH+:$PKG_CONFIG_PATH}";
export PATH="/usr/local/opt/icu4c/bin:/usr/local/opt/icu4c/sbin${PATH+:$PATH}"
python -m tox r -m build
shell: bash
- name: Archive production artifacts
uses: actions/upload-artifact@v3
with:
name: "${{ format('ComicTagger-{0}', runner.os) }}"
path: |
dist/*.zip
dist/*.tar.gz
dist/*.dmg
dist/*.AppImage
- name: PyTest
run: |
python -m tox r

43
.github/workflows/contributions.yaml vendored Normal file

@ -0,0 +1,43 @@
name: Contributions
on:
push:
branches:
- 'develop'
tags-ignore:
- '**'
jobs:
contrib-readme-job:
permissions:
contents: write
runs-on: ubuntu-latest
env:
CI_COMMIT_AUTHOR: github-actions[bot]
CI_COMMIT_EMAIL: <41898282+github-actions[bot]@users.noreply.github.com>
CI_COMMIT_MESSAGE: Update AUTHORS
name: A job to automate contrib in readme
steps:
- name: Contribute List
uses: akhilmhdh/contributors-readme-action@v2.3.6
with:
use_username: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Update AUTHORS
run: |
git config --global log.mailmap true
git log --reverse '--format=%aN <%aE>' | cat -n | sort -uk2 | sort -n | cut -f2- >AUTHORS
- name: Commit and push AUTHORS
run: |
if ! git diff --exit-code; then
git pull
git config --global user.name "${{ env.CI_COMMIT_AUTHOR }}"
git config --global user.email "${{ env.CI_COMMIT_EMAIL }}"
git commit -a -m "${{ env.CI_COMMIT_MESSAGE }}"
git push
fi

74
.github/workflows/package.yaml vendored Normal file

@ -0,0 +1,74 @@
name: Package
env:
LC_COLLATE: en_US.UTF-8
on:
push:
tags:
- "[0-9]+.[0-9]+.[0-9]+*"
jobs:
package:
permissions:
contents: write
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.9]
os: [ubuntu-latest, macos-11, windows-latest]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install tox
run: |
python -m pip install --upgrade --upgrade-strategy eager tox
- name: Install macos dependencies
run: |
brew upgrade && brew install icu4c pkg-config
if: runner.os == 'macOS'
- name: Install linux dependencies
run: |
sudo apt-get update && sudo apt-get upgrade && sudo apt-get install pkg-config libicu-dev libqt5gui5 libfuse2
if: runner.os == 'Linux'
- name: Build, Install and Test PyPi packages
run: |
export PKG_CONFIG_PATH="/usr/local/opt/icu4c/lib/pkgconfig:/opt/homebrew/opt/icu4c/lib/pkgconfig${PKG_CONFIG_PATH+:$PKG_CONFIG_PATH}";
export PATH="/usr/local/opt/icu4c/bin:/usr/local/opt/icu4c/sbin${PATH+:$PATH}"
python -m tox r
python -m tox r -m release
shell: bash
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
- name: Get release name
if: startsWith(github.ref, 'refs/tags/')
shell: bash
run: |
git fetch --depth=1 origin +refs/tags/*:refs/tags/* # github is dumb
echo "release_name=$(git tag -l --format "%(refname:strip=2): %(contents:lines=1)" ${{ github.ref_name }})" >> $GITHUB_ENV
- name: Release
uses: softprops/action-gh-release@v2
if: startsWith(github.ref, 'refs/tags/')
with:
name: "${{ env.release_name }}"
prerelease: "${{ contains(github.ref, '-') }}" # alpha-releases should be 1.3.0-alpha.x full releases should be 1.3.0
draft: false
# upload the single application zip file for each OS and include the wheel built on linux
files: |
dist/*.zip
dist/*.tar.gz
dist/*.dmg
dist/*${{ fromJSON('["never", ""]')[runner.os == 'Linux'] }}.whl
dist/*.AppImage

160
.gitignore vendored Normal file

@ -0,0 +1,160 @@
# generated by setuptools_scm
ctversion.py
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion
*.iml
## Directory-based project format:
.idea/
### Other editors
.*.swp
nbproject/
.vscode
comictaggerlib/_version.py
*.exe
*.zip
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# for testing
temp/

9
.mailmap Normal file

@ -0,0 +1,9 @@
Andrew W. Buchanan <buchanan@difference.com>
Davide Romanini <d.romanini@cineca.it> <davide.romanini@gmail.com>
Davide Romanini <d.romanini@cineca.it> <user159033@92-63-141-211.rdns.melbourne.co.uk>
Michael Fitzurka <MichaelFitzurka@users.noreply.github.com> <MichaelFitzurka@github.com>
Timmy Welch <timmy@narnian.us>
beville <beville@users.noreply.github.com> <(no author)@6c5673fe-1810-88d6-992b-cd32ca31540c>
beville <beville@users.noreply.github.com> <beville@6c5673fe-1810-88d6-992b-cd32ca31540c>
beville <beville@users.noreply.github.com> <beville@gmail.com@6c5673fe-1810-88d6-992b-cd32ca31540c>
beville <beville@users.noreply.github.com> <beville@users.noreply.github.com>

46
.pre-commit-config.yaml Normal file

@ -0,0 +1,46 @@
exclude: ^scripts
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: debug-statements
- id: name-tests-test
- id: requirements-txt-fixer
- repo: https://github.com/asottile/setup-cfg-fmt
rev: v2.5.0
hooks:
- id: setup-cfg-fmt
- repo: https://github.com/asottile/pyupgrade
rev: v3.15.2
hooks:
- id: pyupgrade
args: [--py39-plus]
- repo: https://github.com/PyCQA/autoflake
rev: v2.3.1
hooks:
- id: autoflake
args: [-i, --remove-all-unused-imports, --ignore-init-module-imports]
- repo: https://github.com/PyCQA/isort
rev: 5.13.2
hooks:
- id: isort
args: [--af,--add-import, 'from __future__ import annotations']
- repo: https://github.com/psf/black
rev: 24.3.0
hooks:
- id: black
- repo: https://github.com/PyCQA/flake8
rev: 7.0.0
hooks:
- id: flake8
additional_dependencies: [flake8-encodings, flake8-builtins, flake8-length, flake8-print, flake8-no-nested-comprehensions]
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.9.0
hooks:
- id: mypy
additional_dependencies: [types-setuptools, types-requests, settngs>=0.10.0]
ci:
skip: [mypy]

18
AUTHORS Normal file

@ -0,0 +1,18 @@
beville <beville@users.noreply.github.com>
Davide Romanini <d.romanini@cineca.it>
fcanc <f.canc@icloud.com>
Alban Seurat <alkpone@alkpone.com>
tlc <tlc@users.noreply.github.com>
Marek Pawlak <francuz14@gmail.com>
Timmy Welch <timmy@narnian.us>
J.P. Cranford <philipcranford4@gmail.com>
thFrgttn <39759781+thFrgttn@users.noreply.github.com>
Andrew W. Buchanan <buchanan@difference.com>
Michael Fitzurka <MichaelFitzurka@users.noreply.github.com>
Richard Haussmann <richard.haussmann@gmail.com>
Mizaki <jinxybob@hotmail.com>
Xavier Jouvenot <x.jouvenot@gmail.com>
github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Ben Longman <deck@steamdeck.lan>
Sven Hesse <drmccoy@drmccoy.de>
pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

98
CONTRIBUTING.md Normal file

@ -0,0 +1,98 @@
# How to contribute
If you're not sure what you can do, need to ask a question, or just want to talk about ComicTagger, head over to the [discussions tab](https://github.com/comictagger/comictagger/discussions/categories/general) and start a discussion.
## Tests
We have tests written using pytest! Some of them even pass! If you are contributing code, any tests you can write are appreciated.
A great place to start is extending the tests that are already made.
For example, the file tests/filenames.py has lists of filenames to be parsed, in the format:
```py
pytest.param(
"Star Wars - War of the Bounty Hunters - IG-88 (2021) (Digital) (Kileko-Empire).cbz",
"number ends series, no-issue",
{
"issue": "",
"series": "Star Wars - War of the Bounty Hunters - IG-88",
"volume": "",
"year": "2021",
"remainder": "(Digital) (Kileko-Empire)",
"issue_count": "",
},
marks=pytest.mark.xfail,
)
```
A test consists of 3-4 parts (a minimal sketch of a passing case follows this list):
1. The filename to be parsed
2. The reason it might fail
3. What the result of parsing the filename should be
4. `marks=pytest.mark.xfail`: this marks the test as expected to fail
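As an illustrative sketch only (the filename and expected fields below are hypothetical, not taken from the real test data), a passing case looks the same but simply omits the xfail marker:
```py
import pytest

pytest.param(
    # Hypothetical filename, used only to show the shape of a passing case
    "Example Series 002 (2020) (Digital) (SomeGroup).cbz",
    "series, issue and year",  # why this case exists / how it might fail
    {
        "issue": "2",
        "series": "Example Series",
        "volume": "",
        "year": "2020",
        "remainder": "(Digital) (SomeGroup)",
        "issue_count": "",
    },
    # no marks=pytest.mark.xfail, so the parser is expected to get this right
)
```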
If you are not comfortable creating a pull request, you can [open an issue](https://github.com/comictagger/comictagger/issues/new/choose) or [start a discussion](https://github.com/comictagger/comictagger/discussions/new)
## Submitting changes
Please open a [GitHub Pull Request](https://github.com/comictagger/comictagger/pull/new/develop) with a clear list of what you've done (read more about [pull requests](http://help.github.com/pull-requests/)). When you send a pull request, we will love you forever if you include tests. We can always use more test coverage. Please run the code tools below and make sure all of your commits are atomic (one feature per commit).
## Contributing Code
Currently only Python 3.9 is supported; however, 3.10 will probably work if you try it.
Those on Linux should install `Pillow` from the system package manager if possible, and if using the GUI, `pyqt5` should also be installed from the system package manager.
Those on macOS will need to ensure that they are using python3 in x86 mode, either by installing an x86-only version of Python or by using the universal installer and running `python3-intel64` instead of `python3`.
1. Clone the repository
```
git clone https://github.com/comictagger/comictagger.git
```
2. It is preferred to use a virtual env for running from source:
```
python3 -m venv venv
```
3. Activate the virtual env:
```
. venv/bin/activate
```
or, if on Windows PowerShell:
```
. venv\Scripts\Activate.ps1
```
4. Install tox:
```bash
pip install tox
```
5. If you are on an M1 Mac, you will need to export two environment variables for tests to pass:
```
export tox_python=python3.9-intel64
export tox_env=m1env
```
6. Install ComicTagger:
```
tox run -e venv
```
7. Make your changes
8. Build to ensure that your changes work: this will produce a binary build in the dist folder
```bash
tox run -m build
```
The build runs these formatters and linters automatically:
* setup-cfg-fmt: formats the setup.cfg file
* autoflake: removes unused imports
* isort: sorts imports so that you can always find where an import is located
* black: formats all of the code consistently so there are no surprises
* flake8: checks for code quality and style (warns for unused imports and similar issues)
* mypy: checks the types of variables and functions to catch errors
* pytest: runs tests for ComicTagger functionality
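These tools are also wired up in the pre-commit configuration added in this change set (`.pre-commit-config.yaml`), so a convenient, if optional, way to run most of the same formatters and linters locally is via the pre-commit CLI (assuming it is not already installed):
```bash
# Install the pre-commit tool and run every configured hook against the whole tree.
# The tox build described above remains the canonical way to run everything,
# including the tests; this is just a local shortcut for the style checks.
pip install pre-commit
pre-commit run --all-files
```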

202
LICENSE Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -1,4 +0,0 @@
include README.txt
include release_notes.txt
include requirements.txt
recursive-include scripts *.py *.txt


@ -1,60 +0,0 @@
TAGGER_BASE := $(HOME)/Dropbox/tagger/comictagger
TAGGER_SRC := $(TAGGER_BASE)/comictaggerlib
VERSION_STR := $(shell grep version $(TAGGER_SRC)/ctversion.py| cut -d= -f2 | sed 's/\"//g')
PASSWORD := $(shell cat $(TAGGER_BASE)/project_password.txt)
UPLOAD_TOOL := $(TAGGER_BASE)/google/googlecode_upload.py
all: clean
clean:
rm -rf *~ *.pyc *.pyo
rm -rf scripts/*.pyc
cd comictaggerlib; rm -f *~ *.pyc *.pyo
rm -rf dist MANIFEST
rm -rf *.deb
rm -rf logdict*.log
make -C mac clean
make -C windows clean
rm -rf build
pydist:
mkdir -p release
rm -f release/*.zip
python setup.py sdist --formats=zip #,gztar
mv dist/comictagger-$(VERSION_STR).zip release
@echo When satisfied with release, do this:
@echo make svn_tag
remove_test_install:
sudo rm -rf /usr/local/bin/comictagger.py
sudo rm -rf /usr/local/lib/python2.7/dist-packages/comictagger*
#deb:
# fpm -s python -t deb \
# -n 'comictagger' \
# --category 'utilities' \
# --maintainer 'comictagger@gmail.com' \
# --after-install debian_scripts/after_install.sh \
# --before-remove debian_scripts/before_remove.sh \
# -d 'python >= 2.6' \
# -d 'python < 2.8' \
# -d 'python-imaging' \
# -d 'python-bs4' \
# --deb-suggests 'rar' \
# --deb-suggests 'unrar-free' \
# --python-install-bin /usr/share/comictagger \
# --python-install-lib /usr/share/comictagger \
# setup.py
#
# # For now, don't require PyQt, since command-line is available without it
# #-d 'python-qt4 >= 4.8'
upload:
$(UPLOAD_TOOL) -p comictagger -s "ComicTagger $(VERSION_STR) Source" -l Featured,Type-Source -u beville -w $(PASSWORD) "release/comictagger-$(VERSION_STR).zip"
$(UPLOAD_TOOL) -p comictagger -s "ComicTagger $(VERSION_STR) Mac OS X" -l Featured,Type-Archive -u beville -w $(PASSWORD) "release/ComicTagger-$(VERSION_STR).dmg"
$(UPLOAD_TOOL) -p comictagger -s "ComicTagger $(VERSION_STR) Windows" -l Featured,Type-Installer -u beville -w $(PASSWORD) "release/ComicTagger v$(VERSION_STR).exe"
python setup.py register
svn_tag:
svn copy https://comictagger.googlecode.com/svn/trunk \
https://comictagger.googlecode.com/svn/tags/$(VERSION_STR) -m "Release $(VERSION_STR)"

185
README.md Normal file

@ -0,0 +1,185 @@
[![CI](https://github.com/comictagger/comictagger/actions/workflows/build.yaml/badge.svg?branch=develop&event=push)](https://github.com/comictagger/comictagger/actions/workflows/build.yaml)
[![GitHub release (latest by date)](https://img.shields.io/github/downloads/comictagger/comictagger/latest/total)](https://github.com/comictagger/comictagger/releases/latest)
[![PyPI](https://img.shields.io/pypi/v/comictagger)](https://pypi.org/project/comictagger/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/comictagger)](https://pypistats.org/packages/comictagger)
[![Chocolatey package](https://img.shields.io/chocolatey/dt/comictagger?color=blue&label=chocolatey)](https://community.chocolatey.org/packages/comictagger)
[![PyPI - License](https://img.shields.io/pypi/l/comictagger)](https://opensource.org/licenses/Apache-2.0)
[![GitHub Discussions](https://img.shields.io/github/discussions/comictagger/comictagger)](https://github.com/comictagger/comictagger/discussions)
[![Gitter chat](https://badges.gitter.im/gitterHQ/gitter.png)](https://gitter.im/comictagger/community)
[![Google Group](https://img.shields.io/badge/discuss-on%20groups-%23207de5)](https://groups.google.com/forum/#!forum/comictagger)
[![Twitter](https://img.shields.io/badge/%40comictagger-twitter-lightgrey)](https://twitter.com/comictagger)
[![Facebook](https://img.shields.io/badge/comictagger-facebook-lightgrey)](https://www.facebook.com/ComicTagger-139615369550787/)
# ComicTagger
ComicTagger is a **multi-platform** app for **writing metadata to digital comics**, written in Python and PyQt.
![ComicTagger logo](https://raw.githubusercontent.com/comictagger/comictagger/develop/comictaggerlib/graphics/app.png)
## Features
* Runs on macOS, Microsoft Windows, and Linux systems
* Get comic information from [Comic Vine](https://comicvine.gamespot.com/)
* **Automatic issue matching** using advanced image processing techniques
* **Batch processing** in the GUI for tagging hundreds or more comics at a time
* Support for **ComicRack** and **ComicBookLover** tagging formats
* Native full support for **CBZ** digital comics
* Native read-only support for **CBR** digital comics: full support can be enabled by installing additional [rar tools](https://www.rarlab.com/download.htm)
* Command line interface (CLI) enabling **custom scripting** and **batch operations on large collections**
For details, screen-shots, and more, visit [the Wiki](https://github.com/comictagger/comictagger/wiki)
## Installation
### Binaries
Windows, Linux and macOS binaries are provided on the [Releases Page](https://github.com/comictagger/comictagger/releases).
Just unzip the archive in any folder and run; no additional installation steps are required.
### PIP installation
A pip package is provided; you can install it with:
```
$ pip3 install comictagger[GUI]
```
There are optional dependencies. You can install the optional dependencies by specifying one or more of them in brackets, e.g. `comictagger[CBR,GUI]` (see the example after this list)
Optional dependencies:
1. `ICU`: Ensures that comic pages are supported correctly. This should always be installed. *Currently only exists in the latest alpha release*
1. `CBR`: Provides support for CBR/RAR files.
1. `GUI`: Installs the GUI.
1. `7Z`: Provides support for CB7/7Z files.
1. `all`: Installs all of the above optional dependencies.
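For example, to combine several of the extras listed above in one install, or to pull in everything at once:
```
$ pip3 install comictagger[CBR,GUI,ICU]
$ pip3 install comictagger[all]
```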
### Chocolatey installation (Windows only)
A [Chocolatey package](https://community.chocolatey.org/packages/comictagger), maintained by @Xav83, is provided; you can install it with:
```powershell
choco install comictagger
```
### From source
1. Ensure you have python 3.9 installed
2. Clone this repository `git clone https://github.com/comictagger/comictagger.git`
3. `pip3 install .[ICU]` or `pip3 install .[GUI,ICU]`
## Contributors
<!-- readme: beville,davide-romanini,collaborators,contributors -start -->
<table>
<tr>
<td align="center">
<a href="https://github.com/beville">
<img src="https://avatars.githubusercontent.com/u/7294848?v=4" width="100;" alt="beville"/>
<br />
<sub><b>beville</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/davide-romanini">
<img src="https://avatars.githubusercontent.com/u/731199?v=4" width="100;" alt="davide-romanini"/>
<br />
<sub><b>davide-romanini</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/fcanc">
<img src="https://avatars.githubusercontent.com/u/4999486?v=4" width="100;" alt="fcanc"/>
<br />
<sub><b>fcanc</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/lordwelch">
<img src="https://avatars.githubusercontent.com/u/7547075?v=4" width="100;" alt="lordwelch"/>
<br />
<sub><b>lordwelch</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/mizaki">
<img src="https://avatars.githubusercontent.com/u/1141189?v=4" width="100;" alt="mizaki"/>
<br />
<sub><b>mizaki</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/MichaelFitzurka">
<img src="https://avatars.githubusercontent.com/u/27830765?v=4" width="100;" alt="MichaelFitzurka"/>
<br />
<sub><b>MichaelFitzurka</b></sub>
</a>
</td></tr>
<tr>
<td align="center">
<a href="https://github.com/abuchanan920">
<img src="https://avatars.githubusercontent.com/u/368793?v=4" width="100;" alt="abuchanan920"/>
<br />
<sub><b>abuchanan920</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/AlbanSeurat">
<img src="https://avatars.githubusercontent.com/u/500180?v=4" width="100;" alt="AlbanSeurat"/>
<br />
<sub><b>AlbanSeurat</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/rhaussmann">
<img src="https://avatars.githubusercontent.com/u/7084007?v=4" width="100;" alt="rhaussmann"/>
<br />
<sub><b>rhaussmann</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/jpcranford">
<img src="https://avatars.githubusercontent.com/u/21347202?v=4" width="100;" alt="jpcranford"/>
<br />
<sub><b>jpcranford</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/PawlakMarek">
<img src="https://avatars.githubusercontent.com/u/26022173?v=4" width="100;" alt="PawlakMarek"/>
<br />
<sub><b>PawlakMarek</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/DrMcCoy">
<img src="https://avatars.githubusercontent.com/u/156130?v=4" width="100;" alt="DrMcCoy"/>
<br />
<sub><b>DrMcCoy</b></sub>
</a>
</td></tr>
<tr>
<td align="center">
<a href="https://github.com/Xav83">
<img src="https://avatars.githubusercontent.com/u/6787157?v=4" width="100;" alt="Xav83"/>
<br />
<sub><b>Xav83</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/thFrgttn">
<img src="https://avatars.githubusercontent.com/u/39759781?v=4" width="100;" alt="thFrgttn"/>
<br />
<sub><b>thFrgttn</b></sub>
</a>
</td>
<td align="center">
<a href="https://github.com/tlc">
<img src="https://avatars.githubusercontent.com/u/19436?v=4" width="100;" alt="tlc"/>
<br />
<sub><b>tlc</b></sub>
</a>
</td></tr>
</table>
<!-- readme: beville,davide-romanini,collaborators,contributors -end -->


@ -1,31 +0,0 @@
ComicTagger is a multi-platform app for writing metadata to comic archives, written in Python and PyQt.
Features:
* Runs on Mac OSX, Microsoft Windows, and Linux systems
* Communicates with an online database (Comic Vine) for acquiring metadata
* Uses image processing to automatically match a given archive with the correct issue data
* Batch processing in the GUI for tagging hundreds or more comics at a time
* Reads and writes multiple tagging schemes ( ComicBookLover and ComicRack, with more planned).
* Reads and writes RAR and Zip archives (external tools needed for writing RAR)
* Command line interface (CLI) on all platforms (including Windows), which supports batch operations, and which can be used in native scripts for complex operations. For example, to recursively scrape and tag all archives in a folder:
comictagger.py -R -s -o -f -t cr -v -i --nooverwrite /path/to/comics/
For details, screenshots, release notes, and more, visit http://code.google.com/p/comictagger/
Requires:
* python 2.6 or 2.7
* configparser
* python imaging (PIL) >= 1.1.6
* beautifulsoup > 4.1
Optional requirement (for GUI):
* pyqt4
Install and run:
* ComicTagger can be run directly from this directory, using the launcher script "comictagger.py"
* To install on your system use: "python setup.py install". Take note in the output where comictagger.py goes!


@ -0,0 +1,11 @@
[Desktop Entry]
Encoding=UTF-8
Name=ComicTagger
GenericName=Comic Metadata Editor
Comment=A cross-platform GUI/CLI app for writing metadata to comic archives
Exec=comictagger %F
Icon=/usr/local/share/comictagger/app.png
Terminal=false
Type=Application
MimeType=text/plain;
Categories=Application;


@ -0,0 +1,241 @@
# -*- mode: python ; coding: utf-8 -*-
import platform
from comictaggerlib import ctversion
enable_console = False
block_cipher = None
a = Analysis(
["../comictaggerlib/__main__.py"],
pathex=[],
binaries=[],
datas=[],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
exe_binaries = []
exe_zipfiles = []
exe_datas = []
exe_exclude_binaries = True
coll_binaries = a.binaries
coll_zipfiles = a.zipfiles
coll_datas = a.datas
if platform.system() in ["Windows"]:
enable_console = True
exe_binaries = a.binaries
exe_zipfiles = a.zipfiles
exe_datas = a.datas
exe_exclude_binaries = False
coll_binaries = []
coll_zipfiles = []
coll_datas = []
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
exe_binaries,
exe_zipfiles,
exe_datas,
[],
exclude_binaries=exe_exclude_binaries,
name="comictagger",
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=enable_console,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
icon="windows/app.ico",
)
if platform.system() not in ["Windows"]:
coll = COLLECT(
exe,
coll_binaries,
coll_zipfiles,
coll_datas,
strip=False,
upx=True,
upx_exclude=[],
name="comictagger",
)
app = BUNDLE(
coll,
name="ComicTagger.app",
icon="mac/app.icns",
info_plist={
"NSHighResolutionCapable": "True",
"NSPrincipalClass": "NSApplication",
"NSRequiresAquaSystemAppearance": "False",
"CFBundleDisplayName": "ComicTagger",
"CFBundleShortVersionString": ctversion.version,
"CFBundleVersion": ctversion.version,
"CFBundleDocumentTypes": [
{
"CFBundleTypeRole": "Editor",
"LSHandlerRank": "Default",
"LSItemContentTypes": [
"public.folder",
],
"CFBundleTypeName": "Folder",
},
{
"CFBundleTypeExtensions": [
"cbz",
],
"LSTypeIsPackage": False,
"NSPersistentStoreTypeKey": "Binary",
"CFBundleTypeIconSystemGenerated": True,
"CFBundleTypeName": "ZIP Comic Archive",
"LSItemContentTypes": [
"public.zip-comic-archive",
"com.simplecomic.cbz-archive",
"com.macitbetter.cbz-archive",
"public.cbz-archive",
"cx.c3.cbz-archive",
"com.yacreader.yacreader.cbz",
"com.milke.cbz-archive",
"com.bitcartel.comicbooklover.cbz",
"public.archive.cbz",
"public.zip-archive",
],
"CFBundleTypeRole": "Editor",
"LSHandlerRank": "Default",
},
{
"CFBundleTypeExtensions": [
"cb7",
],
"LSTypeIsPackage": False,
"NSPersistentStoreTypeKey": "Binary",
"CFBundleTypeIconSystemGenerated": True,
"CFBundleTypeName": "7-Zip Comic Archive",
"LSItemContentTypes": [
"org.7-zip.7-zip-archive",
"com.simplecomic.cb7-archive",
"public.cb7-archive",
"com.macitbetter.cb7-archive",
"cx.c3.cb7-archive",
"org.7-zip.7-zip-comic-archive",
],
"CFBundleTypeRole": "Editor",
"LSHandlerRank": "Default",
},
{
"CFBundleTypeExtensions": [
"cbr",
],
"LSTypeIsPackage": False,
"NSPersistentStoreTypeKey": "Binary",
"CFBundleTypeIconSystemGenerated": True,
"CFBundleTypeName": "RAR Comic Archive",
"LSItemContentTypes": [
"com.rarlab.rar-archive",
"com.rarlab.rar-comic-archive",
"com.simplecomic.cbr-archive",
"com.macitbetter.cbr-archive",
"public.cbr-archive",
"cx.c3.cbr-archive",
"com.bitcartel.comicbooklover.cbr",
"com.milke.cbr-archive",
"public.archive.cbr",
"com.yacreader.yacreader.cbr",
],
"CFBundleTypeRole": "Editor",
"LSHandlerRank": "Default",
},
],
"UTImportedTypeDeclarations": [
{
"UTTypeIdentifier": "com.rarlab.rar-archive",
"UTTypeDescription": "RAR Archive",
"UTTypeConformsTo": [
"public.data",
"public.archive",
],
"UTTypeTagSpecification": {
"public.mime-type": [
"application/x-rar",
"application/x-rar-compressed",
],
"public.filename-extension": [
"rar",
],
},
},
{
"UTTypeConformsTo": [
"public.data",
"public.archive",
"com.rarlab.rar-archive",
],
"UTTypeIdentifier": "com.rarlab.rar-comic-archive",
"UTTypeDescription": "RAR Comic Archive",
"UTTypeTagSpecification": {
"public.mime-type": [
"application/vnd.comicbook-rar",
"application/x-cbr",
],
"public.filename-extension": [
"cbr",
],
},
},
{
"UTTypeConformsTo": [
"public.data",
"public.archive",
"public.zip-archive",
],
"UTTypeIdentifier": "public.zip-comic-archive",
"UTTypeDescription": "ZIP Comic Archive",
"UTTypeTagSpecification": {
"public.filename-extension": [
"cbz",
],
},
},
{
"UTTypeConformsTo": [
"public.data",
"public.archive",
"org.7-zip.7-zip-archive",
],
"UTTypeIdentifier": "org.7-zip.7-zip-comic-archive",
"UTTypeDescription": "7-Zip Comic Archive",
"UTTypeTagSpecification": {
"public.mime-type": [
"application/vnd.comicbook+7-zip",
"application/x-cb7-compressed",
],
"public.filename-extension": [
"cb7",
],
},
},
],
},
bundle_identifier="com.comictagger",
)

24
build-tools/dmgbuild.conf Normal file

@ -0,0 +1,24 @@
import pathlib
import platform
from comictaggerlib.ctversion import __version__
app = "ComicTagger"
exe = app.casefold()
ver = platform.mac_ver()
os_version = f"osx-{ver[0]}-{ver[2]}"
app_name = f"{app}.app"
final_name = f"{app}-{__version__}-{os_version}"
path = pathlib.Path(f"dist/{app_name}")
zip_file = pathlib.Path(f"dist/{final_name}.zip")
format = 'ULMO'
files = (str(path),)
symlinks = {'Applications': '/Applications'}
icon = pathlib.Path().cwd() / 'build-tools' / 'mac' / 'volume.icns'
icon_locations = {
app_name: (100, 100),
'Applications': (300, 100)
}


@ -0,0 +1,24 @@
from __future__ import annotations
import pathlib
import settngs
import comictaggerlib.main
def generate() -> str:
app = comictaggerlib.main.App()
app.load_plugins(app.initial_arg_parser.parse_known_args()[0])
app.register_settings()
imports, types = settngs.generate_dict(app.manager.definitions)
imports2, types2 = settngs.generate_ns(app.manager.definitions)
i = imports.splitlines()
i.extend(set(imports2.splitlines()) - set(i))
return "\n\n".join(("\n".join(i), types2, types))
if __name__ == "__main__":
src = generate()
pathlib.Path("./comictaggerlib/ctsettings/settngs_namespace.py").write_text(src)
print(src, end="")


@ -0,0 +1,33 @@
from __future__ import annotations
import argparse
import os
import pathlib
import stat
import requests
parser = argparse.ArgumentParser()
parser.add_argument("APPIMAGETOOL", default="build/appimagetool-x86_64.AppImage", type=pathlib.Path, nargs="?")
opts = parser.parse_args()
opts.APPIMAGETOOL = opts.APPIMAGETOOL.absolute()
def urlretrieve(url: str, dest: pathlib.Path) -> None:
resp = requests.get(url)
if resp.status_code == 200:
dest.parent.mkdir(parents=True, exist_ok=True)
dest.write_bytes(resp.content)
if opts.APPIMAGETOOL.exists():
raise SystemExit(0)
urlretrieve(
"https://github.com/AppImage/AppImageKit/releases/latest/download/appimagetool-x86_64.AppImage", opts.APPIMAGETOOL
)
os.chmod(opts.APPIMAGETOOL, stat.S_IRWXU)
if not opts.APPIMAGETOOL.exists():
raise SystemExit(1)


@ -1,35 +1,26 @@
#PYINSTALLER_CMD := VERSIONER_PYTHON_PREFER_32_BIT=yes arch -i386 python $(HOME)/pyinstaller-2.0/pyinstaller.py
PYINSTALLER_CMD := python $(HOME)/pyinstaller-2.0/pyinstaller.py
TAGGER_BASE := $(HOME)/Dropbox/tagger/comictagger
PYINSTALLER_CMD := pyinstaller
TAGGER_BASE ?= ../
TAGGER_SRC := $(TAGGER_BASE)/comictaggerlib
APP_NAME := ComicTagger
VERSION_STR := $(shell grep version $(TAGGER_SRC)/ctversion.py| cut -d= -f2 | sed 's/\"//g')
VERSION_STR := $(shell cd .. && python setup.py --version)
MAC_BASE := $(TAGGER_BASE)/mac
DIST_DIR := $(MAC_BASE)/dist
STAGING := $(MAC_BASE)/$(APP_NAME)
APP_BUNDLE := $(DIST_DIR)/$(APP_NAME).app
VOLUME_NAME := $(APP_NAME)-$(VERSION_STR)
VOLUME_NAME := "$(APP_NAME)-$(VERSION_STR)"
DMG_FILE := $(VOLUME_NAME).dmg
all: clean dist diskimage
dist:
$(PYINSTALLER_CMD) $(TAGGER_BASE)/comictagger.py -o $(MAC_BASE) -w -n $(APP_NAME) -s
$(PYINSTALLER_CMD) $(TAGGER_BASE)/comictagger.py -w -n $(APP_NAME) -s
cp -a $(TAGGER_SRC)/ui $(APP_BUNDLE)/Contents/MacOS
cp -a $(TAGGER_SRC)/graphics $(APP_BUNDLE)/Contents/MacOS
cp $(MAC_BASE)/app.icns $(APP_BUNDLE)/Contents/Resources/icon-windowed.icns
# fix the version string in the Info.plist
sed -i -e 's/0\.0\.0/$(VERSION_STR)/' $(MAC_BASE)/dist/ComicTagger.app/Contents/Info.plist
# strip out PPC/x64
#./make_thin.sh dist/ComicTagger.app/Contents/MacOS
#./make_thin.sh dist/ComicTagger.app/Contents/MacOS/qt4_plugins/accessible
#./make_thin.sh dist/ComicTagger.app/Contents/MacOS/qt4_plugins/bearer
#./make_thin.sh dist/ComicTagger.app/Contents/MacOS/qt4_plugins/codecs
#./make_thin.sh dist/ComicTagger.app/Contents/MacOS/qt4_plugins/graphicssystems
#./make_thin.sh dist/ComicTagger.app/Contents/MacOS/qt4_plugins/iconengines
#./make_thin.sh dist/ComicTagger.app/Contents/MacOS/qt4_plugins/imageformats
clean:
rm -rf $(DIST_DIR) $(MAC_BASE)/build
@ -39,7 +30,7 @@ clean:
rm -f raw*.dmg
echo $(VERSION_STR)
diskimage:
#Set up disk image staging folder
# Set up disk image staging folder
rm -rf $(STAGING)
mkdir $(STAGING)
cp $(TAGGER_BASE)/release_notes.txt $(STAGING)
@ -48,28 +39,27 @@ diskimage:
cp $(MAC_BASE)/volume.icns $(STAGING)/.VolumeIcon.icns
SetFile -c icnC $(STAGING)/.VolumeIcon.icns
##generate raw disk image
# generate raw disk image
rm -f $(DMG_FILE)
hdiutil create -srcfolder $(STAGING) -volname $(VOLUME_NAME) -format UDRW -ov raw-$(DMG_FILE)
hdiutil create -srcfolder $(STAGING) -volname $(VOLUME_NAME) -format UDRW -ov raw-$(DMG_FILE)
#remove working files and folders
# remove working files and folders
rm -rf $(STAGING)
# we now have a raw DMG file.
# remount it so we can set the volume icon properly
mkdir -p $(STAGING)
hdiutil attach raw-$(DMG_FILE) -mountpoint $(STAGING)
SetFile -a C $(STAGING)
hdiutil detach $(STAGING)
rm -rf $(STAGING)
# convert the raw image
rm -f $(DMG_FILE)
hdiutil convert raw-$(DMG_FILE) -format UDZO -o $(DMG_FILE)
rm -f raw-$(DMG_FILE)
#move finished product to release folder
# move finished product to release folder
mkdir -p $(TAGGER_BASE)/release
mv $(DMG_FILE) $(TAGGER_BASE)/release


@ -8,12 +8,12 @@ do
then
echo "Fat Binary: $FILE"
mkdir -p thin
lipo -thin i386 -output thin/$FILE $BINFOLDER/$FILE
lipo -thin i386 -output thin/$FILE $BINFOLDER/$FILE
fi
done
if [ -d thin ]
then
then
mv thin/* $BINFOLDER
else
echo No files to lipo



@ -0,0 +1,88 @@
from __future__ import annotations
import os
import pathlib
import platform
import sys
import tarfile
import zipfile
from comictaggerlib.ctversion import __version__
def addToZip(zf: zipfile.ZipFile, path: str, zippath: str) -> None:
if os.path.isfile(path):
zf.write(path, zippath)
elif os.path.isdir(path):
if zippath:
zf.write(path, zippath)
for nm in sorted(os.listdir(path)):
addToZip(zf, os.path.join(path, nm), os.path.join(zippath, nm))
def Zip(zip_file: pathlib.Path, path: pathlib.Path) -> None:
zip_file.unlink(missing_ok=True)
with zipfile.ZipFile(f"{zip_file}.zip", "w", compression=zipfile.ZIP_DEFLATED, compresslevel=8) as zf:
zippath = os.path.basename(path)
if not zippath:
zippath = os.path.basename(os.path.dirname(path))
if zippath in ("", os.curdir, os.pardir):
zippath = ""
addToZip(zf, str(path), zippath)
def addToTar(tf: tarfile.TarFile, path: str, zippath: str) -> None:
if os.path.isfile(path):
tf.add(path, zippath)
elif os.path.isdir(path):
if zippath:
tf.add(path, zippath, recursive=False)
for nm in sorted(os.listdir(path)):
addToTar(tf, os.path.join(path, nm), os.path.join(zippath, nm))
def Tar(tar_file: pathlib.Path, path: pathlib.Path) -> None:
tar_file.unlink(missing_ok=True)
with tarfile.open(f"{tar_file}.tar.gz", "w:gz") as tf:
zippath = os.path.basename(path)
if not zippath:
zippath = os.path.basename(os.path.dirname(path))
if zippath in ("", os.curdir, os.pardir):
zippath = ""
addToTar(tf, str(path), zippath)
if __name__ == "__main__":
app = "ComicTagger"
exe = app.casefold()
if platform.system() == "Windows":
os_version = f"win-{platform.machine()}"
app_name = f"{exe}.exe"
final_name = f"{app}-{__version__}-{os_version}.exe"
elif platform.system() == "Darwin":
ver = platform.mac_ver()
os_version = f"osx-{ver[0]}-{ver[2]}"
app_name = f"{app}.app"
final_name = f"{app}-{__version__}-{os_version}"
else:
app_name = exe
final_name = f"ComicTagger-{__version__}-{platform.system()}"
path = pathlib.Path(f"dist/{app_name}")
zip_file = pathlib.Path(f"dist/{final_name}")
if platform.system() == "Darwin":
from dmgbuild.__main__ import main as dmg_main
sys.argv = [
"zip_artifacts",
"-s",
str(pathlib.Path(__file__).parent / "dmgbuild.conf"),
f"{app} {__version__}",
f"dist/{final_name}.dmg",
]
dmg_main()
elif platform.system() == "Windows":
Zip(zip_file, path)
else:
Tar(zip_file, path)

3
comicapi/__init__.py Normal file

@ -0,0 +1,3 @@
from __future__ import annotations
__author__ = "dromanin"


@ -0,0 +1,7 @@
from __future__ import annotations
import os
def get_hook_dirs() -> list[str]:
return [os.path.dirname(__file__)]


@ -0,0 +1,10 @@
from __future__ import annotations
from PyInstaller.utils.hooks import collect_data_files, collect_entry_point
datas, hiddenimports = collect_entry_point("comicapi.archiver")
mdatas, mhiddenimports = collect_entry_point("comicapi.metadata")
hiddenimports += mhiddenimports
datas += mdatas
datas += collect_data_files("comicapi.data")

468
comicapi/_url.py Normal file

@ -0,0 +1,468 @@
# mypy: disable-error-code="no-redef"
from __future__ import annotations
try:
from urllib3.exceptions import HTTPError, LocationParseError, LocationValueError
from urllib3.util import Url, parse_url
except ImportError:
import re
import typing
class HTTPError(Exception):
"""Base exception used by this module."""
class LocationValueError(ValueError, HTTPError):
"""Raised when there is something wrong with a given URL input."""
class LocationParseError(LocationValueError):
"""Raised when get_host or similar fails to parse the URL input."""
def __init__(self, location: str) -> None:
message = f"Failed to parse: {location}"
super().__init__(message)
self.location = location
def to_str(x: str | bytes, encoding: str | None = None, errors: str | None = None) -> str:
if isinstance(x, str):
return x
elif not isinstance(x, bytes):
raise TypeError(f"not expecting type {type(x).__name__}")
if encoding or errors:
return x.decode(encoding or "utf-8", errors=errors or "strict")
return x.decode()
# We only want to normalize urls with an HTTP(S) scheme.
# urllib3 infers URLs without a scheme (None) to be http.
_NORMALIZABLE_SCHEMES = ("http", "https", None)
# Almost all of these patterns were derived from the
# 'rfc3986' module: https://github.com/python-hyper/rfc3986
_PERCENT_RE = re.compile(r"%[a-fA-F0-9]{2}")
_SCHEME_RE = re.compile(r"^(?:[a-zA-Z][a-zA-Z0-9+-]*:|/)")
_URI_RE = re.compile(
r"^(?:([a-zA-Z][a-zA-Z0-9+.-]*):)?" r"(?://([^\\/?#]*))?" r"([^?#]*)" r"(?:\?([^#]*))?" r"(?:#(.*))?$",
re.UNICODE | re.DOTALL,
)
_IPV4_PAT = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}"
_HEX_PAT = "[0-9A-Fa-f]{1,4}"
_LS32_PAT = "(?:{hex}:{hex}|{ipv4})".format(hex=_HEX_PAT, ipv4=_IPV4_PAT)
_subs = {"hex": _HEX_PAT, "ls32": _LS32_PAT}
_variations = [
# 6( h16 ":" ) ls32
"(?:%(hex)s:){6}%(ls32)s",
# "::" 5( h16 ":" ) ls32
"::(?:%(hex)s:){5}%(ls32)s",
# [ h16 ] "::" 4( h16 ":" ) ls32
"(?:%(hex)s)?::(?:%(hex)s:){4}%(ls32)s",
# [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32
"(?:(?:%(hex)s:)?%(hex)s)?::(?:%(hex)s:){3}%(ls32)s",
# [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32
"(?:(?:%(hex)s:){0,2}%(hex)s)?::(?:%(hex)s:){2}%(ls32)s",
# [ *3( h16 ":" ) h16 ] "::" h16 ":" ls32
"(?:(?:%(hex)s:){0,3}%(hex)s)?::%(hex)s:%(ls32)s",
# [ *4( h16 ":" ) h16 ] "::" ls32
"(?:(?:%(hex)s:){0,4}%(hex)s)?::%(ls32)s",
# [ *5( h16 ":" ) h16 ] "::" h16
"(?:(?:%(hex)s:){0,5}%(hex)s)?::%(hex)s",
# [ *6( h16 ":" ) h16 ] "::"
"(?:(?:%(hex)s:){0,6}%(hex)s)?::",
]
_UNRESERVED_PAT = r"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._\-~"
_IPV6_PAT = "(?:" + "|".join([x % _subs for x in _variations]) + ")"
_ZONE_ID_PAT = "(?:%25|%)(?:[" + _UNRESERVED_PAT + "]|%[a-fA-F0-9]{2})+"
_IPV6_ADDRZ_PAT = r"\[" + _IPV6_PAT + r"(?:" + _ZONE_ID_PAT + r")?\]"
_REG_NAME_PAT = r"(?:[^\[\]%:/?#]|%[a-fA-F0-9]{2})*"
_TARGET_RE = re.compile(r"^(/[^?#]*)(?:\?([^#]*))?(?:#.*)?$")
_IPV4_RE = re.compile("^" + _IPV4_PAT + "$")
_IPV6_RE = re.compile("^" + _IPV6_PAT + "$")
_IPV6_ADDRZ_RE = re.compile("^" + _IPV6_ADDRZ_PAT + "$")
_BRACELESS_IPV6_ADDRZ_RE = re.compile("^" + _IPV6_ADDRZ_PAT[2:-2] + "$")
_ZONE_ID_RE = re.compile("(" + _ZONE_ID_PAT + r")\]$")
_HOST_PORT_PAT = ("^(%s|%s|%s)(?::0*?(|0|[1-9][0-9]{0,4}))?$") % (
_REG_NAME_PAT,
_IPV4_PAT,
_IPV6_ADDRZ_PAT,
)
_HOST_PORT_RE = re.compile(_HOST_PORT_PAT, re.UNICODE | re.DOTALL)
_UNRESERVED_CHARS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._-~")
_SUB_DELIM_CHARS = set("!$&'()*+,;=")
_USERINFO_CHARS = _UNRESERVED_CHARS | _SUB_DELIM_CHARS | {":"}
_PATH_CHARS = _USERINFO_CHARS | {"@", "/"}
_QUERY_CHARS = _FRAGMENT_CHARS = _PATH_CHARS | {"?"}
class Url(
typing.NamedTuple(
"Url",
[
("scheme", typing.Optional[str]),
("auth", typing.Optional[str]),
("host", typing.Optional[str]),
("port", typing.Optional[int]),
("path", typing.Optional[str]),
("query", typing.Optional[str]),
("fragment", typing.Optional[str]),
],
)
):
"""
Data structure for representing an HTTP URL. Used as a return value for
:func:`parse_url`. Both the scheme and host are normalized as they are
both case-insensitive according to RFC 3986.
"""
def __new__( # type: ignore[no-untyped-def]
cls,
scheme: str | None = None,
auth: str | None = None,
host: str | None = None,
port: int | None = None,
path: str | None = None,
query: str | None = None,
fragment: str | None = None,
):
if path and not path.startswith("/"):
path = "/" + path
if scheme is not None:
scheme = scheme.lower()
return super().__new__(cls, scheme, auth, host, port, path, query, fragment)
@property
def hostname(self) -> str | None:
"""For backwards-compatibility with urlparse. We're nice like that."""
return self.host
@property
def request_uri(self) -> str:
"""Absolute path including the query string."""
uri = self.path or "/"
if self.query is not None:
uri += "?" + self.query
return uri
@property
def authority(self) -> str | None:
"""
Authority component as defined in RFC 3986 3.2.
This includes userinfo (auth), host and port.
i.e.
userinfo@host:port
"""
userinfo = self.auth
netloc = self.netloc
if netloc is None or userinfo is None:
return netloc
else:
return f"{userinfo}@{netloc}"
@property
def netloc(self) -> str | None:
"""
Network location including host and port.
If you need the equivalent of urllib.parse's ``netloc``,
use the ``authority`` property instead.
"""
if self.host is None:
return None
if self.port:
return f"{self.host}:{self.port}"
return self.host
@property
def url(self) -> str:
"""
Convert self into a url
This function should more or less round-trip with :func:`.parse_url`. The
returned url may not be exactly the same as the url inputted to
:func:`.parse_url`, but it should be equivalent by the RFC (e.g., urls
with a blank port will have : removed).
Example:
.. code-block:: python
import urllib3
U = urllib3.util.parse_url("https://google.com/mail/")
print(U.url)
# "https://google.com/mail/"
print( urllib3.util.Url("https", "username:password",
"host.com", 80, "/path", "query", "fragment"
).url
)
# "https://username:password@host.com:80/path?query#fragment"
"""
scheme, auth, host, port, path, query, fragment = self
url = ""
# We use "is not None" we want things to happen with empty strings (or 0 port)
if scheme is not None:
url += scheme + "://"
if auth is not None:
url += auth + "@"
if host is not None:
url += host
if port is not None:
url += ":" + str(port)
if path is not None:
url += path
if query is not None:
url += "?" + query
if fragment is not None:
url += "#" + fragment
return url
def __str__(self) -> str:
return self.url
@typing.overload
def _encode_invalid_chars(component: str, allowed_chars: typing.Container[str]) -> str: # Abstract
...
@typing.overload
def _encode_invalid_chars(component: None, allowed_chars: typing.Container[str]) -> None: # Abstract
...
def _encode_invalid_chars(component: str | None, allowed_chars: typing.Container[str]) -> str | None:
"""Percent-encodes a URI component without reapplying
onto an already percent-encoded component.
"""
if component is None:
return component
component = to_str(component)
# Normalize existing percent-encoded bytes.
# Try to see if the component we're encoding is already percent-encoded
# so we can skip all '%' characters but still encode all others.
component, percent_encodings = _PERCENT_RE.subn(lambda match: match.group(0).upper(), component)
uri_bytes = component.encode("utf-8", "surrogatepass")
is_percent_encoded = percent_encodings == uri_bytes.count(b"%")
encoded_component = bytearray()
for i in range(0, len(uri_bytes)):
# Will return a single character bytestring
byte = uri_bytes[i : i + 1]
byte_ord = ord(byte)
if (is_percent_encoded and byte == b"%") or (byte_ord < 128 and byte.decode() in allowed_chars):
encoded_component += byte
continue
encoded_component.extend(b"%" + (hex(byte_ord)[2:].encode().zfill(2).upper()))
return encoded_component.decode()
def _remove_path_dot_segments(path: str) -> str:
# See http://tools.ietf.org/html/rfc3986#section-5.2.4 for pseudo-code
segments = path.split("/") # Turn the path into a list of segments
output = [] # Initialize the variable to use to store output
for segment in segments:
# '.' is the current directory, so ignore it, it is superfluous
if segment == ".":
continue
# Anything other than '..', should be appended to the output
if segment != "..":
output.append(segment)
# In this case segment == '..', if we can, we should pop the last
# element
elif output:
output.pop()
# If the path starts with '/' and the output is empty or the first string
# is non-empty
if path.startswith("/") and (not output or output[0]):
output.insert(0, "")
# If the path starts with '/.' or '/..' ensure we add one more empty
# string to add a trailing '/'
if path.endswith(("/.", "/..")):
output.append("")
return "/".join(output)
@typing.overload
def _normalize_host(host: None, scheme: str | None) -> None: ...
@typing.overload
def _normalize_host(host: str, scheme: str | None) -> str: ...
def _normalize_host(host: str | None, scheme: str | None) -> str | None:
if host:
if scheme in _NORMALIZABLE_SCHEMES:
is_ipv6 = _IPV6_ADDRZ_RE.match(host)
if is_ipv6:
# IPv6 hosts of the form 'a::b%zone' are encoded in a URL as
# such per RFC 6874: 'a::b%25zone'. Unquote the ZoneID
# separator as necessary to return a valid RFC 4007 scoped IP.
match = _ZONE_ID_RE.search(host)
if match:
start, end = match.span(1)
zone_id = host[start:end]
if zone_id.startswith("%25") and zone_id != "%25":
zone_id = zone_id[3:]
else:
zone_id = zone_id[1:]
zone_id = _encode_invalid_chars(zone_id, _UNRESERVED_CHARS)
return f"{host[:start].lower()}%{zone_id}{host[end:]}"
else:
return host.lower()
elif not _IPV4_RE.match(host):
return to_str(
b".".join([_idna_encode(label) for label in host.split(".")]),
"ascii",
)
return host
def _idna_encode(name: str) -> bytes:
if not name.isascii():
try:
import idna
except ImportError:
raise LocationParseError("Unable to parse URL without the 'idna' module") from None
try:
return idna.encode(name.lower(), strict=True, std3_rules=True)
except idna.IDNAError:
raise LocationParseError(f"Name '{name}' is not a valid IDNA label") from None
return name.lower().encode("ascii")
def _encode_target(target: str) -> str:
"""Percent-encodes a request target so that there are no invalid characters
Pre-condition for this function is that 'target' must start with '/'.
If that is the case then _TARGET_RE will always produce a match.
"""
match = _TARGET_RE.match(target)
if not match: # Defensive:
raise LocationParseError(f"{target!r} is not a valid request URI")
path, query = match.groups()
encoded_target = _encode_invalid_chars(path, _PATH_CHARS)
if query is not None:
query = _encode_invalid_chars(query, _QUERY_CHARS)
encoded_target += "?" + query
return encoded_target
def parse_url(url: str) -> Url:
"""
Given a url, return a parsed :class:`.Url` namedtuple. Best-effort is
performed to parse incomplete urls. Fields not provided will be None.
This parser is RFC 3986 and RFC 6874 compliant.
The parser logic and helper functions are based heavily on
work done in the ``rfc3986`` module.
:param str url: URL to parse into a :class:`.Url` namedtuple.
Partly backwards-compatible with :mod:`urllib.parse`.
Example:
.. code-block:: python
import urllib3
print( urllib3.util.parse_url('http://google.com/mail/'))
# Url(scheme='http', host='google.com', port=None, path='/mail/', ...)
print( urllib3.util.parse_url('google.com:80'))
# Url(scheme=None, host='google.com', port=80, path=None, ...)
print( urllib3.util.parse_url('/foo?bar'))
# Url(scheme=None, host=None, port=None, path='/foo', query='bar', ...)
"""
if not url:
# Empty
return Url()
source_url = url
if not _SCHEME_RE.search(url):
url = "//" + url
scheme: str | None
authority: str | None
auth: str | None
host: str | None
port: str | None
port_int: int | None
path: str | None
query: str | None
fragment: str | None
try:
scheme, authority, path, query, fragment = _URI_RE.match(url).groups() # type: ignore[union-attr]
normalize_uri = scheme is None or scheme.lower() in _NORMALIZABLE_SCHEMES
if scheme:
scheme = scheme.lower()
if authority:
auth, _, host_port = authority.rpartition("@")
auth = auth or None
host, port = _HOST_PORT_RE.match(host_port).groups() # type: ignore[union-attr]
if auth and normalize_uri:
auth = _encode_invalid_chars(auth, _USERINFO_CHARS)
if port == "":
port = None
else:
auth, host, port = None, None, None
if port is not None:
port_int = int(port)
if not (0 <= port_int <= 65535):
raise LocationParseError(url)
else:
port_int = None
host = _normalize_host(host, scheme)
if normalize_uri and path:
path = _remove_path_dot_segments(path)
path = _encode_invalid_chars(path, _PATH_CHARS)
if normalize_uri and query:
query = _encode_invalid_chars(query, _QUERY_CHARS)
if normalize_uri and fragment:
fragment = _encode_invalid_chars(fragment, _FRAGMENT_CHARS)
except (ValueError, AttributeError) as e:
raise LocationParseError(source_url) from e
# For the sake of backwards compatibility we put empty
# string values for path if there are any defined values
# beyond the path in the URL.
# TODO: Remove this when we break backwards compatibility.
if not path:
if query is not None or fragment is not None:
path = ""
else:
path = None
return Url(
scheme=scheme,
auth=auth,
host=host,
port=port_int,
path=path,
query=query,
fragment=fragment,
)
__all__ = ("Url", "parse_url", "HTTPError", "LocationParseError", "LocationValueError")


@ -0,0 +1,13 @@
from __future__ import annotations
from comicapi.archivers.archiver import Archiver
from comicapi.archivers.folder import FolderArchiver
from comicapi.archivers.zip import ZipArchiver
class UnknownArchiver(Archiver):
def name(self) -> str:
return "Unknown"
__all__ = ["Archiver", "UnknownArchiver", "FolderArchiver", "ZipArchiver"]


@ -0,0 +1,137 @@
from __future__ import annotations
import pathlib
from typing import Protocol, runtime_checkable
@runtime_checkable
class Archiver(Protocol):
"""Archiver Protocol"""
"""The path to the archive"""
path: pathlib.Path
"""
The name of the executable used for this archiver. This should be the base name of the executable.
For example if 'rar.exe' is needed this should be "rar".
If an executable is not used this should be the empty string.
"""
exe: str = ""
"""
Whether or not this archiver is enabled.
If external imports are required and are not available this should be false. See rar.py and sevenzip.py.
"""
enabled: bool = True
def __init__(self) -> None:
self.path = pathlib.Path()
def get_comment(self) -> str:
"""
Returns the comment from the current archive as a string.
Should always return a string. If comments are not supported in the archive the empty string should be returned.
"""
return ""
def set_comment(self, comment: str) -> bool:
"""
Returns True if the comment was successfully set on the current archive.
Should always return a boolean. If comments are not supported in the archive False should be returned.
"""
return False
def supports_comment(self) -> bool:
"""
Returns True if the current archive supports comments.
Should always return a boolean. If comments are not supported in the archive False should be returned.
"""
return False
def read_file(self, archive_file: str) -> bytes:
"""
Reads the named file from the current archive.
archive_file should always come from the output of get_filename_list.
Should always return a bytes object. Exceptions should be of the type OSError.
"""
raise NotImplementedError
def remove_file(self, archive_file: str) -> bool:
"""
Removes the named file from the current archive.
archive_file should always come from the output of get_filename_list.
Should always return a boolean. Failures should return False.
Rebuilding the archive without the named file is a standard way to remove a file.
"""
return False
def write_file(self, archive_file: str, data: bytes) -> bool:
"""
Writes the named file to the current archive.
Should always return a boolean. Failures should return False.
"""
return False
def get_filename_list(self) -> list[str]:
"""
Returns a list of filenames in the current archive.
Should always return a list of strings. Failures should return an empty list.
"""
return []
def supports_files(self) -> bool:
"""
Returns True if the current archive supports arbitrary non-picture files.
Should always return a boolean.
If arbitrary non-picture files are not supported in the archive False should be returned.
"""
return False
def copy_from_archive(self, other_archive: Archiver) -> bool:
"""
Copies the contents of another archive to the current archive.
Should always return a boolean. Failures should return False.
"""
return False
def is_writable(self) -> bool:
"""
Returns True if the current archive is writable.
Should always return a boolean. Failures should return False.
"""
return False
def extension(self) -> str:
"""
Returns the extension that this archiver should use, e.g. ".cbz".
Should always return a string. Failures should return the empty string.
"""
return ""
def name(self) -> str:
"""
Returns the name of this archiver for display purposes, e.g. "CBZ".
Should always return a string. Failures should return the empty string.
"""
return ""
@classmethod
def is_valid(cls, path: pathlib.Path) -> bool:
"""
Returns True if the given path can be opened by this archiver.
Should always return a boolean. Failures should return False.
"""
return False
@classmethod
def open(cls, path: pathlib.Path) -> Archiver:
"""
Opens the given archive.
Should always return an Archiver.
Should never raise an exception, and no file operations should take place in this method;
is_valid will always be called before open.
"""
archiver = cls()
archiver.path = path
return archiver
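
As a sketch of what this Protocol expects from a plugin, here is a hypothetical read-only archiver (illustrative only, not part of comicapi):

    from __future__ import annotations

    import pathlib

    from comicapi.archivers import Archiver


    class TextFileArchiver(Archiver):  # hypothetical example
        """Treats a single .txt file as a one-entry, read-only 'archive'."""

        enabled = True

        def __init__(self) -> None:
            super().__init__()

        def get_filename_list(self) -> list[str]:
            return [self.path.name]

        def read_file(self, archive_file: str) -> bytes:
            if archive_file != self.path.name:
                raise OSError(f"no such file: {archive_file}")
            return self.path.read_bytes()

        def extension(self) -> str:
            return ".txt"

        def name(self) -> str:
            return "Text"

        @classmethod
        def is_valid(cls, path: pathlib.Path) -> bool:
            return path.suffix.casefold() == ".txt"

Registering such a class under the "comicapi.archiver" entry-point group (or passing it as a local plugin) is what makes it visible to the plugin loading in comicarchive.py further down.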


@ -0,0 +1,104 @@
from __future__ import annotations
import logging
import os
import pathlib
from comicapi.archivers import Archiver
logger = logging.getLogger(__name__)
class FolderArchiver(Archiver):
"""Folder implementation"""
def __init__(self) -> None:
super().__init__()
self.comment_file_name = "ComicTaggerFolderComment.txt"
def get_comment(self) -> str:
try:
return (self.path / self.comment_file_name).read_text()
except OSError:
return ""
def set_comment(self, comment: str) -> bool:
if (self.path / self.comment_file_name).exists() or comment:
return self.write_file(self.comment_file_name, comment.encode("utf-8"))
return True
def supports_comment(self) -> bool:
return True
def read_file(self, archive_file: str) -> bytes:
try:
data = (self.path / archive_file).read_bytes()
except OSError as e:
logger.error("Error reading folder archive [%s]: %s :: %s", e, self.path, archive_file)
raise
return data
def remove_file(self, archive_file: str) -> bool:
try:
(self.path / archive_file).unlink(missing_ok=True)
except OSError as e:
logger.error("Error removing file for folder archive [%s]: %s :: %s", e, self.path, archive_file)
return False
else:
return True
def write_file(self, archive_file: str, data: bytes) -> bool:
try:
file_path = self.path / archive_file
file_path.parent.mkdir(exist_ok=True, parents=True)
with open(self.path / archive_file, mode="wb") as f:
f.write(data)
except OSError as e:
logger.error("Error writing folder archive [%s]: %s :: %s", e, self.path, archive_file)
return False
else:
return True
def get_filename_list(self) -> list[str]:
filenames = []
try:
for root, _dirs, files in os.walk(self.path):
for f in files:
filenames.append(os.path.relpath(os.path.join(root, f), self.path).replace(os.path.sep, "/"))
return filenames
except OSError as e:
logger.error("Error listing files in folder archive [%s]: %s", e, self.path)
return []
def supports_files(self) -> bool:
return True
def copy_from_archive(self, other_archive: Archiver) -> bool:
"""Replace the current zip with one copied from another archive"""
try:
for filename in other_archive.get_filename_list():
data = other_archive.read_file(filename)
if data is not None:
self.write_file(filename, data)
# preserve the old comment
comment = other_archive.get_comment()
if comment is not None:
if not self.set_comment(comment):
return False
except Exception:
logger.exception("Error while copying archive from %s to %s", other_archive.path, self.path)
return False
else:
return True
def is_writable(self) -> bool:
return True
def name(self) -> str:
return "Folder"
@classmethod
def is_valid(cls, path: pathlib.Path) -> bool:
return path.is_dir()
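
A short usage sketch for the folder archiver (the directory below is hypothetical):

    import pathlib

    from comicapi.archivers.folder import FolderArchiver

    comic_dir = pathlib.Path("/tmp/unpacked_comic")  # hypothetical folder of page images
    if FolderArchiver.is_valid(comic_dir):
        archive = FolderArchiver.open(comic_dir)
        for name in archive.get_filename_list():
            print(name, len(archive.read_file(name)), "bytes")
        # the comment is persisted as ComicTaggerFolderComment.txt inside the folder
        archive.set_comment("tagged with ComicTagger")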

312
comicapi/archivers/rar.py Normal file

@ -0,0 +1,312 @@
from __future__ import annotations
import logging
import os
import pathlib
import platform
import shutil
import subprocess
import tempfile
import time
from comicapi.archivers import Archiver
try:
import rarfile
rar_support = True
except ImportError:
rar_support = False
logger = logging.getLogger(__name__)
if not rar_support:
logger.error("rar unavailable")
class RarArchiver(Archiver):
"""RAR implementation"""
enabled = rar_support
exe = "rar"
def __init__(self) -> None:
super().__init__()
# windows only, keeps the cmd.exe from popping up
if platform.system() == "Windows":
self.startupinfo = subprocess.STARTUPINFO() # type: ignore
self.startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW # type: ignore
else:
self.startupinfo = None
def get_comment(self) -> str:
rarc = self.get_rar_obj()
return (rarc.comment if rarc else "") or ""
def set_comment(self, comment: str) -> bool:
if rar_support and self.exe:
try:
# write comment to temp file
with tempfile.TemporaryDirectory() as tmp_dir:
tmp_file = pathlib.Path(tmp_dir) / "rar_comment.txt"
tmp_file.write_text(comment, encoding="utf-8")
working_dir = os.path.dirname(os.path.abspath(self.path))
# use external program to write comment to Rar archive
proc_args = [
self.exe,
"c",
f"-w{working_dir}",
"-c-",
f"-z{tmp_file}",
str(self.path),
]
result = subprocess.run(
proc_args,
startupinfo=self.startupinfo,
stdin=subprocess.DEVNULL,
capture_output=True,
encoding="utf-8",
cwd=tmp_dir,
)
if result.returncode != 0:
logger.error(
"Error writing comment to rar archive [exitcode: %d]: %s :: %s",
result.returncode,
self.path,
result.stderr,
)
return False
if platform.system() == "Darwin":
time.sleep(1)
except OSError as e:
logger.exception("Error writing comment to rar archive [%s]: %s", e, self.path)
return False
else:
return True
else:
return False
def supports_comment(self) -> bool:
return True
def read_file(self, archive_file: str) -> bytes:
rarc = self.get_rar_obj()
if rarc is None:
return b""
tries = 0
while tries < 7:
try:
tries = tries + 1
data: bytes = rarc.open(archive_file).read()
entries = [(rarc.getinfo(archive_file), data)]
if entries[0][0].file_size != len(entries[0][1]):
logger.info(
"Error reading rar archive [file is not expected size: %d vs %d] %s :: %s :: tries #%d",
entries[0][0].file_size,
len(entries[0][1]),
self.path,
archive_file,
tries,
)
continue
except OSError as e:
logger.error("Error reading rar archive [%s]: %s :: %s :: tries #%d", e, self.path, archive_file, tries)
time.sleep(1)
except Exception as e:
logger.error(
"Unexpected exception reading rar archive [%s]: %s :: %s :: tries #%d",
e,
self.path,
archive_file,
tries,
)
break
else:
# Success. Entries is a list of tuples: (rarinfo, filedata)
if len(entries) == 1:
return entries[0][1]
raise OSError
raise OSError
def remove_file(self, archive_file: str) -> bool:
if self.exe:
# use external program to remove file from Rar archive
result = subprocess.run(
[self.exe, "d", "-c-", self.path, archive_file],
startupinfo=self.startupinfo,
stdin=subprocess.DEVNULL,
capture_output=True,
encoding="utf-8",
cwd=self.path.absolute().parent,
)
if platform.system() == "Darwin":
time.sleep(1)
if result.returncode != 0:
logger.error(
"Error removing file from rar archive [exitcode: %d]: %s :: %s",
result.returncode,
self.path,
archive_file,
)
return False
return True
else:
return False
def write_file(self, archive_file: str, data: bytes) -> bool:
if self.exe:
archive_path = pathlib.PurePosixPath(archive_file)
archive_name = archive_path.name
archive_parent = str(archive_path.parent).lstrip("./")
# use external program to write file to Rar archive
result = subprocess.run(
[self.exe, "a", f"-si{archive_name}", f"-ap{archive_parent}", "-c-", "-ep", self.path],
input=data,
startupinfo=self.startupinfo,
capture_output=True,
cwd=self.path.absolute().parent,
)
if platform.system() == "Darwin":
time.sleep(1)
if result.returncode != 0:
logger.error(
"Error writing rar archive [exitcode: %d]: %s :: %s :: %s",
result.returncode,
self.path,
archive_file,
result.stderr,
)
return False
else:
return True
else:
return False
def get_filename_list(self) -> list[str]:
rarc = self.get_rar_obj()
tries = 0
if rar_support and rarc:
while tries < 7:
try:
tries = tries + 1
namelist = []
for item in rarc.infolist():
if item.file_size != 0:
namelist.append(item.filename)
except OSError as e:
logger.error("Error listing files in rar archive [%s]: %s :: attempt #%d", e, self.path, tries)
time.sleep(1)
else:
return namelist
return []
def supports_files(self) -> bool:
return True
def copy_from_archive(self, other_archive: Archiver) -> bool:
"""Replace the current archive with one copied from another archive"""
try:
with tempfile.TemporaryDirectory() as tmp_dir:
tmp_path = pathlib.Path(tmp_dir)
rar_cwd = tmp_path / "rar"
rar_cwd.mkdir(exist_ok=True)
rar_path = (tmp_path / self.path.name).with_suffix(".rar")
for filename in other_archive.get_filename_list():
(rar_cwd / filename).parent.mkdir(exist_ok=True, parents=True)
data = other_archive.read_file(filename)
if data is not None:
with open(rar_cwd / filename, mode="w+b") as tmp_file:
tmp_file.write(data)
result = subprocess.run(
[self.exe, "a", "-r", "-c-", str(rar_path.absolute()), "."],
cwd=rar_cwd.absolute(),
startupinfo=self.startupinfo,
stdin=subprocess.DEVNULL,
capture_output=True,
encoding="utf-8",
)
if result.returncode != 0:
logger.error(
"Error while copying to rar archive [exitcode: %d]: %s: %s",
result.returncode,
self.path,
result.stderr,
)
return False
self.path.unlink(missing_ok=True)
shutil.move(rar_path, self.path)
except Exception as e:
logger.exception("Error while copying to rar archive [%s]: from %s to %s", e, other_archive.path, self.path)
return False
else:
return True
def is_writable(self) -> bool:
try:
if bool(self.exe and (os.path.exists(self.exe) or shutil.which(self.exe))):
return (
subprocess.run(
(self.exe,),
startupinfo=self.startupinfo,
capture_output=True,
cwd=self.path.absolute().parent,
)
.stdout.strip()
.startswith(b"RAR")
)
except OSError:
...
return False
def extension(self) -> str:
return ".cbr"
def name(self) -> str:
return "RAR"
@classmethod
def is_valid(cls, path: pathlib.Path) -> bool:
if rar_support:
# Try using exe
orig = rarfile.UNRAR_TOOL
rarfile.UNRAR_TOOL = cls.exe
try:
return rarfile.is_rarfile(str(path)) and rarfile.tool_setup(sevenzip=False, sevenzip2=False, force=True)
except rarfile.RarCannotExec:
rarfile.UNRAR_TOOL = orig
# Fallback to standard
try:
return rarfile.is_rarfile(str(path)) and rarfile.tool_setup(force=True)
except rarfile.RarCannotExec as e:
logger.info(e)
return False
def get_rar_obj(self) -> rarfile.RarFile | None:
if rar_support:
try:
rarc = rarfile.RarFile(str(self.path))
except (OSError, rarfile.RarFileError) as e:
logger.error("Unable to get rar object [%s]: %s", e, self.path)
else:
return rarc
return None
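
A brief sketch of the read/write split for RAR: reading only needs the rarfile module plus an unrar backend, while writing shells out to the external "rar" executable (the path below is hypothetical):

    import pathlib

    from comicapi.archivers.rar import RarArchiver, rar_support

    cbr = pathlib.Path("/tmp/example.cbr")  # hypothetical file
    if rar_support and RarArchiver.is_valid(cbr):
        archive = RarArchiver.open(cbr)
        print(archive.get_filename_list())
        # is_writable() probes the "rar" binary; without it the archive stays read-only
        print("writable:", archive.is_writable())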


@ -0,0 +1,134 @@
from __future__ import annotations
import logging
import os
import pathlib
import shutil
import tempfile
from comicapi.archivers import Archiver
try:
import py7zr
z7_support = True
except ImportError:
z7_support = False
logger = logging.getLogger(__name__)
class SevenZipArchiver(Archiver):
"""7Z implementation"""
enabled = z7_support
def __init__(self) -> None:
super().__init__()
# @todo: Implement Comment?
def get_comment(self) -> str:
return ""
def set_comment(self, comment: str) -> bool:
return False
def read_file(self, archive_file: str) -> bytes:
data = b""
try:
with py7zr.SevenZipFile(self.path, "r") as zf:
data = zf.read([archive_file])[archive_file].read()
except (py7zr.Bad7zFile, OSError) as e:
logger.error("Error reading 7zip archive [%s]: %s :: %s", e, self.path, archive_file)
raise
return data
def remove_file(self, archive_file: str) -> bool:
return self.rebuild([archive_file])
def write_file(self, archive_file: str, data: bytes) -> bool:
# At the moment, no other option but to rebuild the whole
# archive w/o the indicated file. Very sucky, but maybe
# another solution can be found
files = self.get_filename_list()
if archive_file in files:
if not self.rebuild([archive_file]):
return False
try:
# now just add the archive file as a new one
with py7zr.SevenZipFile(self.path, "a") as zf:
zf.writestr(data, archive_file)
return True
except (py7zr.Bad7zFile, OSError) as e:
logger.error("Error writing 7zip archive [%s]: %s :: %s", e, self.path, archive_file)
return False
def get_filename_list(self) -> list[str]:
try:
with py7zr.SevenZipFile(self.path, "r") as zf:
namelist: list[str] = [file.filename for file in zf.list() if not file.is_directory]
return namelist
except (py7zr.Bad7zFile, OSError) as e:
logger.error("Error listing files in 7zip archive [%s]: %s", e, self.path)
return []
def supports_files(self) -> bool:
return True
def rebuild(self, exclude_list: list[str]) -> bool:
"""Zip helper func
This recompresses the zip archive, without the files in the exclude_list
"""
try:
# py7zr treats all archives as if they used solid compression
# so we need to get the filename list first to read all the files at once
with py7zr.SevenZipFile(self.path, mode="r") as zin:
targets = [f for f in zin.getnames() if f not in exclude_list]
with tempfile.NamedTemporaryFile(dir=os.path.dirname(self.path), delete=False) as tmp_file:
with py7zr.SevenZipFile(tmp_file.file, mode="w") as zout:
with py7zr.SevenZipFile(self.path, mode="r") as zin:
for filename, buffer in zin.read(targets).items():
zout.writef(buffer, filename)
self.path.unlink(missing_ok=True)
tmp_file.close() # Required on windows
shutil.move(tmp_file.name, self.path)
except (py7zr.Bad7zFile, OSError) as e:
logger.error("Error rebuilding 7zip file [%s]: %s", e, self.path)
return False
return True
def copy_from_archive(self, other_archive: Archiver) -> bool:
"""Replace the current zip with one copied from another archive"""
try:
with py7zr.SevenZipFile(self.path, "w") as zout:
for filename in other_archive.get_filename_list():
data = other_archive.read_file(
filename
) # This will be very inefficient if other_archive is a 7z file
if data is not None:
zout.writestr(data, filename)
except Exception as e:
logger.error("Error while copying to 7zip archive [%s]: from %s to %s", e, other_archive.path, self.path)
return False
else:
return True
def is_writable(self) -> bool:
return True
def extension(self) -> str:
return ".cb7"
def name(self) -> str:
return "Seven Zip"
@classmethod
def is_valid(cls, path: pathlib.Path) -> bool:
return py7zr.is_7zfile(path)

204
comicapi/archivers/zip.py Normal file

@ -0,0 +1,204 @@
from __future__ import annotations
import logging
import os
import pathlib
import shutil
import struct
import tempfile
import zipfile
from typing import cast
import chardet
from comicapi.archivers import Archiver
logger = logging.getLogger(__name__)
class ZipArchiver(Archiver):
"""ZIP implementation"""
def __init__(self) -> None:
super().__init__()
def supports_comment(self) -> bool:
return True
def get_comment(self) -> str:
with zipfile.ZipFile(self.path, "r") as zf:
encoding = chardet.detect(zf.comment, True)
if encoding["confidence"] > 60:
try:
comment = zf.comment.decode(encoding["encoding"])
except UnicodeDecodeError:
comment = zf.comment.decode("utf-8", errors="replace")
else:
comment = zf.comment.decode("utf-8", errors="replace")
return comment
def set_comment(self, comment: str) -> bool:
with zipfile.ZipFile(self.path, mode="a") as zf:
zf.comment = bytes(comment, "utf-8")
return True
def read_file(self, archive_file: str) -> bytes:
with zipfile.ZipFile(self.path, mode="r") as zf:
try:
data = zf.read(archive_file)
except (zipfile.BadZipfile, OSError) as e:
logger.error("Error reading zip archive [%s]: %s :: %s", e, self.path, archive_file)
raise
return data
def remove_file(self, archive_file: str) -> bool:
return self.rebuild([archive_file])
def write_file(self, archive_file: str, data: bytes) -> bool:
# At the moment, no other option but to rebuild the whole
# zip archive w/o the indicated file. Very sucky, but maybe
# another solution can be found
files = self.get_filename_list()
if archive_file in files:
if not self.rebuild([archive_file]):
return False
try:
# now just add the archive file as a new one
with zipfile.ZipFile(self.path, mode="a", allowZip64=True, compression=zipfile.ZIP_DEFLATED) as zf:
zf.writestr(archive_file, data)
return True
except (zipfile.BadZipfile, OSError) as e:
logger.error("Error writing zip archive [%s]: %s :: %s", e, self.path, archive_file)
return False
def get_filename_list(self) -> list[str]:
try:
with zipfile.ZipFile(self.path, mode="r") as zf:
namelist = [file.filename for file in zf.infolist() if not file.is_dir()]
return namelist
except (zipfile.BadZipfile, OSError) as e:
logger.error("Error listing files in zip archive [%s]: %s", e, self.path)
return []
def supports_files(self) -> bool:
return True
def rebuild(self, exclude_list: list[str]) -> bool:
"""Zip helper func
This recompresses the zip archive, without the files in the exclude_list
"""
try:
with zipfile.ZipFile(
tempfile.NamedTemporaryFile(dir=os.path.dirname(self.path), delete=False), "w", allowZip64=True
) as zout:
with zipfile.ZipFile(self.path, mode="r") as zin:
for item in zin.infolist():
buffer = zin.read(item.filename)
if item.filename not in exclude_list:
zout.writestr(item, buffer)
# preserve the old comment
zout.comment = zin.comment
# replace with the new file
self.path.unlink(missing_ok=True)
zout.close() # Required on windows
shutil.move(cast(str, zout.filename), self.path)
except (zipfile.BadZipfile, OSError) as e:
logger.error("Error rebuilding zip file [%s]: %s", e, self.path)
return False
return True
def copy_from_archive(self, other_archive: Archiver) -> bool:
"""Replace the current zip with one copied from another archive"""
try:
with zipfile.ZipFile(self.path, mode="w", allowZip64=True) as zout:
for filename in other_archive.get_filename_list():
data = other_archive.read_file(filename)
if data is not None:
zout.writestr(filename, data)
# preserve the old comment
comment = other_archive.get_comment()
if comment is not None:
if not self.write_zip_comment(self.path, comment):
return False
except Exception as e:
logger.error("Error while copying to zip archive [%s]: from %s to %s", e, other_archive.path, self.path)
return False
else:
return True
def is_writable(self) -> bool:
return True
def extension(self) -> str:
return ".cbz"
def name(self) -> str:
return "ZIP"
@classmethod
def is_valid(cls, path: pathlib.Path) -> bool:
return zipfile.is_zipfile(path)
def write_zip_comment(self, filename: pathlib.Path | str, comment: str) -> bool:
"""
This is a custom function for writing a comment to a zip file,
since the built-in one doesn't seem to work on Windows and Mac OS/X
Fortunately, the zip comment is at the end of the file, and it's
easy to manipulate. For more info, see:
http://en.wikipedia.org/wiki/Zip_(file_format)#Structure
"""
# get file size
statinfo = os.stat(filename)
file_length = statinfo.st_size
try:
with open(filename, mode="r+b") as file:
# the starting position, relative to EOF
pos = -4
found = False
# walk backwards to find the "End of Central Directory" record
while (not found) and (-pos != file_length):
# seek, relative to EOF
file.seek(pos, 2)
value = file.read(4)
# look for the end of central directory signature
if bytearray(value) == bytearray([0x50, 0x4B, 0x05, 0x06]):
found = True
else:
# not found, step back another byte
pos = pos - 1
if found:
# now skip forward 20 bytes to the comment length word
pos += 20
file.seek(pos, 2)
# Pack the length of the comment string
fmt = "H" # one 2-byte integer
comment_length = struct.pack(fmt, len(comment)) # pack integer in a binary string
# write out the length
file.write(comment_length)
file.seek(pos + 2, 2)
# write out the comment itself
file.write(comment.encode("utf-8"))
file.truncate()
else:
raise Exception("Could not find the End of Central Directory record!")
except Exception as e:
logger.error("Error writing comment to zip archive [%s]: %s", e, self.path)
return False
else:
return True
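
A quick round-trip sketch for the manual End of Central Directory handling in write_zip_comment (the tiny archive is created on the fly purely for illustration):

    import pathlib
    import zipfile

    from comicapi.archivers.zip import ZipArchiver

    cbz = pathlib.Path("/tmp/example.cbz")  # hypothetical path
    with zipfile.ZipFile(cbz, "w") as zf:
        zf.writestr("page01.jpg", b"not really an image")

    archive = ZipArchiver.open(cbz)
    archive.write_zip_comment(cbz, "written via the EOCD record")
    print(archive.get_comment())  # read back through get_comment above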

413
comicapi/comicarchive.py Normal file

@ -0,0 +1,413 @@
"""A class to represent a single comic, be it file or folder of images"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import importlib.util
import io
import itertools
import logging
import os
import pathlib
import shutil
import sys
import traceback
from collections.abc import Sequence
from typing import TYPE_CHECKING
from comicapi import utils
from comicapi.archivers import Archiver, UnknownArchiver, ZipArchiver
from comicapi.genericmetadata import GenericMetadata
from comicapi.metadata import Metadata
from comictaggerlib.ctversion import version
if TYPE_CHECKING:
from importlib.machinery import ModuleSpec
from importlib.metadata import EntryPoint
logger = logging.getLogger(__name__)
archivers: list[type[Archiver]] = []
metadata_styles: dict[str, Metadata] = {}
def load_archive_plugins(local_plugins: Sequence[EntryPoint] = tuple()) -> None:
if not archivers:
if sys.version_info < (3, 10):
from importlib_metadata import entry_points
else:
from importlib.metadata import entry_points
builtin: list[type[Archiver]] = []
# A list is used; the first matching plugin wins
for ep in itertools.chain(local_plugins, entry_points(group="comicapi.archiver")):
try:
archiver: type[Archiver] = ep.load()
if ep.module.startswith("comicapi"):
builtin.append(archiver)
else:
archivers.append(archiver)
except Exception:
try:
spec = importlib.util.find_spec(ep.module)
except ValueError:
spec = None
if spec and spec.has_location:
logger.exception("Failed to load archive plugin: %s from %s", ep.name, spec.origin)
else:
logger.exception("Failed to load archive plugin: %s", ep.name)
archivers.extend(builtin)
def load_metadata_plugins(version: str = f"ComicAPI/{version}", local_plugins: Sequence[EntryPoint] = tuple()) -> None:
if not metadata_styles:
if sys.version_info < (3, 10):
from importlib_metadata import entry_points
else:
from importlib.metadata import entry_points
builtin: dict[str, Metadata] = {}
styles: dict[str, tuple[Metadata, ModuleSpec | None]] = {}
# A dict is used; the last plugin wins
for ep in itertools.chain(entry_points(group="comicapi.metadata"), local_plugins):
try:
spec = importlib.util.find_spec(ep.module)
except ValueError:
spec = None
try:
style: type[Metadata] = ep.load()
if style.enabled:
if ep.module.startswith("comicapi"):
builtin[style.short_name] = style(version)
else:
if style.short_name in styles:
if spec and spec.has_location:
logger.warning(
"Plugin %s from %s is overriding the existing metadata plugin for %s tags",
ep.module,
spec.origin,
style.short_name,
)
else:
logger.warning(
"Plugin %s is overriding the existing metadata plugin for %s tags",
ep.module,
style.short_name,
)
styles[style.short_name] = (style(version), spec)
except Exception:
if spec and spec.has_location:
logger.exception("Failed to load metadata plugin: %s from %s", ep.name, spec.origin)
else:
logger.exception("Failed to load metadata plugin: %s", ep.name)
for style_name in set(builtin.keys()).intersection(styles):
spec = styles[style_name][1]
if spec and spec.has_location:
logger.warning(
"Builtin metadata for %s tags are being overridden by a plugin from %s", style_name, spec.origin
)
else:
logger.warning("Builtin metadata for %s tags are being overridden by a plugin", style_name)
metadata_styles.clear()
metadata_styles.update(builtin)
metadata_styles.update({s[0]: s[1][0] for s in styles.items()})
class ComicArchive:
logo_data = b""
pil_available = True
def __init__(
self, path: pathlib.Path | str | Archiver, default_image_path: pathlib.Path | str | None = None
) -> None:
self.md: dict[str, GenericMetadata] = {}
self.page_count: int | None = None
self.page_list: list[str] = []
self.reset_cache()
self.default_image_path = default_image_path
if isinstance(path, Archiver):
self.path = path.path
self.archiver: Archiver = path
else:
self.path = pathlib.Path(path).absolute()
self.archiver = UnknownArchiver.open(self.path)
load_archive_plugins()
load_metadata_plugins()
for archiver in archivers:
if archiver.enabled and archiver.is_valid(self.path):
self.archiver = archiver.open(self.path)
break
if not ComicArchive.logo_data and self.default_image_path:
with open(self.default_image_path, mode="rb") as fd:
ComicArchive.logo_data = fd.read()
def reset_cache(self) -> None:
"""Clears the cached data"""
self.page_count = None
self.page_list.clear()
self.md.clear()
def load_cache(self, style_list: list[str]) -> None:
for style in style_list:
if style in metadata_styles:
md = metadata_styles[style].get_metadata(self.archiver)
if not md.is_empty:
self.md[style] = md
def get_supported_metadata(self) -> list[str]:
return [style[0] for style in metadata_styles.items() if style[1].supports_metadata(self.archiver)]
def rename(self, path: pathlib.Path | str) -> None:
new_path = pathlib.Path(path).absolute()
if new_path == self.path:
return
os.makedirs(new_path.parent, 0o777, True)
shutil.move(self.path, new_path)
self.path = new_path
self.archiver.path = pathlib.Path(path)
def is_writable(self, check_archive_status: bool = True) -> bool:
if isinstance(self.archiver, UnknownArchiver):
return False
if check_archive_status and not self.archiver.is_writable():
return False
if not (os.access(self.path, os.W_OK) or os.access(self.path.parent, os.W_OK)):
return False
return True
def is_writable_for_style(self, style: str) -> bool:
if style in metadata_styles:
return self.archiver.is_writable() and metadata_styles[style].supports_metadata(self.archiver)
return False
def is_zip(self) -> bool:
return self.archiver.name() == "ZIP"
def seems_to_be_a_comic_archive(self) -> bool:
if not (isinstance(self.archiver, UnknownArchiver)) and self.get_number_of_pages() > 0:
return True
return False
def extension(self) -> str:
return self.archiver.extension()
def read_metadata(self, style: str) -> GenericMetadata:
if style in self.md:
return self.md[style]
md = GenericMetadata()
if metadata_styles[style].has_metadata(self.archiver):
md = metadata_styles[style].get_metadata(self.archiver)
md.apply_default_page_list(self.get_page_name_list())
return md
def read_metadata_string(self, style: str) -> str:
return metadata_styles[style].get_metadata_string(self.archiver)
def write_metadata(self, metadata: GenericMetadata, style: str) -> bool:
if style in self.md:
del self.md[style]
metadata.apply_default_page_list(self.get_page_name_list())
return metadata_styles[style].set_metadata(metadata, self.archiver)
def has_metadata(self, style: str) -> bool:
if style in self.md:
return True
return metadata_styles[style].has_metadata(self.archiver)
def remove_metadata(self, style: str) -> bool:
if style in self.md:
del self.md[style]
return metadata_styles[style].remove_metadata(self.archiver)
def get_page(self, index: int) -> bytes:
image_data = b""
filename = self.get_page_name(index)
if filename:
try:
image_data = self.archiver.read_file(filename) or b""
except Exception as e:
tb = traceback.extract_tb(e.__traceback__)
logger.error(
"%s:%s: Error reading in page %d. Substituting logo page.", tb[1].filename, tb[1].lineno, index
)
image_data = ComicArchive.logo_data
return image_data
def get_page_name(self, index: int) -> str:
if index is None:
return ""
page_list = self.get_page_name_list()
num_pages = len(page_list)
if num_pages == 0 or index >= num_pages:
return ""
return page_list[index]
def get_scanner_page_index(self) -> int | None:
scanner_page_index = None
# make a guess at the scanner page
name_list = self.get_page_name_list()
count = self.get_number_of_pages()
# too few pages to really know
if count < 5:
return None
# count the length of every filename, and count occurrences
length_buckets: dict[int, int] = {}
for name in name_list:
fname = os.path.split(name)[1]
length = len(fname)
if length in length_buckets:
length_buckets[length] += 1
else:
length_buckets[length] = 1
# sort by most common
sorted_buckets = sorted(length_buckets.items(), key=lambda tup: (tup[1], tup[0]), reverse=True)
# statistical mode occurrence is first
mode_length = sorted_buckets[0][0]
# we are only going to consider the final image file:
final_name = os.path.split(name_list[count - 1])[1]
common_length_list = []
for name in name_list:
if len(os.path.split(name)[1]) == mode_length:
common_length_list.append(os.path.split(name)[1])
prefix = os.path.commonprefix(common_length_list)
if mode_length <= 7 and prefix == "":
# probably all numbers
if len(final_name) > mode_length:
scanner_page_index = count - 1
# see if the last page doesn't start with the same prefix as most others
elif not final_name.startswith(prefix):
scanner_page_index = count - 1
return scanner_page_index
def get_page_name_list(self) -> list[str]:
if not self.page_list:
self.page_list = utils.get_page_name_list(self.archiver.get_filename_list())
return self.page_list
def get_number_of_pages(self) -> int:
if self.page_count is None:
self.page_count = len(self.get_page_name_list())
return self.page_count
def apply_archive_info_to_metadata(self, md: GenericMetadata, calc_page_sizes: bool = False) -> None:
md.page_count = self.get_number_of_pages()
if calc_page_sizes:
for index, p in enumerate(md.pages):
idx = int(p["image_index"])
p["filename"] = self.get_page_name(idx)
if self.pil_available:
try:
from PIL import Image
self.pil_available = True
except ImportError:
self.pil_available = False
if "size" not in p or "height" not in p or "width" not in p:
data = self.get_page(idx)
if data:
try:
if isinstance(data, bytes):
im = Image.open(io.BytesIO(data))
else:
im = Image.open(io.StringIO(data))
w, h = im.size
p["size"] = str(len(data))
p["height"] = str(h)
p["width"] = str(w)
except Exception as e:
logger.warning("Error decoding image [%s] %s :: image %s", e, self.path, index)
p["size"] = str(len(data))
else:
if "size" not in p:
data = self.get_page(idx)
p["size"] = str(len(data))
def metadata_from_filename(
self,
parser: utils.Parser = utils.Parser.ORIGINAL,
remove_c2c: bool = False,
remove_fcbd: bool = False,
remove_publisher: bool = False,
split_words: bool = False,
allow_issue_start_with_letter: bool = False,
protofolius_issue_number_scheme: bool = False,
) -> GenericMetadata:
metadata = GenericMetadata()
filename_info = utils.parse_filename(
self.path.name,
parser=parser,
remove_c2c=remove_c2c,
remove_fcbd=remove_fcbd,
remove_publisher=remove_publisher,
split_words=split_words,
allow_issue_start_with_letter=allow_issue_start_with_letter,
protofolius_issue_number_scheme=protofolius_issue_number_scheme,
)
metadata.alternate_number = utils.xlate(filename_info.get("alternate", None))
metadata.issue = utils.xlate(filename_info.get("issue", None))
metadata.issue_count = utils.xlate_int(filename_info.get("issue_count", None))
metadata.publisher = utils.xlate(filename_info.get("publisher", None))
metadata.series = utils.xlate(filename_info.get("series", None))
metadata.title = utils.xlate(filename_info.get("title", None))
metadata.volume = utils.xlate_int(filename_info.get("volume", None))
metadata.volume_count = utils.xlate_int(filename_info.get("volume_count", None))
metadata.year = utils.xlate_int(filename_info.get("year", None))
metadata.scan_info = utils.xlate(filename_info.get("remainder", None))
metadata.format = "FCBD" if filename_info.get("fcbd", None) else None
if filename_info.get("annual", None):
metadata.format = "Annual"
if filename_info.get("format", None):
metadata.format = filename_info["format"]
metadata.is_empty = False
return metadata
def export_as_zip(self, zip_filename: pathlib.Path) -> bool:
if self.archiver.name() == "ZIP":
# nothing to do, we're already a zip
return True
zip_archiver = ZipArchiver.open(zip_filename)
return zip_archiver.copy_from_archive(self.archiver)
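
A minimal end-to-end sketch of ComicArchive (the path is hypothetical, and the available style names depend on which metadata plugins are installed):

    import pathlib

    from comicapi.comicarchive import ComicArchive, metadata_styles

    ca = ComicArchive(pathlib.Path("/tmp/example.cbz"))  # hypothetical comic
    if ca.seems_to_be_a_comic_archive():
        print("pages:", ca.get_number_of_pages())
        print("supported styles:", ca.get_supported_metadata())
        md = ca.metadata_from_filename()  # start from whatever the filename yields
        for style in metadata_styles:
            if ca.has_metadata(style):
                md.overlay(ca.read_metadata(style))  # GenericMetadata.overlay is assumed here
        print(md.series, md.issue, md.year)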


@ -0,0 +1,5 @@
from __future__ import annotations
import importlib.resources
data_path = importlib.resources.files(__package__)


@ -0,0 +1,130 @@
{
"Marvel":{
"marvel comics": "",
"aircel comics": "Aircel Comics",
"aircel": "Aircel Comics",
"atlas comics": "Atlas Comics",
"atlas": "Atlas Comics",
"crossgen comics": "CrossGen comics",
"crossgen": "CrossGen comics",
"curtis magazines": "Curtis Magazines",
"disney books group": "Disney Books Group",
"disney books": "Disney Books Group",
"disney kingdoms": "Disney Kingdoms",
"epic comics group": "Epic Comics",
"epic comics": "Epic Comics",
"epic": "Epic Comics",
"eternity comics": "Eternity Comics",
"humorama": "Humorama",
"icon comics": "Icon Comics",
"infinite comics": "Infinite Comics",
"malibu comics": "Malibu Comics",
"malibu": "Malibu Comics",
"marvel 2099": "Marvel 2099",
"marvel absurd": "Marvel Absurd",
"marvel adventures": "Marvel Adventures",
"marvel age": "Marvel Age",
"marvel books": "Marvel Books",
"marvel comics 2": "Marvel Comics 2",
"marvel digital comics unlimited": "Marvel Unlimited",
"marvel edge": "Marvel Edge",
"marvel frontier": "Marvel Frontier",
"marvel illustrated": "Marvel Illustrated",
"marvel knights": "Marvel Knights",
"marvel magazine group": "Marvel Magazine Group",
"marvel mangaverse": "Marvel Mangaverse",
"marvel monsters group": "Marvel Monsters Group",
"marvel music": "Marvel Music",
"marvel next": "Marvel Next",
"marvel noir": "Marvel Noir",
"marvel press": "Marvel Press",
"marvel uk": "Marvel UK",
"marvel unlimited": "Marvel Unlimited",
"max": "MAX",
"mc2": "Marvel Comics 2",
"new universe": "New Universe",
"non-pareil publishing corp.": "Non-Pareil Publishing Corp.",
"paramount comics": "Paramount Comics",
"power comics": "Power Comics",
"razorline": "Razorline",
"star comics": "Star Comics",
"timely comics": "Timely Comics",
"timely": "Timely Comics",
"tsunami": "Tsunami",
"ultimate comics": "Ultimate Comics",
"ultimate marvel": "Ultimate Marvel",
"vital publications, inc.": "Vital Publications, Inc."
},
"DC Comics":{
"dc_comics": "",
"dc": "",
"dccomics": "",
"!mpact comics": "Impact Comics",
"all star dc": "All-Star",
"all star": "All-Star",
"all-star dc": "All-Star",
"all-star": "All-Star",
"america's best comics": "America's Best Comics",
"black label": "DC Black Label",
"cliffhanger": "Cliffhanger",
"cmx manga": "CMX Manga",
"dc black label": "DC Black Label",
"dc focus": "DC Focus",
"dc ink": "DC Ink",
"dc zoom": "DC Zoom",
"earth m": "Earth M",
"earth one": "Earth One",
"earth-m": "Earth M",
"elseworlds": "Elseworlds",
"eo": "Earth One",
"first wave": "First Wave",
"focus": "DC Focus",
"helix": "Helix",
"homage comics": "Homage Comics",
"impact comics": "Impact Comics",
"impact! comics": "Impact Comics",
"johnny dc": "Johnny DC",
"mad": "Mad",
"minx": "Minx",
"paradox press": "Paradox Press",
"piranha press": "Piranha Press",
"sandman universe": "Sandman Universe",
"tangent comics": "Tangent Comics",
"tsr": "TSR",
"vertigo": "Vertigo",
"wildstorm productions": "WildStorm Productions",
"wildstorm signature": "WildStorm Productions",
"wildstorm": "WildStorm Productions",
"wonder comics": "Wonder Comics",
"young animal": "Young Animal",
"zuda comics": "Zuda Comics",
"zuda": "Zuda Comics"
},
"Dark Horse Comics":{
"berger books": "Berger Books",
"comics' greatest world": "Dark Horse Heroes",
"dark horse digital": "Dark Horse Digital",
"dark horse heroes": "Dark Horse Heroes",
"dark horse manga": "Dark Horse Manga",
"dh deluxe": "DH Deluxe",
"dh press": "DH Press",
"kitchen sink books": "Kitchen Sink Books",
"legend": "Legend",
"m press": "M Press",
"maverick": "Maverick"
},
"Archie Comics":{
"archie action": "Archie Action",
"archie adventure Series": "Archie Adventure Series",
"archie horror": "Archie Horror",
"dark circle Comics": "Dark Circle Comics",
"dark circle": "Dark Circle Comics",
"mighty comics Group": "Mighty Comics Group",
"radio comics": "Mighty Comics Group",
"red circle Comics": "Dark Circle Comics",
"red circle": "Dark Circle Comics"
}
}
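
The table above maps lower-cased publisher/imprint aliases to a canonical imprint name, with an empty string meaning "no separate imprint; keep the parent publisher". A self-contained illustration of that lookup follows (this is not comicapi's actual implementation, and the JSON path is an assumption):

    import json
    import pathlib


    def resolve_imprint(name: str, data_file: pathlib.Path) -> tuple[str, str]:
        """Return (publisher, imprint) for an alias in the table; imprint may be ''."""
        table: dict[str, dict[str, str]] = json.loads(data_file.read_text("utf-8"))
        needle = name.casefold()
        for publisher, imprints in table.items():
            if needle in imprints:
                return publisher, imprints[needle]
        return name, ""


    # assumed location of the JSON shown above
    print(resolve_imprint("Vertigo", pathlib.Path("comicapi/data/publishers.json")))
    # -> ('DC Comics', 'Vertigo')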

420
comicapi/filenamelexer.py Normal file

@ -0,0 +1,420 @@
# Extracted and mutilated from https://github.com/lordwelch/wsfmt
# Which was extracted and mutilated from https://github.com/golang/go/tree/master/src/text/template/parse
from __future__ import annotations
import calendar
import os
import unicodedata
from enum import Enum, auto
from typing import Any, Callable, Protocol
class ItemType(Enum):
Error = auto() # Error occurred; value is text of error
EOF = auto()
Text = auto() # Text
LeftParen = auto()
Number = auto() # Simple number
IssueNumber = auto() # Preceded by a # Symbol
RightParen = auto()
Space = auto() # Run of spaces separating arguments
Dot = auto()
LeftBrace = auto()
RightBrace = auto()
LeftSBrace = auto()
RightSBrace = auto()
Symbol = auto()
Skip = auto() # __ or -- no title, issue or series information beyond
Operator = auto()
Calendar = auto()
InfoSpecifier = auto() # Specifies type of info e.g. v1 for 'volume': 1
ArchiveType = auto()
Honorific = auto()
Publisher = auto()
Keywords = auto()
FCBD = auto()
ComicType = auto()
C2C = auto()
braces = [
ItemType.LeftBrace,
ItemType.LeftParen,
ItemType.LeftSBrace,
ItemType.RightBrace,
ItemType.RightParen,
ItemType.RightSBrace,
]
eof = chr(0)
key = {
"fcbd": ItemType.FCBD,
"freecomicbookday": ItemType.FCBD,
"cbr": ItemType.ArchiveType,
"cbz": ItemType.ArchiveType,
"cbt": ItemType.ArchiveType,
"cb7": ItemType.ArchiveType,
"rar": ItemType.ArchiveType,
"zip": ItemType.ArchiveType,
"tar": ItemType.ArchiveType,
"7z": ItemType.ArchiveType,
"annual": ItemType.ComicType,
"volume": ItemType.InfoSpecifier,
"vol.": ItemType.InfoSpecifier,
"vol": ItemType.InfoSpecifier,
"v": ItemType.InfoSpecifier,
"of": ItemType.InfoSpecifier,
"dc": ItemType.Publisher,
"marvel": ItemType.Publisher,
"covers": ItemType.InfoSpecifier,
"c2c": ItemType.C2C,
"mr": ItemType.Honorific,
"ms": ItemType.Honorific,
"mrs": ItemType.Honorific,
"dr": ItemType.Honorific,
}
class Item:
def __init__(self, typ: ItemType, pos: int, val: str) -> None:
self.typ: ItemType = typ
self.pos: int = pos
self.val: str = val
self.no_space = False
def __repr__(self) -> str:
return f"{self.val}: index: {self.pos}: {self.typ}"
class LexerFunc(Protocol):
def __call__(self, __origin: Lexer) -> LexerFunc | None: ...
class Lexer:
def __init__(self, string: str, allow_issue_start_with_letter: bool = False) -> None:
self.input: str = string # The string being scanned
# The next lexing function to enter
self.state: LexerFunc | None = None
self.pos: int = -1 # Current position in the input
self.start: int = 0 # Start position of this item
self.lastPos: int = 0 # Position of most recent item returned by nextItem
self.paren_depth: int = 0 # Nesting depth of ( ) exprs
self.brace_depth: int = 0 # Nesting depth of { }
self.sbrace_depth: int = 0 # Nesting depth of [ ]
self.items: list[Item] = []
self.allow_issue_start_with_letter = allow_issue_start_with_letter
# Next returns the next rune in the input.
def get(self) -> str:
if int(self.pos) >= len(self.input) - 1:
self.pos += 1
return eof
self.pos += 1
return self.input[self.pos]
# Peek returns but does not consume the next rune in the input.
def peek(self) -> str:
if int(self.pos) >= len(self.input) - 1:
return eof
return self.input[self.pos + 1]
def backup(self) -> None:
self.pos -= 1
# Emit passes an item back to the client.
def emit(self, t: ItemType) -> None:
self.items.append(Item(t, self.start, self.input[self.start : self.pos + 1]))
self.start = self.pos + 1
# Ignore skips over the pending input before this point.
def ignore(self) -> None:
self.start = self.pos
# Accept consumes the next rune if it's from the valid set.
def accept(self, valid: str | Callable[[str], bool]) -> bool:
if isinstance(valid, str):
if self.get() in valid:
return True
else:
if valid(self.get()):
return True
self.backup()
return False
# AcceptRun consumes a run of runes from the valid set.
def accept_run(self, valid: str | Callable[[str], bool]) -> None:
if isinstance(valid, str):
while self.get() in valid:
continue
else:
while valid(self.get()):
continue
self.backup()
def scan_number(self) -> bool:
digits = "0123456789.,"
self.accept_run(digits)
if self.input[self.pos] == ".":
self.backup()
self.accept_run(str.isalpha)
return True
# Runs the state machine for the lexer.
def run(self) -> None:
self.state = lex_filename
while self.state is not None:
self.state = self.state(self)
# Errorf returns an error token and terminates the scan by returning
# None as the next state, which ends the lexer's run loop.
def errorf(lex: Lexer, message: str) -> Any:
lex.items.append(Item(ItemType.Error, lex.start, message))
return None
# Scans the elements of the filename.
def lex_filename(lex: Lexer) -> LexerFunc | None:
r = lex.get()
if r == eof:
if lex.paren_depth != 0:
errorf(lex, "unclosed left paren")
return None
if lex.brace_depth != 0:
errorf(lex, "unclosed left paren")
return None
lex.emit(ItemType.EOF)
return None
elif is_space(r):
if r == "_" and lex.peek() == "_":
lex.get()
lex.emit(ItemType.Skip)
else:
return lex_space
elif r == ".":
r = lex.peek()
if r.isnumeric() and lex.pos > 0 and is_space(lex.input[lex.pos - 1]):
return lex_number
lex.emit(ItemType.Dot)
return lex_filename
elif r == "'":
r = lex.peek()
if r.isdigit():
return lex_number
lex.accept_run(is_symbol)
lex.emit(ItemType.Symbol)
elif r.isnumeric():
lex.backup()
return lex_number
elif r == "#":
if lex.allow_issue_start_with_letter and is_alpha_numeric(lex.peek()):
return lex_issue_number
elif lex.peek().isdigit() or lex.peek() in "-+.":
return lex_issue_number
lex.emit(ItemType.Symbol)
elif is_operator(r):
if r == "-" and lex.peek() == "-":
lex.get()
lex.emit(ItemType.Skip)
else:
return lex_operator
elif is_alpha_numeric(r):
lex.backup()
return lex_text
elif r == "(":
lex.emit(ItemType.LeftParen)
lex.paren_depth += 1
elif r == ")":
lex.emit(ItemType.RightParen)
lex.paren_depth -= 1
if lex.paren_depth < 0:
errorf(lex, "unexpected right paren " + r)
return None
elif r == "{":
lex.emit(ItemType.LeftBrace)
lex.brace_depth += 1
elif r == "}":
lex.emit(ItemType.RightBrace)
lex.brace_depth -= 1
if lex.brace_depth < 0:
errorf(lex, "unexpected right brace " + r)
return None
elif r == "[":
lex.emit(ItemType.LeftSBrace)
lex.sbrace_depth += 1
elif r == "]":
lex.emit(ItemType.RightSBrace)
lex.sbrace_depth -= 1
if lex.sbrace_depth < 0:
errorf(lex, "unexpected right brace " + r)
return None
elif is_symbol(r):
if unicodedata.category(r) == "Sc":
return lex_currency
lex.accept_run(is_symbol)
lex.emit(ItemType.Symbol)
else:
errorf(lex, "unrecognized character in action: " + repr(r))
return None
return lex_filename
def lex_currency(lex: Lexer) -> LexerFunc:
orig = lex.pos
lex.accept_run(is_space)
if lex.peek().isnumeric():
return lex_number
else:
lex.pos = orig
# We don't have a number with this currency symbol. Don't treat it special
lex.emit(ItemType.Symbol)
return lex_filename
def lex_operator(lex: Lexer) -> LexerFunc:
lex.accept_run("-|:;")
lex.emit(ItemType.Operator)
return lex_filename
# LexSpace scans a run of space characters.
# One space has already been seen.
def lex_space(lex: Lexer) -> LexerFunc:
lex.accept_run(is_space)
lex.emit(ItemType.Space)
return lex_filename
# Lex_text scans an alphanumeric.
def lex_text(lex: Lexer) -> LexerFunc:
while True:
r = lex.get()
if is_alpha_numeric(r):
if r.isnumeric(): # E.g. v1
word = lex.input[lex.start : lex.pos]
if word.casefold() in key and key[word.casefold()] == ItemType.InfoSpecifier:
lex.backup()
lex.emit(key[word.casefold()])
return lex_filename
else:
if r == "'" and lex.peek() == "s":
lex.get()
else:
lex.backup()
word = lex.input[lex.start : lex.pos + 1]
if word.casefold() == "vol" and lex.peek() == ".":
lex.get()
word = lex.input[lex.start : lex.pos + 1]
if word.casefold() in key:
lex.emit(key[word.casefold()])
elif cal(word):
lex.emit(ItemType.Calendar)
else:
lex.emit(ItemType.Text)
break
return lex_filename
def cal(value: str) -> set[Any]:
month_abbr = [i for i, x in enumerate(calendar.month_abbr) if x == value.title()]
month_name = [i for i, x in enumerate(calendar.month_name) if x == value.title()]
day_abbr = [i for i, x in enumerate(calendar.day_abbr) if x == value.title()]
day_name = [i for i, x in enumerate(calendar.day_name) if x == value.title()]
return set(month_abbr + month_name + day_abbr + day_name)
def lex_number(lex: Lexer) -> LexerFunc | None:
if not lex.scan_number():
return errorf(lex, "bad number syntax: " + lex.input[lex.start : lex.pos])
# Complex number logic removed. Messes with math operations without space
if lex.input[lex.start] == "#":
lex.emit(ItemType.IssueNumber)
elif not lex.input[lex.pos].isdigit():
# Assume that 80th is just text and not a number
lex.emit(ItemType.Text)
else:
# Used to check for a '$'
endNumber = lex.pos
# Consume any spaces
lex.accept_run(is_space)
# This number starts with a '$' emit it as Text instead of a Number
if "Sc" == unicodedata.category(lex.input[lex.start]):
lex.pos = endNumber
lex.emit(ItemType.Text)
# This number is followed by a currency symbol ('$'); if a number appears on the other side we assume the symbol belongs to that following number
elif "Sc" == unicodedata.category(lex.get()):
# Store the end of the number '$'. We still need to check to see if there is a number coming up
endCurrency = lex.pos
# Consume any spaces
lex.accept_run(is_space)
# This is a number
if lex.peek().isnumeric():
# We go back to the original number before the '$' and emit a number
lex.pos = endNumber
lex.emit(ItemType.Number)
else:
# There was no following number; reset to the '$' and emit everything up to it as text
lex.pos = endCurrency
lex.emit(ItemType.Text)
else:
# We go back to the original number there is no '$'
lex.pos = endNumber
lex.emit(ItemType.Number)
return lex_filename
def lex_issue_number(lex: Lexer) -> Callable[[Lexer], Callable | None] | None: # type: ignore[type-arg]
# Only called when lex.input[lex.start] == "#"
original_start = lex.pos
lex.accept_run(str.isalpha)
if lex.peek().isnumeric():
return lex_number
else:
lex.pos = original_start
lex.emit(ItemType.Symbol)
return lex_filename
def is_space(character: str) -> bool:
return character in "_ \t"
# IsAlphaNumeric reports whether r is an alphabetic, digit, or underscore.
def is_alpha_numeric(character: str) -> bool:
return character.isalpha() or character.isnumeric()
def is_operator(character: str) -> bool:
return character in "-|:;/\\"
def is_symbol(character: str) -> bool:
return unicodedata.category(character)[0] in "PS"
def Lex(filename: str, allow_issue_start_with_letter: bool = False) -> Lexer:
lex = Lexer(os.path.basename(filename), allow_issue_start_with_letter)
lex.run()
return lex
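# --- Editor's sketch (not part of the original module) ---
# Lex() is the entry point used by the filename parser: it tokenizes the basename
# of a path into Item objects. The filename below is invented for illustration.
if __name__ == "__main__":
    demo = Lex("Sample Series v2 #001 (2019).cbz")
    for item in demo.items:
        # Each Item prints as "<value>: index: <start>: <ItemType>",
        # e.g. "Sample: index: 0: ItemType.Text".
        print(item)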

1273
comicapi/filenameparser.py Normal file

File diff suppressed because it is too large

581
comicapi/genericmetadata.py Normal file

@@ -0,0 +1,581 @@
"""A class for internal metadata storage
The goal of this class is to handle ALL the data that might come from various
tagging schemes and databases, such as ComicVine or GCD. This makes conversion
possible, however lossy it might be
"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import copy
import dataclasses
import logging
from collections.abc import Sequence
from enum import Enum, auto
from typing import TYPE_CHECKING, Any, TypedDict, Union
from typing_extensions import NamedTuple, Required
from comicapi import utils
from comicapi._url import Url, parse_url
if TYPE_CHECKING:
Union
logger = logging.getLogger(__name__)
class __remove(Enum):
REMOVE = auto()
REMOVE = __remove.REMOVE
class PageType:
"""
These page info classes are exactly the same as the CIX scheme, since
it's unique
"""
FrontCover = "FrontCover"
InnerCover = "InnerCover"
Roundup = "Roundup"
Story = "Story"
Advertisement = "Advertisement"
Editorial = "Editorial"
Letters = "Letters"
Preview = "Preview"
BackCover = "BackCover"
Other = "Other"
Deleted = "Deleted"
class ImageMetadata(TypedDict, total=False):
filename: str
type: str
bookmark: str
double_page: bool
image_index: Required[int]
size: str
height: str
width: str
class Credit(TypedDict):
person: str
role: str
primary: bool
@dataclasses.dataclass
class ComicSeries:
id: str
name: str
aliases: set[str]
count_of_issues: int | None
count_of_volumes: int | None
description: str
image_url: str
publisher: str
start_year: int | None
format: str | None
def copy(self) -> ComicSeries:
return copy.deepcopy(self)
class TagOrigin(NamedTuple):
id: str
name: str
@dataclasses.dataclass
class GenericMetadata:
writer_synonyms = ("writer", "plotter", "scripter", "script")
penciller_synonyms = ("artist", "penciller", "penciler", "breakdowns", "pencils", "painting")
inker_synonyms = ("inker", "artist", "finishes", "inks", "painting")
colorist_synonyms = ("colorist", "colourist", "colorer", "colourer", "colors", "painting")
letterer_synonyms = ("letterer", "letters")
cover_synonyms = ("cover", "covers", "coverartist", "cover artist")
editor_synonyms = ("editor", "edits", "editing")
translator_synonyms = ("translator", "translation")
is_empty: bool = True
tag_origin: TagOrigin | None = None
issue_id: str | None = None
series_id: str | None = None
series: str | None = None
series_aliases: set[str] = dataclasses.field(default_factory=set)
issue: str | None = None
issue_count: int | None = None
title: str | None = None
title_aliases: set[str] = dataclasses.field(default_factory=set)
volume: int | None = None
volume_count: int | None = None
genres: set[str] = dataclasses.field(default_factory=set)
description: str | None = None # use same way as Summary in CIX
notes: str | None = None
alternate_series: str | None = None
alternate_number: str | None = None
alternate_count: int | None = None
story_arcs: list[str] = dataclasses.field(default_factory=list)
series_groups: list[str] = dataclasses.field(default_factory=list)
publisher: str | None = None
imprint: str | None = None
day: int | None = None
month: int | None = None
year: int | None = None
language: str | None = None # 2 letter iso code
country: str | None = None
web_links: list[Url] = dataclasses.field(default_factory=list)
format: str | None = None
manga: str | None = None
black_and_white: bool | None = None
maturity_rating: str | None = None
critical_rating: float | None = None # rating in CBL; CommunityRating in CIX
scan_info: str | None = None
tags: set[str] = dataclasses.field(default_factory=set)
pages: list[ImageMetadata] = dataclasses.field(default_factory=list)
page_count: int | None = None
characters: set[str] = dataclasses.field(default_factory=set)
teams: set[str] = dataclasses.field(default_factory=set)
locations: set[str] = dataclasses.field(default_factory=set)
credits: list[Credit] = dataclasses.field(default_factory=list)
# Some CoMet-only items
price: float | None = None
is_version_of: str | None = None
rights: str | None = None
identifier: str | None = None
last_mark: str | None = None
# urls to cover image, not generally part of the metadata
_cover_image: str | None = None
_alternate_images: list[str] = dataclasses.field(default_factory=list)
def __post_init__(self) -> None:
for key, value in self.__dict__.items():
if value and key != "is_empty":
self.is_empty = False
break
def copy(self) -> GenericMetadata:
return copy.deepcopy(self)
def replace(self, /, **kwargs: Any) -> GenericMetadata:
tmp = self.copy()
tmp.__dict__.update(kwargs)
return tmp
def get_clean_metadata(self, *attributes: str) -> GenericMetadata:
new_md = GenericMetadata()
for attr in sorted(attributes):
if "." in attr:
lst, _, name = attr.partition(".")
old_value = getattr(self, lst)
new_value = getattr(new_md, lst)
if old_value:
if not new_value:
for x in old_value:
new_value.append(x.__class__())
for i, x in enumerate(old_value):
if isinstance(x, dict):
if name in x:
new_value[i][name] = x[name]
else:
setattr(new_value[i], name, getattr(x, name))
else:
old_value = getattr(self, attr)
if isinstance(old_value, list):
continue
setattr(new_md, attr, old_value)
new_md.__post_init__()
return new_md
def overlay(self, new_md: GenericMetadata) -> None:
"""Overlay a metadata object on this one
That is, when the new object has non-None values, over-write them
to this one.
"""
def assign(cur: str, new: Any) -> None:
if new is not None:
if new is REMOVE:
if isinstance(getattr(self, cur), (list, set)):
getattr(self, cur).clear()
else:
setattr(self, cur, None)
return
if isinstance(new, str) and len(new) == 0:
setattr(self, cur, None)
elif isinstance(new, (list, set)) and len(new) == 0:
pass
else:
setattr(self, cur, new)
if not new_md.is_empty:
self.is_empty = False
assign("tag_origin", new_md.tag_origin)
assign("issue_id", new_md.issue_id)
assign("series_id", new_md.series_id)
assign("series", new_md.series)
assign("series_aliases", new_md.series_aliases)
assign("issue", new_md.issue)
assign("issue_count", new_md.issue_count)
assign("title", new_md.title)
assign("title_aliases", new_md.title_aliases)
assign("volume", new_md.volume)
assign("volume_count", new_md.volume_count)
assign("genres", new_md.genres)
assign("description", new_md.description)
assign("notes", new_md.notes)
assign("alternate_series", new_md.alternate_series)
assign("alternate_number", new_md.alternate_number)
assign("alternate_count", new_md.alternate_count)
assign("story_arcs", new_md.story_arcs)
assign("series_groups", new_md.series_groups)
assign("publisher", new_md.publisher)
assign("imprint", new_md.imprint)
assign("day", new_md.day)
assign("month", new_md.month)
assign("year", new_md.year)
assign("language", new_md.language)
assign("country", new_md.country)
assign("web_links", new_md.web_links)
assign("format", new_md.format)
assign("manga", new_md.manga)
assign("black_and_white", new_md.black_and_white)
assign("maturity_rating", new_md.maturity_rating)
assign("critical_rating", new_md.critical_rating)
assign("scan_info", new_md.scan_info)
assign("tags", new_md.tags)
assign("pages", new_md.pages)
assign("page_count", new_md.page_count)
assign("characters", new_md.characters)
assign("teams", new_md.teams)
assign("locations", new_md.locations)
self.overlay_credits(new_md.credits)
assign("price", new_md.price)
assign("is_version_of", new_md.is_version_of)
assign("rights", new_md.rights)
assign("identifier", new_md.identifier)
assign("last_mark", new_md.last_mark)
assign("_cover_image", new_md._cover_image)
assign("_alternate_images", new_md._alternate_images)
def overlay_credits(self, new_credits: list[Credit]) -> None:
if new_credits is REMOVE:
self.credits = []
return
for c in new_credits:
primary = bool("primary" in c and c["primary"])
# Remove credit role if person is blank
if c["person"] == "":
for r in reversed(self.credits):
if r["role"].casefold() == c["role"].casefold():
self.credits.remove(r)
# otherwise, add it!
else:
self.add_credit(c["person"], c["role"], primary)
def apply_default_page_list(self, page_list: Sequence[str]) -> None:
# generate a default page list, with the first page marked as the cover
# Create a dictionary of all pages in the metadata
pages = {p["image_index"]: p for p in self.pages}
cover_set = False
# Go through each page in the archive
# The indexes should always match up
# It might be a good idea to validate that each page in `pages` is found
for i, filename in enumerate(page_list):
if i not in pages:
pages[i] = ImageMetadata(image_index=i, filename=filename)
else:
pages[i]["filename"] = filename
# Check if we know what the cover is
cover_set = pages[i].get("type", None) == PageType.FrontCover or cover_set
self.pages = [p[1] for p in sorted(pages.items())]
# Set the cover to the first image if we don't know what the cover is
if not cover_set:
self.pages[0]["type"] = PageType.FrontCover
def get_archive_page_index(self, pagenum: int) -> int:
# convert the displayed page number to the page index of the file in the archive
if pagenum < len(self.pages):
return int(self.pages[pagenum]["image_index"])
return 0
def get_cover_page_index_list(self) -> list[int]:
# return a list of archive page indices of cover pages
coverlist = []
for p in self.pages:
if "type" in p and p["type"] == PageType.FrontCover:
coverlist.append(int(p["image_index"]))
if len(coverlist) == 0:
coverlist.append(0)
return coverlist
def add_credit(self, person: str, role: str, primary: bool = False) -> None:
credit = Credit(person=person, role=role, primary=primary)
# look to see if it's not already there...
found = False
for c in self.credits:
if c["person"].casefold() == person.casefold() and c["role"].casefold() == role.casefold():
# no need to add it. just adjust the "primary" flag as needed
c["primary"] = primary
found = True
break
if not found:
self.credits.append(credit)
def get_primary_credit(self, role: str) -> str:
primary = ""
for credit in self.credits:
if "role" not in credit or "person" not in credit:
continue
if (primary == "" and credit["role"].casefold() == role.casefold()) or (
credit["role"].casefold() == role.casefold() and "primary" in credit and credit["primary"]
):
primary = credit["person"]
return primary
def __str__(self) -> str:
vals: list[tuple[str, Any]] = []
if self.is_empty:
return "No metadata"
def add_string(tag: str, val: Any) -> None:
if isinstance(val, Sequence):
if val:
vals.append((tag, val))
elif val is not None:
vals.append((tag, val))
add_string("series", self.series)
add_string("issue", self.issue)
add_string("issue_count", self.issue_count)
add_string("title", self.title)
add_string("publisher", self.publisher)
add_string("year", self.year)
add_string("month", self.month)
add_string("day", self.day)
add_string("volume", self.volume)
add_string("volume_count", self.volume_count)
add_string("genres", ", ".join(self.genres))
add_string("language", self.language)
add_string("country", self.country)
add_string("critical_rating", self.critical_rating)
add_string("alternate_series", self.alternate_series)
add_string("alternate_number", self.alternate_number)
add_string("alternate_count", self.alternate_count)
add_string("imprint", self.imprint)
add_string("web_links", [str(x) for x in self.web_links])
add_string("format", self.format)
add_string("manga", self.manga)
add_string("price", self.price)
add_string("is_version_of", self.is_version_of)
add_string("rights", self.rights)
add_string("identifier", self.identifier)
add_string("last_mark", self.last_mark)
if self.black_and_white:
add_string("black_and_white", self.black_and_white)
add_string("maturity_rating", self.maturity_rating)
add_string("story_arcs", self.story_arcs)
add_string("series_groups", self.series_groups)
add_string("scan_info", self.scan_info)
add_string("characters", ", ".join(self.characters))
add_string("teams", ", ".join(self.teams))
add_string("locations", ", ".join(self.locations))
add_string("description", self.description)
add_string("notes", self.notes)
add_string("tags", ", ".join(self.tags))
for c in self.credits:
primary = ""
if "primary" in c and c["primary"]:
primary = " [P]"
add_string("credit", c["role"] + ": " + c["person"] + primary)
# find the longest field name
flen = 0
for i in vals:
flen = max(flen, len(i[0]))
flen += 1
# format the data nicely
outstr = ""
fmt_str = "{0: <" + str(flen) + "} {1}\n"
for i in vals:
outstr += fmt_str.format(i[0] + ":", i[1])
return outstr
def fix_publisher(self) -> None:
if self.publisher is None:
return
if self.imprint is None:
self.imprint = ""
imprint, publisher = utils.get_publisher(self.publisher)
self.publisher = publisher
if self.imprint.casefold() in publisher.casefold():
self.imprint = None
if self.imprint is None or self.imprint == "":
self.imprint = imprint
elif self.imprint.casefold() in imprint.casefold():
self.imprint = imprint
md_test: GenericMetadata = GenericMetadata(
is_empty=False,
tag_origin=TagOrigin("comicvine", "Comic Vine"),
series="Cory Doctorow's Futuristic Tales of the Here and Now",
series_id="23437",
issue="1",
issue_id="140529",
title="Anda's Game",
publisher="IDW Publishing",
month=10,
year=2007,
day=1,
issue_count=6,
volume=1,
genres={"Sci-Fi"},
language="en",
description=(
"For 12-year-old Anda, getting paid real money to kill the characters of players who were cheating"
" in her favorite online computer game was a win-win situation. Until she found out who was paying her,"
" and what those characters meant to the livelihood of children around the world."
),
volume_count=None,
critical_rating=3.0,
country=None,
alternate_series="Tales",
alternate_number="2",
alternate_count=7,
imprint="craphound.com",
notes="Tagged with ComicTagger 1.3.2a5 using info from Comic Vine on 2022-04-16 15:52:26. [Issue ID 140529]",
web_links=[
parse_url("https://comicvine.gamespot.com/cory-doctorows-futuristic-tales-of-the-here-and-no/4000-140529/")
],
format="Series",
manga="No",
black_and_white=None,
page_count=24,
maturity_rating="Everyone 10+",
story_arcs=["Here and Now"],
series_groups=["Futuristic Tales"],
scan_info="(CC BY-NC-SA 3.0)",
characters={"Anda"},
teams={"Fahrenheit"},
locations=set(utils.split("lonely cottage ", ",")),
credits=[
Credit(primary=False, person="Dara Naraghi", role="Writer"),
Credit(primary=False, person="Esteve Polls", role="Penciller"),
Credit(primary=False, person="Esteve Polls", role="Inker"),
Credit(primary=False, person="Neil Uyetake", role="Letterer"),
Credit(primary=False, person="Sam Kieth", role="Cover"),
Credit(primary=False, person="Ted Adams", role="Editor"),
],
tags=set(),
pages=[
ImageMetadata(
image_index=0, height="1280", size="195977", width="800", type=PageType.FrontCover, filename="!cover.jpg"
),
ImageMetadata(image_index=1, height="2039", size="611993", width="1327", filename="01.jpg"),
ImageMetadata(image_index=2, height="2039", size="783726", width="1327", filename="02.jpg"),
ImageMetadata(image_index=3, height="2039", size="679584", width="1327", filename="03.jpg"),
ImageMetadata(image_index=4, height="2039", size="788179", width="1327", filename="04.jpg"),
ImageMetadata(image_index=5, height="2039", size="864433", width="1327", filename="05.jpg"),
ImageMetadata(image_index=6, height="2039", size="765606", width="1327", filename="06.jpg"),
ImageMetadata(image_index=7, height="2039", size="876427", width="1327", filename="07.jpg"),
ImageMetadata(image_index=8, height="2039", size="852622", width="1327", filename="08.jpg"),
ImageMetadata(image_index=9, height="2039", size="800205", width="1327", filename="09.jpg"),
ImageMetadata(image_index=10, height="2039", size="746243", width="1326", filename="10.jpg"),
ImageMetadata(image_index=11, height="2039", size="718062", width="1327", filename="11.jpg"),
ImageMetadata(image_index=12, height="2039", size="532179", width="1326", filename="12.jpg"),
ImageMetadata(image_index=13, height="2039", size="686708", width="1327", filename="13.jpg"),
ImageMetadata(image_index=14, height="2039", size="641907", width="1327", filename="14.jpg"),
ImageMetadata(image_index=15, height="2039", size="805388", width="1327", filename="15.jpg"),
ImageMetadata(image_index=16, height="2039", size="668927", width="1326", filename="16.jpg"),
ImageMetadata(image_index=17, height="2039", size="710605", width="1327", filename="17.jpg"),
ImageMetadata(image_index=18, height="2039", size="761398", width="1326", filename="18.jpg"),
ImageMetadata(image_index=19, height="2039", size="743807", width="1327", filename="19.jpg"),
ImageMetadata(image_index=20, height="2039", size="552911", width="1326", filename="20.jpg"),
ImageMetadata(image_index=21, height="2039", size="556827", width="1327", filename="21.jpg"),
ImageMetadata(image_index=22, height="2039", size="675078", width="1326", filename="22.jpg"),
ImageMetadata(
bookmark="Interview",
image_index=23,
height="2032",
size="800965",
width="1338",
type=PageType.Letters,
filename="23.jpg",
),
],
price=None,
is_version_of=None,
rights=None,
identifier=None,
last_mark=None,
_cover_image=None,
)
__all__ = (
"Url",
"parse_url",
"PageType",
"ImageMetadata",
"Credit",
"ComicSeries",
"TagOrigin",
"GenericMetadata",
)
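# --- Editor's sketch (not part of the original module): overlay semantics ---
# overlay() copies non-None values from another GenericMetadata onto this one:
# an empty string blanks a field, empty lists/sets are ignored, and the REMOVE
# sentinel explicitly clears a field. The values below are invented; a type
# checker would flag passing REMOVE where a set is declared.
def _overlay_example() -> GenericMetadata:
    base = md_test.copy()
    patch = GenericMetadata(title="A new title", genres=REMOVE, notes="")
    base.overlay(patch)
    assert base.title == "A new title"
    assert base.genres == set()  # cleared by the REMOVE sentinel
    assert base.notes is None    # cleared by the empty string
    return base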

130
comicapi/issuestring.py Normal file

@@ -0,0 +1,130 @@
"""Support for mixed digit/string type Issue field
Class for handling the odd permutations of an 'issue number' that the
comics industry throws at us.
e.g.: "12", "12.1", "0", "-1", "5AU", "100-2"
"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import unicodedata
logger = logging.getLogger(__name__)
class IssueString:
def __init__(self, text: str | None) -> None:
# break up the issue number string into 2 parts: the numeric and suffix string.
# (assumes that the numeric portion is always first)
self.num = None
self.suffix = ""
self.prefix = ""
if text is None:
return
text = str(text)
if len(text) == 0:
return
for idx, r in enumerate(text):
if not r.isalpha():
break
self.prefix = text[:idx]
self.num, self.suffix = self.get_number(text[idx:])
def get_number(self, text: str) -> tuple[float | None, str]:
num, suffix = None, ""
start = 0
# skip the minus sign if it's first
if text[0] in ("-", "+"):
start = 1
# if the character at start still isn't numeric, the whole string becomes the suffix
if text[start].isdigit() or text[start] == ".":
# walk through the string, look for split point (the first non-numeric)
decimal_count = 0
for idx in range(start, len(text)):
if not (text[idx].isdigit() or text[idx] in "."):
break
# special case: also split on second "."
if text[idx] == ".":
decimal_count += 1
if decimal_count > 1:
break
else:
idx = len(text)
# move trailing numeric decimal to suffix
# (only if there is other junk after it)
if text[idx - 1] == "." and len(text) != idx:
idx = idx - 1
# if there is no numeric after the minus, make the minus part of the suffix
if idx == 1 and start == 1:
idx = 0
if text[0:idx]:
num = float(text[0:idx])
suffix = text[idx : len(text)]
else:
suffix = text
return num, suffix
def as_string(self, pad: int = 0) -> str:
"""return the number, left side zero-padded, with suffix attached"""
# if there is no number return the text
if self.num is None:
return self.prefix + self.suffix
# negative is added back in last
negative = self.num < 0
num_f = abs(self.num)
# used for padding
num_int = int(num_f)
if num_f.is_integer():
num_s = str(num_int)
else:
num_s = str(num_f)
# create padding
padding = ""
# we only pad the whole number part, we don't care about the decimal
length = len(str(num_int))
if length < pad:
padding = "0" * (pad - length)
# add the padding to the front
num_s = padding + num_s
# finally add the negative back in
if negative:
num_s = "-" + num_s
# return the prefix + formatted number + suffix
return self.prefix + num_s + self.suffix
def as_float(self) -> float | None:
# return the float, with no suffix
if len(self.suffix) == 1 and self.suffix.isnumeric():
return (self.num or 0) + unicodedata.numeric(self.suffix)
return self.num
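# --- Editor's sketch (not part of the original module): typical conversions ---
# IssueString splits an issue into prefix, numeric and suffix parts so odd values
# can still be padded and compared numerically, e.g.:
#     IssueString("5AU").as_string(pad=3)  -> "005AU"
#     IssueString("-1").as_string(pad=3)   -> "-001"
#     IssueString("12.1").as_float()       -> 12.1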


@@ -0,0 +1,5 @@
from __future__ import annotations
from comicapi.metadata.metadata import Metadata
__all__ = ["Metadata"]

315
comicapi/metadata/comet.py Normal file

@@ -0,0 +1,315 @@
"""A class to encapsulate CoMet data"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import os
import xml.etree.ElementTree as ET
from typing import Any
from comicapi import utils
from comicapi.archivers import Archiver
from comicapi.comicarchive import ComicArchive
from comicapi.genericmetadata import GenericMetadata, ImageMetadata, PageType
from comicapi.metadata import Metadata
logger = logging.getLogger(__name__)
class CoMet(Metadata):
enabled = True
short_name = "comet"
def __init__(self, version: str) -> None:
super().__init__(version)
self.comet_filename = "CoMet.xml"
self.file = "CoMet.xml"
self.supported_attributes = {
"series",
"issue",
"title",
"volume",
"genres",
"description",
"publisher",
"language",
"format",
"maturity_rating",
"month",
"year",
"page_count",
"characters",
"credits",
"credits.person",
"credits.primary",
"credits.role",
"price",
"is_version_of",
"rights",
"identifier",
"last_mark",
"pages.type", # This is required for setting the cover image none of the other types will be saved
"pages",
}
def supports_credit_role(self, role: str) -> bool:
return role.casefold() in self._get_parseable_credits()
def supports_metadata(self, archive: Archiver) -> bool:
return archive.supports_files()
def has_metadata(self, archive: Archiver) -> bool:
if not self.supports_metadata(archive):
return False
has_metadata = False
# look at all xml files in root, and search for CoMet data, get first
for n in archive.get_filename_list():
if os.path.dirname(n) == "" and os.path.splitext(n)[1].casefold() == ".xml":
# read in XML file, and validate it
data = b""
try:
data = archive.read_file(n)
except Exception as e:
logger.warning("Error reading in Comet XML for validation! from %s: %s", archive.path, e)
if self._validate_bytes(data):
# since we found it, save it!
self.file = n
has_metadata = True
break
return has_metadata
def remove_metadata(self, archive: Archiver) -> bool:
return self.has_metadata(archive) and archive.remove_file(self.file)
def get_metadata(self, archive: Archiver) -> GenericMetadata:
if self.has_metadata(archive):
metadata = archive.read_file(self.file) or b""
if self._validate_bytes(metadata):
return self._metadata_from_bytes(metadata, archive)
return GenericMetadata()
def get_metadata_string(self, archive: Archiver) -> str:
if self.has_metadata(archive):
return ET.tostring(ET.fromstring(archive.read_file(self.file)), encoding="unicode", xml_declaration=True)
return ""
def set_metadata(self, metadata: GenericMetadata, archive: Archiver) -> bool:
if self.supports_metadata(archive):
success = True
xml = b""
if self.has_metadata(archive):
xml = archive.read_file(self.file)
if self.file != self.comet_filename:
success = self.remove_metadata(archive)
return success and archive.write_file(self.comet_filename, self._bytes_from_metadata(metadata, xml))
else:
logger.warning(f"Archive ({archive.name()}) does not support {self.name()} metadata")
return False
def name(self) -> str:
return "Comic Metadata (CoMet)"
@classmethod
def _get_parseable_credits(cls) -> list[str]:
parsable_credits: list[str] = []
parsable_credits.extend(GenericMetadata.writer_synonyms)
parsable_credits.extend(GenericMetadata.penciller_synonyms)
parsable_credits.extend(GenericMetadata.inker_synonyms)
parsable_credits.extend(GenericMetadata.colorist_synonyms)
parsable_credits.extend(GenericMetadata.letterer_synonyms)
parsable_credits.extend(GenericMetadata.cover_synonyms)
parsable_credits.extend(GenericMetadata.editor_synonyms)
return parsable_credits
def _metadata_from_bytes(self, string: bytes, archive: Archiver) -> GenericMetadata:
tree = ET.ElementTree(ET.fromstring(string))
return self._convert_xml_to_metadata(tree, archive)
def _bytes_from_metadata(self, metadata: GenericMetadata, xml: bytes = b"") -> bytes:
tree = self._convert_metadata_to_xml(metadata, xml)
return ET.tostring(tree.getroot(), encoding="utf-8", xml_declaration=True)
def _convert_metadata_to_xml(self, metadata: GenericMetadata, xml: bytes = b"") -> ET.ElementTree:
# shorthand for the metadata
md = metadata
if xml:
root = ET.fromstring(xml)
else:
# build a tree structure
root = ET.Element("comet")
root.attrib["xmlns:comet"] = "http://www.denvog.com/comet/"
root.attrib["xmlns:xsi"] = "http://www.w3.org/2001/XMLSchema-instance"
root.attrib["xsi:schemaLocation"] = "http://www.denvog.com http://www.denvog.com/comet/comet.xsd"
# helper func
def assign(comet_entry: str, md_entry: Any) -> None:
if md_entry is not None:
ET.SubElement(root, comet_entry).text = str(md_entry)
# title is mandatory
assign("title", md.title or "")
assign("series", md.series)
assign("issue", md.issue) # must be int??
assign("volume", md.volume)
assign("description", md.description)
assign("publisher", md.publisher)
assign("pages", md.page_count)
assign("format", md.format)
assign("language", md.language)
assign("rating", md.maturity_rating)
assign("price", md.price)
assign("isVersionOf", md.is_version_of)
assign("rights", md.rights)
assign("identifier", md.identifier)
assign("lastMark", md.last_mark)
assign("genre", ",".join(md.genres)) # TODO repeatable
for c in md.characters:
assign("character", c.strip())
if md.manga is not None and md.manga == "YesAndRightToLeft":
assign("readingDirection", "rtl")
if md.year is not None:
date_str = f"{md.year:04}"
if md.month is not None:
date_str += f"-{md.month:02}"
assign("date", date_str)
page = md.get_cover_page_index_list()[0]
assign("coverImage", md.pages[page]["filename"])
# loop thru credits, and build a list for each role that CoMet supports
for credit in metadata.credits:
if credit["role"].casefold() in set(GenericMetadata.writer_synonyms):
ET.SubElement(root, "writer").text = str(credit["person"])
if credit["role"].casefold() in set(GenericMetadata.penciller_synonyms):
ET.SubElement(root, "penciller").text = str(credit["person"])
if credit["role"].casefold() in set(GenericMetadata.inker_synonyms):
ET.SubElement(root, "inker").text = str(credit["person"])
if credit["role"].casefold() in set(GenericMetadata.colorist_synonyms):
ET.SubElement(root, "colorist").text = str(credit["person"])
if credit["role"].casefold() in set(GenericMetadata.letterer_synonyms):
ET.SubElement(root, "letterer").text = str(credit["person"])
if credit["role"].casefold() in set(GenericMetadata.cover_synonyms):
ET.SubElement(root, "coverDesigner").text = str(credit["person"])
if credit["role"].casefold() in set(GenericMetadata.editor_synonyms):
ET.SubElement(root, "editor").text = str(credit["person"])
ET.indent(root)
# wrap it in an ElementTree instance, and save as XML
tree = ET.ElementTree(root)
return tree
def _convert_xml_to_metadata(self, tree: ET.ElementTree, archive: Archiver) -> GenericMetadata:
root = tree.getroot()
if root.tag != "comet":
raise Exception("Not a CoMet file")
metadata = GenericMetadata()
md = metadata
# Helper function
def get(tag: str) -> Any:
node = root.find(tag)
if node is not None:
return node.text
return None
md.series = utils.xlate(get("series"))
md.title = utils.xlate(get("title"))
md.issue = utils.xlate(get("issue"))
md.volume = utils.xlate_int(get("volume"))
md.description = utils.xlate(get("description"))
md.publisher = utils.xlate(get("publisher"))
md.language = utils.xlate(get("language"))
md.format = utils.xlate(get("format"))
md.page_count = utils.xlate_int(get("pages"))
md.maturity_rating = utils.xlate(get("rating"))
md.price = utils.xlate_float(get("price"))
md.is_version_of = utils.xlate(get("isVersionOf"))
md.rights = utils.xlate(get("rights"))
md.identifier = utils.xlate(get("identifier"))
md.last_mark = utils.xlate(get("lastMark"))
_, md.month, md.year = utils.parse_date_str(utils.xlate(get("date")))
ca = ComicArchive(archive)
cover_filename = utils.xlate(get("coverImage"))
page_list = ca.get_page_name_list()
if cover_filename in page_list:
cover_index = page_list.index(cover_filename)
md.pages = [ImageMetadata(image_index=cover_index, filename=cover_filename, type=PageType.FrontCover)]
reading_direction = utils.xlate(get("readingDirection"))
if reading_direction is not None and reading_direction == "rtl":
md.manga = "YesAndRightToLeft"
# loop for genre tags
for n in root:
if n.tag == "genre":
md.genres.add((n.text or "").strip())
# loop for character tags
for n in root:
if n.tag == "character":
md.characters.add((n.text or "").strip())
# Now extract the credit info
for n in root:
if any(
[
n.tag == "writer",
n.tag == "penciller",
n.tag == "inker",
n.tag == "colorist",
n.tag == "letterer",
n.tag == "editor",
]
):
metadata.add_credit((n.text or "").strip(), n.tag.title())
if n.tag == "coverDesigner":
metadata.add_credit((n.text or "").strip(), "Cover")
metadata.is_empty = False
return metadata
# verify that the string actually contains CoMet data in XML format
def _validate_bytes(self, string: bytes) -> bool:
try:
tree = ET.ElementTree(ET.fromstring(string))
root = tree.getroot()
if root.tag != "comet":
return False
except ET.ParseError:
return False
return True
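# --- Editor's sketch (not part of the original module): metadata -> CoMet XML ---
# _bytes_from_metadata() is an internal helper, used here only to show the output
# format without needing an archive. md_test is the sample metadata defined in
# comicapi.genericmetadata and already carries a FrontCover page entry.
if __name__ == "__main__":
    from comicapi.genericmetadata import md_test

    comet = CoMet(version="1.0")
    print(comet._bytes_from_metadata(md_test).decode("utf-8"))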


@@ -0,0 +1,223 @@
"""A class to encapsulate the ComicBookInfo data"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import json
import logging
from datetime import datetime
from typing import Any, Literal, TypedDict
from comicapi import utils
from comicapi.archivers import Archiver
from comicapi.genericmetadata import Credit, GenericMetadata
from comicapi.metadata import Metadata
logger = logging.getLogger(__name__)
_CBILiteralType = Literal[
"series",
"title",
"issue",
"publisher",
"publicationMonth",
"publicationYear",
"numberOfIssues",
"comments",
"genre",
"volume",
"numberOfVolumes",
"language",
"country",
"rating",
"credits",
"tags",
]
class _ComicBookInfoJson(TypedDict, total=False):
series: str
title: str
publisher: str
publicationMonth: int
publicationYear: int
issue: int
numberOfIssues: int
volume: int
numberOfVolumes: int
rating: int
genre: str
language: str
country: str
credits: list[Credit]
tags: list[str]
comments: str
_CBIContainer = TypedDict("_CBIContainer", {"appID": str, "lastModified": str, "ComicBookInfo/1.0": _ComicBookInfoJson})
class ComicBookInfo(Metadata):
enabled = True
short_name = "cbi"
def __init__(self, version: str) -> None:
super().__init__(version)
self.supported_attributes = {
"series",
"issue",
"issue_count",
"title",
"volume",
"volume_count",
"genres",
"description",
"publisher",
"month",
"year",
"language",
"country",
"critical_rating",
"tags",
"credits",
"credits.person",
"credits.primary",
"credits.role",
}
def supports_credit_role(self, role: str) -> bool:
return True
def supports_metadata(self, archive: Archiver) -> bool:
return archive.supports_comment()
def has_metadata(self, archive: Archiver) -> bool:
return self.supports_metadata(archive) and self._validate_string(archive.get_comment())
def remove_metadata(self, archive: Archiver) -> bool:
return archive.set_comment("")
def get_metadata(self, archive: Archiver) -> GenericMetadata:
if self.has_metadata(archive):
comment = archive.get_comment()
if self._validate_string(comment):
return self._metadata_from_string(comment)
return GenericMetadata()
def get_metadata_string(self, archive: Archiver) -> str:
if self.has_metadata(archive):
return json.dumps(json.loads(archive.get_comment()), indent=2)
return ""
def set_metadata(self, metadata: GenericMetadata, archive: Archiver) -> bool:
if self.supports_metadata(archive):
return archive.set_comment(self._string_from_metadata(metadata))
else:
logger.warning(f"Archive ({archive.name()}) does not support {self.name()} metadata")
return False
def name(self) -> str:
return "ComicBookInfo"
def _metadata_from_string(self, string: str) -> GenericMetadata:
cbi_container: _CBIContainer = json.loads(string)
metadata = GenericMetadata()
cbi = cbi_container["ComicBookInfo/1.0"]
metadata.series = utils.xlate(cbi.get("series"))
metadata.title = utils.xlate(cbi.get("title"))
metadata.issue = utils.xlate(cbi.get("issue"))
metadata.publisher = utils.xlate(cbi.get("publisher"))
metadata.month = utils.xlate_int(cbi.get("publicationMonth"))
metadata.year = utils.xlate_int(cbi.get("publicationYear"))
metadata.issue_count = utils.xlate_int(cbi.get("numberOfIssues"))
metadata.description = utils.xlate(cbi.get("comments"))
metadata.genres = set(utils.split(cbi.get("genre"), ","))
metadata.volume = utils.xlate_int(cbi.get("volume"))
metadata.volume_count = utils.xlate_int(cbi.get("numberOfVolumes"))
metadata.language = utils.xlate(cbi.get("language"))
metadata.country = utils.xlate(cbi.get("country"))
metadata.critical_rating = utils.xlate_int(cbi.get("rating"))
metadata.credits = [
Credit(
person=x["person"] if "person" in x else "",
role=x["role"] if "role" in x else "",
primary=x["primary"] if "primary" in x else False,
)
for x in cbi.get("credits", [])
]
metadata.tags.update(cbi.get("tags", set()))
# need the language string to be ISO
if metadata.language:
metadata.language = utils.get_language_iso(metadata.language)
metadata.is_empty = False
return metadata
def _string_from_metadata(self, metadata: GenericMetadata) -> str:
cbi_container = self._create_json_dictionary(metadata)
return json.dumps(cbi_container)
def _validate_string(self, string: bytes | str) -> bool:
"""Verify that the string actually contains CBI data in JSON format"""
try:
cbi_container = json.loads(string)
except json.JSONDecodeError:
return False
return "ComicBookInfo/1.0" in cbi_container
def _create_json_dictionary(self, metadata: GenericMetadata) -> _CBIContainer:
"""Create the dictionary that we will convert to JSON text"""
cbi_container = _CBIContainer(
{
"appID": "ComicTagger/1.0.0",
"lastModified": str(datetime.now()),
"ComicBookInfo/1.0": {},
}
) # TODO: ctversion.version,
# helper func
def assign(cbi_entry: _CBILiteralType, md_entry: Any) -> None:
if md_entry is not None or isinstance(md_entry, str) and md_entry != "":
cbi_container["ComicBookInfo/1.0"][cbi_entry] = md_entry
assign("series", utils.xlate(metadata.series))
assign("title", utils.xlate(metadata.title))
assign("issue", utils.xlate(metadata.issue))
assign("publisher", utils.xlate(metadata.publisher))
assign("publicationMonth", utils.xlate_int(metadata.month))
assign("publicationYear", utils.xlate_int(metadata.year))
assign("numberOfIssues", utils.xlate_int(metadata.issue_count))
assign("comments", utils.xlate(metadata.description))
assign("genre", utils.xlate(",".join(metadata.genres)))
assign("volume", utils.xlate_int(metadata.volume))
assign("numberOfVolumes", utils.xlate_int(metadata.volume_count))
assign("language", utils.xlate(utils.get_language_from_iso(metadata.language)))
assign("country", utils.xlate(metadata.country))
assign("rating", utils.xlate_int(metadata.critical_rating))
assign("credits", metadata.credits)
assign("tags", list(metadata.tags))
return cbi_container
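# --- Editor's sketch (not part of the original module): what gets stored ---
# The CBI data lives in the archive comment as a JSON blob roughly of this shape
# (abbreviated, values invented):
#     {"appID": "ComicTagger/...", "lastModified": "...",
#      "ComicBookInfo/1.0": {"series": "...", "title": "...", "credits": [...]}}
# _string_from_metadata() produces that blob, and _validate_string() only checks
# that the "ComicBookInfo/1.0" key is present.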


@@ -0,0 +1,389 @@
"""A class to encapsulate ComicRack's ComicInfo.xml data"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import xml.etree.ElementTree as ET
from collections import OrderedDict
from typing import Any
from comicapi import utils
from comicapi.archivers import Archiver
from comicapi.genericmetadata import GenericMetadata, ImageMetadata
from comicapi.metadata import Metadata
logger = logging.getLogger(__name__)
class ComicRack(Metadata):
enabled = True
short_name = "cr"
def __init__(self, version: str) -> None:
super().__init__(version)
self.file = "ComicInfo.xml"
self.supported_attributes = {
"series",
"issue",
"issue_count",
"title",
"volume",
"genres",
"description",
"notes",
"alternate_series",
"alternate_number",
"alternate_count",
"story_arcs",
"series_groups",
"publisher",
"imprint",
"day",
"month",
"year",
"language",
"web_links",
"format",
"manga",
"black_and_white",
"maturity_rating",
"critical_rating",
"scan_info",
"pages",
"pages.bookmark",
"pages.double_page",
"pages.height",
"pages.image_index",
"pages.size",
"pages.type",
"pages.width",
"page_count",
"characters",
"teams",
"locations",
"credits",
"credits.person",
"credits.role",
}
def supports_credit_role(self, role: str) -> bool:
return role.casefold() in self._get_parseable_credits()
def supports_metadata(self, archive: Archiver) -> bool:
return archive.supports_files()
def has_metadata(self, archive: Archiver) -> bool:
return (
self.supports_metadata(archive)
and self.file in archive.get_filename_list()
and self._validate_bytes(archive.read_file(self.file))
)
def remove_metadata(self, archive: Archiver) -> bool:
return self.has_metadata(archive) and archive.remove_file(self.file)
def get_metadata(self, archive: Archiver) -> GenericMetadata:
if self.has_metadata(archive):
metadata = archive.read_file(self.file) or b""
if self._validate_bytes(metadata):
return self._metadata_from_bytes(metadata)
return GenericMetadata()
def get_metadata_string(self, archive: Archiver) -> str:
if self.has_metadata(archive):
return ET.tostring(ET.fromstring(archive.read_file(self.file)), encoding="unicode", xml_declaration=True)
return ""
def set_metadata(self, metadata: GenericMetadata, archive: Archiver) -> bool:
if self.supports_metadata(archive):
xml = b""
if self.has_metadata(archive):
xml = archive.read_file(self.file)
return archive.write_file(self.file, self._bytes_from_metadata(metadata, xml))
else:
logger.warning(f"Archive ({archive.name()}) does not support {self.name()} metadata")
return False
def name(self) -> str:
return "Comic Rack"
@classmethod
def _get_parseable_credits(cls) -> list[str]:
parsable_credits: list[str] = []
parsable_credits.extend(GenericMetadata.writer_synonyms)
parsable_credits.extend(GenericMetadata.penciller_synonyms)
parsable_credits.extend(GenericMetadata.inker_synonyms)
parsable_credits.extend(GenericMetadata.colorist_synonyms)
parsable_credits.extend(GenericMetadata.letterer_synonyms)
parsable_credits.extend(GenericMetadata.cover_synonyms)
parsable_credits.extend(GenericMetadata.editor_synonyms)
return parsable_credits
def _metadata_from_bytes(self, string: bytes) -> GenericMetadata:
root = ET.fromstring(string)
return self._convert_xml_to_metadata(root)
def _bytes_from_metadata(self, metadata: GenericMetadata, xml: bytes = b"") -> bytes:
root = self._convert_metadata_to_xml(metadata, xml)
return ET.tostring(root, encoding="utf-8", xml_declaration=True)
def _convert_metadata_to_xml(self, metadata: GenericMetadata, xml: bytes = b"") -> ET.Element:
# shorthand for the metadata
md = metadata
if xml:
root = ET.fromstring(xml)
else:
# build a tree structure
root = ET.Element("ComicInfo")
root.attrib["xmlns:xsi"] = "http://www.w3.org/2001/XMLSchema-instance"
root.attrib["xmlns:xsd"] = "http://www.w3.org/2001/XMLSchema"
# helper func
def assign(cr_entry: str, md_entry: Any) -> None:
if md_entry:
text = ""
if isinstance(md_entry, str):
text = md_entry
elif isinstance(md_entry, (list, set)):
text = ",".join(md_entry)
else:
text = str(md_entry)
et_entry = root.find(cr_entry)
if et_entry is not None:
et_entry.text = text
else:
ET.SubElement(root, cr_entry).text = text
else:
et_entry = root.find(cr_entry)
if et_entry is not None:
root.remove(et_entry)
# need to specially process the credits, since they are structured
# differently than CIX
credit_writer_list = []
credit_penciller_list = []
credit_inker_list = []
credit_colorist_list = []
credit_letterer_list = []
credit_cover_list = []
credit_editor_list = []
# first, loop thru credits, and build a list for each role that CIX
# supports
for credit in metadata.credits:
if credit["role"].casefold() in set(GenericMetadata.writer_synonyms):
credit_writer_list.append(credit["person"].replace(",", ""))
if credit["role"].casefold() in set(GenericMetadata.penciller_synonyms):
credit_penciller_list.append(credit["person"].replace(",", ""))
if credit["role"].casefold() in set(GenericMetadata.inker_synonyms):
credit_inker_list.append(credit["person"].replace(",", ""))
if credit["role"].casefold() in set(GenericMetadata.colorist_synonyms):
credit_colorist_list.append(credit["person"].replace(",", ""))
if credit["role"].casefold() in set(GenericMetadata.letterer_synonyms):
credit_letterer_list.append(credit["person"].replace(",", ""))
if credit["role"].casefold() in set(GenericMetadata.cover_synonyms):
credit_cover_list.append(credit["person"].replace(",", ""))
if credit["role"].casefold() in set(GenericMetadata.editor_synonyms):
credit_editor_list.append(credit["person"].replace(",", ""))
assign("Series", md.series)
assign("Number", md.issue)
assign("Count", md.issue_count)
assign("Title", md.title)
assign("Volume", md.volume)
assign("Genre", md.genres)
assign("Summary", md.description)
assign("Notes", md.notes)
assign("AlternateSeries", md.alternate_series)
assign("AlternateNumber", md.alternate_number)
assign("AlternateCount", md.alternate_count)
assign("StoryArc", md.story_arcs)
assign("SeriesGroup", md.series_groups)
assign("Publisher", md.publisher)
assign("Imprint", md.imprint)
assign("Day", md.day)
assign("Month", md.month)
assign("Year", md.year)
assign("LanguageISO", md.language)
assign("Web", " ".join(u.url for u in md.web_links))
assign("Format", md.format)
assign("Manga", md.manga)
assign("BlackAndWhite", "Yes" if md.black_and_white else None)
assign("AgeRating", md.maturity_rating)
assign("CommunityRating", md.critical_rating)
assign("ScanInformation", md.scan_info)
assign("PageCount", md.page_count)
assign("Characters", md.characters)
assign("Teams", md.teams)
assign("Locations", md.locations)
assign("Writer", ", ".join(credit_writer_list))
assign("Penciller", ", ".join(credit_penciller_list))
assign("Inker", ", ".join(credit_inker_list))
assign("Colorist", ", ".join(credit_colorist_list))
assign("Letterer", ", ".join(credit_letterer_list))
assign("CoverArtist", ", ".join(credit_cover_list))
assign("Editor", ", ".join(credit_editor_list))
# loop and add the page entries under pages node
pages_node = root.find("Pages")
if pages_node is not None:
pages_node.clear()
else:
pages_node = ET.SubElement(root, "Pages")
for page_dict in md.pages:
page_node = ET.SubElement(pages_node, "Page")
page_node.attrib = {}
if "bookmark" in page_dict:
page_node.attrib["Bookmark"] = str(page_dict["bookmark"])
if "double_page" in page_dict:
page_node.attrib["DoublePage"] = str(page_dict["double_page"])
if "image_index" in page_dict:
page_node.attrib["Image"] = str(page_dict["image_index"])
if "height" in page_dict:
page_node.attrib["ImageHeight"] = str(page_dict["height"])
if "size" in page_dict:
page_node.attrib["ImageSize"] = str(page_dict["size"])
if "width" in page_dict:
page_node.attrib["ImageWidth"] = str(page_dict["width"])
if "type" in page_dict:
page_node.attrib["Type"] = str(page_dict["type"])
page_node.attrib = OrderedDict(sorted(page_node.attrib.items()))
ET.indent(root)
return root
def _convert_xml_to_metadata(self, root: ET.Element) -> GenericMetadata:
if root.tag != "ComicInfo":
raise Exception("Not a ComicInfo file")
def get(name: str) -> str | None:
tag = root.find(name)
if tag is None:
return None
return tag.text
md = GenericMetadata()
md.series = utils.xlate(get("Series"))
md.issue = utils.xlate(get("Number"))
md.issue_count = utils.xlate_int(get("Count"))
md.title = utils.xlate(get("Title"))
md.volume = utils.xlate_int(get("Volume"))
md.genres = set(utils.split(get("Genre"), ","))
md.description = utils.xlate(get("Summary"))
md.notes = utils.xlate(get("Notes"))
md.alternate_series = utils.xlate(get("AlternateSeries"))
md.alternate_number = utils.xlate(get("AlternateNumber"))
md.alternate_count = utils.xlate_int(get("AlternateCount"))
md.story_arcs = utils.split(get("StoryArc"), ",")
md.series_groups = utils.split(get("SeriesGroup"), ",")
md.publisher = utils.xlate(get("Publisher"))
md.imprint = utils.xlate(get("Imprint"))
md.day = utils.xlate_int(get("Day"))
md.month = utils.xlate_int(get("Month"))
md.year = utils.xlate_int(get("Year"))
md.language = utils.xlate(get("LanguageISO"))
md.web_links = utils.split_urls(utils.xlate(get("Web")))
md.format = utils.xlate(get("Format"))
md.manga = utils.xlate(get("Manga"))
md.maturity_rating = utils.xlate(get("AgeRating"))
md.critical_rating = utils.xlate_float(get("CommunityRating"))
md.scan_info = utils.xlate(get("ScanInformation"))
md.page_count = utils.xlate_int(get("PageCount"))
md.characters = set(utils.split(get("Characters"), ","))
md.teams = set(utils.split(get("Teams"), ","))
md.locations = set(utils.split(get("Locations"), ","))
tmp = utils.xlate(get("BlackAndWhite"))
if tmp is not None:
md.black_and_white = tmp.casefold() in ["yes", "true", "1"]
# Now extract the credit info
for n in root:
if any(
[
n.tag == "Writer",
n.tag == "Penciller",
n.tag == "Inker",
n.tag == "Colorist",
n.tag == "Letterer",
n.tag == "Editor",
]
):
if n.text is not None:
for name in utils.split(n.text, ","):
md.add_credit(name.strip(), n.tag)
if n.tag == "CoverArtist":
if n.text is not None:
for name in utils.split(n.text, ","):
md.add_credit(name.strip(), "Cover")
# parse page data now
pages_node = root.find("Pages")
if pages_node is not None:
for i, page in enumerate(pages_node):
p: dict[str, Any] = page.attrib
md_page = ImageMetadata(image_index=int(p.get("Image", i)))
if "Bookmark" in p:
md_page["bookmark"] = p["Bookmark"]
if "DoublePage" in p:
md_page["double_page"] = True if p["DoublePage"].casefold() in ("yes", "true", "1") else False
if "ImageHeight" in p:
md_page["height"] = p["ImageHeight"]
if "ImageSize" in p:
md_page["size"] = p["ImageSize"]
if "ImageWidth" in p:
md_page["width"] = p["ImageWidth"]
if "Type" in p:
md_page["type"] = p["Type"]
md.pages.append(md_page)
md.is_empty = False
return md
def _validate_bytes(self, string: bytes) -> bool:
"""verify that the string actually contains CIX data in XML format"""
try:
root = ET.fromstring(string)
if root.tag != "ComicInfo":
return False
except ET.ParseError:
return False
return True
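# --- Editor's sketch (not part of the original module): resulting document shape ---
# _convert_metadata_to_xml() renders a ComicInfo.xml roughly of this form
# (abbreviated, element values invented):
#     <?xml version='1.0' encoding='utf-8'?>
#     <ComicInfo xmlns:xsi="..." xmlns:xsd="...">
#       <Series>...</Series>
#       <Number>1</Number>
#       <Writer>...</Writer>
#       <Pages>
#         <Page Image="0" Type="FrontCover" />
#       </Pages>
#     </ComicInfo>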


@@ -0,0 +1,123 @@
from __future__ import annotations
from comicapi.archivers import Archiver
from comicapi.genericmetadata import GenericMetadata
class Metadata:
enabled: bool = False
short_name: str = ""
def __init__(self, version: str) -> None:
self.version: str = version
self.supported_attributes = {
"tag_origin",
"issue_id",
"series_id",
"series",
"series_aliases",
"issue",
"issue_count",
"title",
"title_aliases",
"volume",
"volume_count",
"genres",
"description",
"notes",
"alternate_series",
"alternate_number",
"alternate_count",
"story_arcs",
"series_groups",
"publisher",
"imprint",
"day",
"month",
"year",
"language",
"country",
"web_link",
"format",
"manga",
"black_and_white",
"maturity_rating",
"critical_rating",
"scan_info",
"tags",
"pages",
"pages.type",
"pages.bookmark",
"pages.double_page",
"pages.image_index",
"pages.size",
"pages.height",
"pages.width",
"page_count",
"characters",
"teams",
"locations",
"credits",
"credits.person",
"credits.role",
"credits.primary",
"price",
"is_version_of",
"rights",
"identifier",
"last_mark",
}
def supports_credit_role(self, role: str) -> bool:
return False
def supports_metadata(self, archive: Archiver) -> bool:
"""
Checks the given archive for the ability to save this metadata style.
Should always return a bool. Failures should return False.
Typically consists of a call to either `archive.supports_comment` or `archive.supports_files`.
"""
return False
def has_metadata(self, archive: Archiver) -> bool:
"""
Checks the given archive for metadata.
Should always return a bool. Failures should return False.
"""
return False
def remove_metadata(self, archive: Archiver) -> bool:
"""
Removes the metadata from the given archive.
Should always return a bool. Failures should return False.
"""
return False
def get_metadata(self, archive: Archiver) -> GenericMetadata:
"""
Returns a GenericMetadata representing the data saved in the given archive.
Should always return a GenericMetadata. Failures should return an empty metadata object.
"""
return GenericMetadata()
def get_metadata_string(self, archive: Archiver) -> str:
"""
Returns the raw metadata as a string.
If the metadata is a binary format a roughly similar text format should be used.
Should always return a string. Failures should return the empty string.
"""
return ""
def set_metadata(self, metadata: GenericMetadata, archive: Archiver) -> bool:
"""
Saves the given metadata to the given archive.
Should always return a bool. Failures should return False.
"""
return False
def name(self) -> str:
"""
Returns the name of this metadata for display purposes, e.g. "Comic Rack".
Should always return a string. Failures should return the empty string.
"""
return ""

592
comicapi/utils.py Normal file

@@ -0,0 +1,592 @@
"""Some generic utilities"""
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import json
import logging
import os
import pathlib
import platform
import sys
import unicodedata
from collections import defaultdict
from collections.abc import Iterable, Mapping
from enum import Enum, auto
from shutil import which # noqa: F401
from typing import Any, TypeVar, cast
from comicfn2dict import comicfn2dict
import comicapi.data
from comicapi import filenamelexer, filenameparser
from ._url import Url as Url
from ._url import parse_url as parse_url
try:
import icu
del icu
icu_available = True
except ImportError:
icu_available = False
if sys.version_info < (3, 11):
class StrEnum(str, Enum):
"""
Enum where members are also (and must be) strings
"""
def __new__(cls, *values: Any) -> Any:
"values must already be of type `str`"
if len(values) > 3:
raise TypeError(f"too many arguments for str(): {values!r}")
if len(values) == 1:
# it must be a string
if not isinstance(values[0], str):
raise TypeError(f"{values[0]!r} is not a string")
if len(values) >= 2:
# check that encoding argument is a string
if not isinstance(values[1], str):
raise TypeError(f"encoding must be a string, not {values[1]!r}")
if len(values) == 3:
# check that errors argument is a string
if not isinstance(values[2], str):
raise TypeError("errors must be a string, not %r" % (values[2]))
value = str(*values)
member = str.__new__(cls, value)
member._value_ = value
return member
@staticmethod
def _generate_next_value_(name: str, start: int, count: int, last_values: Any) -> str:
"""
Return the lower-cased version of the member name.
"""
return name.lower()
else:
from enum import StrEnum
logger = logging.getLogger(__name__)
class Parser(StrEnum):
ORIGINAL = auto()
COMPLICATED = auto()
COMICFN2DICT = auto()
def _custom_key(tup: Any) -> Any:
import natsort
lst = []
for x in natsort.os_sort_keygen()(tup):
ret = x
if len(x) > 1 and isinstance(x[1], int) and isinstance(x[0], str) and x[0] == "":
ret = ("a", *x[1:])
lst.append(ret)
return tuple(lst)
T = TypeVar("T")
def os_sorted(lst: Iterable[T]) -> Iterable[T]:
import natsort
key = _custom_key
if icu_available or platform.system() == "Windows":
key = natsort.os_sort_keygen()
return sorted(lst, key=key)
def parse_filename(
filename: str,
parser: Parser = Parser.ORIGINAL,
remove_c2c: bool = False,
remove_fcbd: bool = False,
remove_publisher: bool = False,
split_words: bool = False,
allow_issue_start_with_letter: bool = False,
protofolius_issue_number_scheme: bool = False,
) -> filenameparser.FilenameInfo:
if not filename:
return filenameparser.FilenameInfo(
alternate="",
annual=False,
archive="",
c2c=False,
fcbd=False,
issue="",
issue_count="",
publisher="",
remainder="",
series="",
title="",
volume="",
volume_count="",
year="",
format="",
)
if split_words:
import wordninja
filename, ext = os.path.splitext(filename)
filename = " ".join(wordninja.split(filename)) + ext
fni = filenameparser.FilenameInfo(
alternate="",
annual=False,
archive="",
c2c=False,
fcbd=False,
format="",
issue="",
issue_count="",
publisher="",
remainder="",
series="",
title="",
volume="",
volume_count="",
year="",
)
if parser == Parser.COMPLICATED:
lex = filenamelexer.Lex(filename, allow_issue_start_with_letter)
p = filenameparser.Parse(
lex.items,
remove_c2c=remove_c2c,
remove_fcbd=remove_fcbd,
remove_publisher=remove_publisher,
protofolius_issue_number_scheme=protofolius_issue_number_scheme,
)
fni = p.filename_info
elif parser == Parser.COMICFN2DICT:
fn2d = comicfn2dict(filename)
fni = filenameparser.FilenameInfo(
alternate="",
annual=False,
archive=fn2d.get("ext", ""),
c2c=False,
fcbd=False,
issue=fn2d.get("issue", ""),
issue_count=fn2d.get("issue_count", ""),
publisher=fn2d.get("publisher", ""),
remainder=fn2d.get("scan_info", ""),
series=fn2d.get("series", ""),
title=fn2d.get("title", ""),
volume=fn2d.get("volume", ""),
volume_count=fn2d.get("volume_count", ""),
year=fn2d.get("year", ""),
format=fn2d.get("original_format", ""),
)
else:
fnp = filenameparser.FileNameParser()
fnp.parse_filename(filename)
fni = filenameparser.FilenameInfo(
alternate="",
annual=False,
archive="",
c2c=False,
fcbd=False,
issue=fnp.issue,
issue_count=fnp.issue_count,
publisher="",
remainder=fnp.remainder,
series=fnp.series,
title="",
volume=fnp.volume,
volume_count="",
year=fnp.year,
format="",
)
return fni
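# Hypothetical usage sketch of parse_filename (the filename is invented); the
# same call works with Parser.ORIGINAL, Parser.COMPLICATED or Parser.COMICFN2DICT,
# and fields the parser cannot determine come back as empty strings.
def _example_parse_filename() -> None:
    info = parse_filename(
        "Example Series v2 #003 (of 12) (2021).cbz",
        parser=Parser.COMPLICATED,
        remove_c2c=True,
    )
    print(info)  # a FilenameInfo with series/volume/issue/year populated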
def combine_notes(existing_notes: str | None, new_notes: str | None, split: str) -> str:
split_notes, split_str, untouched_notes = (existing_notes or "").rpartition(split)
if split_notes or split_str:
return (split_notes + (new_notes or "")).strip()
else:
return (untouched_notes + "\n" + (new_notes or "")).strip()
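# Worked example of combine_notes (values invented): everything after the last
# occurrence of `split` in the existing notes is replaced by the new notes; if
# the marker is absent, the new notes are appended on a new line.
def _example_combine_notes() -> None:
    assert combine_notes("Tagged with ComicTagger [CVDB123]", "[CVDB456]", "[CVDB") == "Tagged with ComicTagger [CVDB456]"
    assert combine_notes("Hand written notes", "[CVDB456]", "[CVDB") == "Hand written notes\n[CVDB456]"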
def parse_date_str(date_str: str | None) -> tuple[int | None, int | None, int | None]:
day = None
month = None
year = None
if date_str:
parts = date_str.split("-")
year = xlate_int(parts[0])
if len(parts) > 1:
month = xlate_int(parts[1])
if len(parts) > 2:
day = xlate_int(parts[2])
return day, month, year
def shorten_path(path: pathlib.Path, path2: pathlib.Path | None = None) -> tuple[pathlib.Path, pathlib.Path]:
if path2:
path2 = path2.absolute()
path = path.absolute()
shortened_path: pathlib.Path = path
relative_path = pathlib.Path(path.anchor)
if path.is_relative_to(path.home()):
relative_path = path.home()
shortened_path = path.relative_to(path.home())
if path.is_relative_to(path.cwd()):
relative_path = path.cwd()
shortened_path = path.relative_to(path.cwd())
if path2 and shortened_path.is_relative_to(path2.parent):
relative_path = path2
shortened_path = shortened_path.relative_to(path2)
return relative_path, shortened_path
def path_to_short_str(original_path: pathlib.Path, renamed_path: pathlib.Path | None = None) -> str:
rel, _original_path = shorten_path(original_path)
path_str = str(_original_path)
if rel.samefile(rel.cwd()):
path_str = f"./{_original_path}"
elif rel.samefile(rel.home()):
path_str = f"~/{_original_path}"
if renamed_path:
rel, path = shorten_path(renamed_path, original_path.parent)
rename_str = f" -> {path}"
if rel.samefile(rel.cwd()):
rename_str = f" -> ./{_original_path}"
elif rel.samefile(rel.home()):
rename_str = f" -> ~/{_original_path}"
path_str += rename_str
return path_str
def get_page_name_list(files: list[str]) -> list[str]:
# get the list of file names in the archive, and sort
files = cast(list[str], os_sorted(files))
# make a sub-list of image files
page_list = []
for name in files:
if (
os.path.splitext(name)[1].casefold() in [".jpg", ".jpeg", ".png", ".gif", ".webp", ".avif"]
and os.path.basename(name)[0] != "."
):
page_list.append(name)
return page_list
def get_recursive_filelist(pathlist: list[str]) -> list[str]:
"""Get a recursive list of of all files under all path items in the list"""
filelist: list[str] = []
for p in pathlist:
if os.path.isdir(p):
for root, _, files in os.walk(p):
for f in files:
filelist.append(os.path.join(root, f))
elif os.path.exists(p):
filelist.append(p)
return filelist
def add_to_path(dirname: str) -> None:
if dirname:
dirname = os.path.abspath(dirname)
paths = [os.path.normpath(x) for x in split(os.environ["PATH"], os.pathsep)]
if dirname not in paths:
paths.insert(0, dirname)
os.environ["PATH"] = os.pathsep.join(paths)
def remove_from_path(dirname: str) -> None:
if dirname:
dirname = os.path.abspath(dirname)
paths = [os.path.normpath(x) for x in split(os.environ["PATH"], os.pathsep) if dirname != os.path.normpath(x)]
os.environ["PATH"] = os.pathsep.join(paths)
def xlate_int(data: Any) -> int | None:
data = xlate_float(data)
if data is None:
return None
return int(data)
def xlate_float(data: Any) -> float | None:
if isinstance(data, str):
data = data.strip()
if data is None or data == "":
return None
i: str | int | float
if isinstance(data, (int, float)):
i = data
else:
i = str(data).translate(defaultdict(lambda: None, zip((ord(c) for c in "1234567890."), "1234567890.")))
if i == "":
return None
try:
return float(i)
except ValueError:
return None
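# Illustrative behaviour of the xlate helpers (values invented): characters
# other than digits and "." are stripped before conversion, so price-like
# strings still parse; empty input yields None.
def _example_xlate() -> None:
    assert xlate_float("$3.99 USD") == 3.99
    assert xlate_int("3.99") == 3
    assert xlate_int("") is None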
def xlate(data: Any) -> str | None:
if data is None or isinstance(data, str) and data.strip() == "":
return None
return str(data).strip()
def split(s: str | None, c: str) -> list[str]:
s = xlate(s)
if s:
return [x.strip() for x in s.strip().split(c) if x.strip()]
return []
def split_urls(s: str | None) -> list[Url]:
if s is None:
return []
# Find occurrences of ' http'
if s.count("http") > 1 and s.count(" http") >= 1:
urls = []
# Split urls out
url_strings = split(s, " http")
# Restore the 'http' scheme (removed by the split) and parse the url
for i, url_string in enumerate(url_strings):
if not url_string.startswith("http"):
url_string = "http" + url_string
urls.append(parse_url(url_string))
return urls
else:
return [parse_url(s)]
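# Hypothetical illustration of split_urls (URLs invented): several URLs
# separated by spaces are split apart and each gets its "http" scheme restored
# before parsing; a single URL is parsed as-is.
def _example_split_urls() -> None:
    two = split_urls("https://example.com/a http://example.org/b")
    one = split_urls("https://example.com/only-one")
    print(len(two), len(one))  # 2 1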
def remove_articles(text: str) -> str:
text = text.casefold()
articles = [
"&",
"a",
"am",
"an",
"and",
"as",
"at",
"be",
"but",
"by",
"for",
"if",
"is",
"issue",
"it",
"it's",
"its",
"itself",
"of",
"or",
"so",
"the",
"the",
"with",
]
new_text = ""
for word in text.split():
if word not in articles:
new_text += word + " "
new_text = new_text[:-1]
return new_text
def sanitize_title(text: str, basic: bool = False) -> str:
# normalize unicode and convert to ascii. Does not work for everything, e.g. ½ becomes 12, not 1/2
text = unicodedata.normalize("NFKD", text).casefold()
# comicvine keeps apostrophes as part of the word
text = text.replace("'", "")
text = text.replace('"', "")
if not basic:
# comicvine ignores punctuation and accents
# remove all characters that are not a letter, separator (space) or number
# replace any "dash punctuation" with a space
# makes sure that batman-superman and self-proclaimed stay separate words
text = "".join(
c if unicodedata.category(c)[0] not in "P" else " " for c in text if unicodedata.category(c)[0] in "LZNP"
)
# remove articles and extra whitespace (text is already lower case)
text = remove_articles(text).strip()
return text
def titles_match(search_title: str, record_title: str, threshold: int = 90) -> bool:
import rapidfuzz.fuzz
sanitized_search = sanitize_title(search_title)
sanitized_record = sanitize_title(record_title)
ratio = int(rapidfuzz.fuzz.ratio(sanitized_search, sanitized_record))
logger.debug(
"search title: %s ; record title: %s ; ratio: %d ; match threshold: %d",
search_title,
record_title,
ratio,
threshold,
)
return ratio >= threshold
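# Illustrative calls (titles invented): both titles are run through
# sanitize_title before the fuzzy ratio is computed, so punctuation and leading
# articles do not affect the comparison.
def _example_titles_match() -> None:
    assert titles_match("The Amazing Spider-Man", "amazing spider man")
    assert not titles_match("Batman", "Superman")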
def unique_file(file_name: pathlib.Path) -> pathlib.Path:
name = file_name.stem
counter = 1
while True:
if not file_name.exists():
return file_name
file_name = file_name.with_stem(name + " (" + str(counter) + ")")
counter += 1
def parse_version(s: str) -> tuple[int, int, int]:
str_parts = s.split(".")[:3]
parts = [int(x) if x.isdigit() else 0 for x in str_parts]
parts.extend([0] * (3 - len(parts))) # Ensure exactly three elements in the resulting list
return (parts[0], parts[1], parts[2])
_languages: dict[str | None, str | None] = defaultdict(lambda: None)
_countries: dict[str | None, str | None] = defaultdict(lambda: None)
def countries() -> dict[str | None, str | None]:
if not _countries:
import isocodes
for alpha_2, c in isocodes.countries.by_alpha_2:
_countries[alpha_2] = c["name"]
return _countries
def languages() -> dict[str | None, str | None]:
if not _languages:
import isocodes
for alpha_2, lng in isocodes.extendend_languages._sorted_by_index(index="alpha_2"):
_languages[alpha_2] = lng["name"]
return _languages
def get_language_from_iso(iso: str | None) -> str | None:
return languages()[iso]
def get_language_iso(string: str | None) -> str | None:
if string is None:
return None
import isocodes
# Return current string if all else fails
lang = string.casefold()
found = None
for lng in isocodes.extendend_languages.items:
for x in ("alpha_2", "alpha_3", "bibliographic", "common_name", "name"):
if x in lng and lng[x].casefold() == lang:
found = lng
if found:
break
if found:
return found.get("alpha_2", None)
return lang
def get_country_from_iso(iso: str | None) -> str | None:
return countries()[iso]
def get_publisher(publisher: str) -> tuple[str, str]:
imprint = ""
for pub in publishers.values():
imprint, publisher, ok = pub[publisher]
if ok:
break
return imprint, publisher
def update_publishers(new_publishers: Mapping[str, Mapping[str, str]]) -> None:
for publisher in new_publishers:
if publisher in publishers:
publishers[publisher].update(new_publishers[publisher])
else:
publishers[publisher] = ImprintDict(publisher, new_publishers[publisher])
class ImprintDict(dict): # type: ignore
"""
ImprintDict takes a publisher and a dict or mapping of lowercased
imprint names to the proper imprint name. Retrieving a value from an
ImprintDict returns a tuple of (imprint, publisher, keyExists).
If the key does not exist, the key is returned unchanged as the publisher.
"""
def __init__(self, publisher: str, mapping: tuple | Mapping = (), **kwargs: dict) -> None: # type: ignore
super().__init__(mapping, **kwargs)
self.publisher = publisher
def __missing__(self, key: str) -> None:
return None
def __getitem__(self, k: str) -> tuple[str, str, bool]:
item = super().__getitem__(k.casefold())
if k.casefold() == self.publisher.casefold():
return "", self.publisher, True
if item is None:
return "", k, False
else:
return item, self.publisher, True
def copy(self) -> ImprintDict:
return ImprintDict(self.publisher, super().copy())
publishers: dict[str, ImprintDict] = {}
def load_publishers() -> None:
try:
update_publishers(json.loads((comicapi.data.data_path / "publishers.json").read_text("utf-8")))
except Exception:
logger.exception("Failed to load publishers.json; The are no publishers or imprints loaded")

View File

@ -1,5 +0,0 @@
#!/usr/bin/env python
from comictaggerlib.main import ctmain
if __name__ == '__main__':
ctmain()

View File

@ -1,18 +0,0 @@
The unrar.dll library is freeware. This means:
1. All copyrights to RAR and the unrar.dll are exclusively
owned by the author - Alexander Roshal.
2. The unrar.dll library may be used in any software to handle RAR
archives without limitations free of charge.
3. THE RAR ARCHIVER AND THE UNRAR.DLL LIBRARY ARE DISTRIBUTED "AS IS".
NO WARRANTY OF ANY KIND IS EXPRESSED OR IMPLIED. YOU USE AT
YOUR OWN RISK. THE AUTHOR WILL NOT BE LIABLE FOR DATA LOSS,
DAMAGES, LOSS OF PROFITS OR ANY OTHER KIND OF LOSS WHILE USING
OR MISUSING THIS SOFTWARE.
Thank you for your interest in RAR and unrar.dll.
Alexander L. Roshal

View File

@ -1,140 +0,0 @@
#ifndef _UNRAR_DLL_
#define _UNRAR_DLL_
#define ERAR_END_ARCHIVE 10
#define ERAR_NO_MEMORY 11
#define ERAR_BAD_DATA 12
#define ERAR_BAD_ARCHIVE 13
#define ERAR_UNKNOWN_FORMAT 14
#define ERAR_EOPEN 15
#define ERAR_ECREATE 16
#define ERAR_ECLOSE 17
#define ERAR_EREAD 18
#define ERAR_EWRITE 19
#define ERAR_SMALL_BUF 20
#define ERAR_UNKNOWN 21
#define ERAR_MISSING_PASSWORD 22
#define RAR_OM_LIST 0
#define RAR_OM_EXTRACT 1
#define RAR_OM_LIST_INCSPLIT 2
#define RAR_SKIP 0
#define RAR_TEST 1
#define RAR_EXTRACT 2
#define RAR_VOL_ASK 0
#define RAR_VOL_NOTIFY 1
#define RAR_DLL_VERSION 4
#ifdef _UNIX
#define CALLBACK
#define PASCAL
#define LONG long
#define HANDLE void *
#define LPARAM long
#define UINT unsigned int
#endif
struct RARHeaderData
{
char ArcName[260];
char FileName[260];
unsigned int Flags;
unsigned int PackSize;
unsigned int UnpSize;
unsigned int HostOS;
unsigned int FileCRC;
unsigned int FileTime;
unsigned int UnpVer;
unsigned int Method;
unsigned int FileAttr;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
};
struct RARHeaderDataEx
{
char ArcName[1024];
wchar_t ArcNameW[1024];
char FileName[1024];
wchar_t FileNameW[1024];
unsigned int Flags;
unsigned int PackSize;
unsigned int PackSizeHigh;
unsigned int UnpSize;
unsigned int UnpSizeHigh;
unsigned int HostOS;
unsigned int FileCRC;
unsigned int FileTime;
unsigned int UnpVer;
unsigned int Method;
unsigned int FileAttr;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
unsigned int Reserved[1024];
};
struct RAROpenArchiveData
{
char *ArcName;
unsigned int OpenMode;
unsigned int OpenResult;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
};
struct RAROpenArchiveDataEx
{
char *ArcName;
wchar_t *ArcNameW;
unsigned int OpenMode;
unsigned int OpenResult;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
unsigned int Flags;
unsigned int Reserved[32];
};
enum UNRARCALLBACK_MESSAGES {
UCM_CHANGEVOLUME,UCM_PROCESSDATA,UCM_NEEDPASSWORD
};
typedef int (CALLBACK *UNRARCALLBACK)(UINT msg,LPARAM UserData,LPARAM P1,LPARAM P2);
typedef int (PASCAL *CHANGEVOLPROC)(char *ArcName,int Mode);
typedef int (PASCAL *PROCESSDATAPROC)(unsigned char *Addr,int Size);
#ifdef __cplusplus
extern "C" {
#endif
HANDLE PASCAL RAROpenArchive(struct RAROpenArchiveData *ArchiveData);
HANDLE PASCAL RAROpenArchiveEx(struct RAROpenArchiveDataEx *ArchiveData);
int PASCAL RARCloseArchive(HANDLE hArcData);
int PASCAL RARReadHeader(HANDLE hArcData,struct RARHeaderData *HeaderData);
int PASCAL RARReadHeaderEx(HANDLE hArcData,struct RARHeaderDataEx *HeaderData);
int PASCAL RARProcessFile(HANDLE hArcData,int Operation,char *DestPath,char *DestName);
int PASCAL RARProcessFileW(HANDLE hArcData,int Operation,wchar_t *DestPath,wchar_t *DestName);
void PASCAL RARSetCallback(HANDLE hArcData,UNRARCALLBACK Callback,LPARAM UserData);
void PASCAL RARSetChangeVolProc(HANDLE hArcData,CHANGEVOLPROC ChangeVolProc);
void PASCAL RARSetProcessDataProc(HANDLE hArcData,PROCESSDATAPROC ProcessDataProc);
void PASCAL RARSetPassword(HANDLE hArcData,char *Password);
int PASCAL RARGetDllVersion();
#ifdef __cplusplus
}
#endif
#endif

View File

@ -1,606 +0,0 @@
UnRAR.dll Manual
~~~~~~~~~~~~~~~~
UnRAR.dll is a 32-bit Windows dynamic-link library which provides
file extraction from RAR archives.
Exported functions
====================================================================
HANDLE PASCAL RAROpenArchive(struct RAROpenArchiveData *ArchiveData)
====================================================================
Description
~~~~~~~~~~~
Open RAR archive and allocate memory structures
Parameters
~~~~~~~~~~
ArchiveData Points to RAROpenArchiveData structure
struct RAROpenArchiveData
{
char *ArcName;
UINT OpenMode;
UINT OpenResult;
char *CmtBuf;
UINT CmtBufSize;
UINT CmtSize;
UINT CmtState;
};
Structure fields:
ArcName
Input parameter which should point to zero terminated string
containing the archive name.
OpenMode
Input parameter.
Possible values
RAR_OM_LIST
Open archive for reading file headers only.
RAR_OM_EXTRACT
Open archive for testing and extracting files.
RAR_OM_LIST_INCSPLIT
Open archive for reading file headers only. If you open an archive
in such mode, RARReadHeader[Ex] will return all file headers,
including those with "file continued from previous volume" flag.
In case of RAR_OM_LIST such headers are automatically skipped.
So if you process RAR volumes in RAR_OM_LIST_INCSPLIT mode, you will
get several file header records for same file if file is split between
volumes. For such files only the last file header record will contain
the correct file CRC and if you wish to get the correct packed size,
you need to sum up packed sizes of all parts.
OpenResult
Output parameter.
Possible values
0 Success
ERAR_NO_MEMORY Not enough memory to initialize data structures
ERAR_BAD_DATA Archive header broken
ERAR_BAD_ARCHIVE File is not valid RAR archive
ERAR_UNKNOWN_FORMAT Unknown encryption used for archive headers
ERAR_EOPEN File open error
CmtBuf
Input parameter which should point to the buffer for archive
comments. Maximum comment size is limited to 64Kb. Comment text is
zero terminated. If the comment text is larger than the buffer
size, the comment text will be truncated. If CmtBuf is set to
NULL, comments will not be read.
CmtBufSize
Input parameter which should contain size of buffer for archive
comments.
CmtSize
Output parameter containing size of comments actually read into the
buffer, cannot exceed CmtBufSize.
CmtState
Output parameter.
Possible values
0 comments not present
1 Comments read completely
ERAR_NO_MEMORY Not enough memory to extract comments
ERAR_BAD_DATA Broken comment
ERAR_UNKNOWN_FORMAT Unknown comment format
ERAR_SMALL_BUF Buffer too small, comments not completely read
Return values
~~~~~~~~~~~~~
Archive handle or NULL in case of error
========================================================================
HANDLE PASCAL RAROpenArchiveEx(struct RAROpenArchiveDataEx *ArchiveData)
========================================================================
Description
~~~~~~~~~~~
Similar to RAROpenArchive, but uses the RAROpenArchiveDataEx structure,
allowing you to specify a Unicode archive name and returning information
about archive flags.
Parameters
~~~~~~~~~~
ArchiveData Points to RAROpenArchiveDataEx structure
struct RAROpenArchiveDataEx
{
char *ArcName;
wchar_t *ArcNameW;
unsigned int OpenMode;
unsigned int OpenResult;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
unsigned int Flags;
unsigned int Reserved[32];
};
Structure fields:
ArcNameW
Input parameter which should point to zero terminated Unicode string
containing the archive name or NULL if Unicode name is not specified.
Flags
Output parameter. Combination of bit flags.
Possible values
0x0001 - Volume attribute (archive volume)
0x0002 - Archive comment present
0x0004 - Archive lock attribute
0x0008 - Solid attribute (solid archive)
0x0010 - New volume naming scheme ('volname.partN.rar')
0x0020 - Authenticity information present
0x0040 - Recovery record present
0x0080 - Block headers are encrypted
0x0100 - First volume (set only by RAR 3.0 and later)
Reserved[32]
Reserved for future use. Must be zero.
Information on other structure fields and function return values
is available above, in RAROpenArchive function description.
====================================================================
int PASCAL RARCloseArchive(HANDLE hArcData)
====================================================================
Description
~~~~~~~~~~~
Close RAR archive and release allocated memory. It must be called when
archive processing is finished, even if the archive processing was stopped
due to an error.
Parameters
~~~~~~~~~~
hArcData
This parameter should contain the archive handle obtained from the
RAROpenArchive function call.
Return values
~~~~~~~~~~~~~
0 Success
ERAR_ECLOSE Archive close error
====================================================================
int PASCAL RARReadHeader(HANDLE hArcData,
struct RARHeaderData *HeaderData)
====================================================================
Description
~~~~~~~~~~~
Read header of file in archive.
Parameters
~~~~~~~~~~
hArcData
This parameter should contain the archive handle obtained from the
RAROpenArchive function call.
HeaderData
It should point to RARHeaderData structure:
struct RARHeaderData
{
char ArcName[260];
char FileName[260];
UINT Flags;
UINT PackSize;
UINT UnpSize;
UINT HostOS;
UINT FileCRC;
UINT FileTime;
UINT UnpVer;
UINT Method;
UINT FileAttr;
char *CmtBuf;
UINT CmtBufSize;
UINT CmtSize;
UINT CmtState;
};
Structure fields:
ArcName
Output parameter which contains a zero terminated string of the
current archive name. May be used to determine the current volume
name.
FileName
Output parameter which contains a zero terminated string of the
file name in OEM (DOS) encoding.
Flags
Output parameter which contains file flags:
0x01 - file continued from previous volume
0x02 - file continued on next volume
0x04 - file encrypted with password
0x08 - file comment present
0x10 - compression of previous files is used (solid flag)
bits 7 6 5
0 0 0 - dictionary size 64 Kb
0 0 1 - dictionary size 128 Kb
0 1 0 - dictionary size 256 Kb
0 1 1 - dictionary size 512 Kb
1 0 0 - dictionary size 1024 Kb
1 0 1 - dictionary size 2048 KB
1 1 0 - dictionary size 4096 KB
1 1 1 - file is directory
Other bits are reserved.
PackSize
Output parameter means packed file size or size of the
file part if file was split between volumes.
UnpSize
Output parameter - unpacked file size.
HostOS
Output parameter - operating system used for archiving:
0 - MS DOS;
1 - OS/2.
2 - Win32
3 - Unix
FileCRC
Output parameter which contains unpacked file CRC. In case of file parts
split between volumes only the last part contains the correct CRC
and it is accessible only in RAR_OM_LIST_INCSPLIT listing mode.
FileTime
Output parameter - contains date and time in standard MS DOS format.
UnpVer
Output parameter - RAR version needed to extract file.
It is encoded as 10 * Major version + minor version.
Method
Output parameter - packing method.
FileAttr
Output parameter - file attributes.
CmtBuf
File comments support is not implemented in the new DLL version yet.
Now CmtState is always 0.
/*
* Input parameter which should point to the buffer for file
* comments. Maximum comment size is limited to 64Kb. Comment text is
* a zero terminated string in OEM encoding. If the comment text is
* larger than the buffer size, the comment text will be truncated.
* If CmtBuf is set to NULL, comments will not be read.
*/
CmtBufSize
Input parameter which should contain size of buffer for archive
comments.
CmtSize
Output parameter containing size of comments actually read into the
buffer, should not exceed CmtBufSize.
CmtState
Output parameter.
Possible values
0 Absent comments
1 Comments read completely
ERAR_NO_MEMORY Not enough memory to extract comments
ERAR_BAD_DATA Broken comment
ERAR_UNKNOWN_FORMAT Unknown comment format
ERAR_SMALL_BUF Buffer too small, comments not completely read
Return values
~~~~~~~~~~~~~
0 Success
ERAR_END_ARCHIVE End of archive
ERAR_BAD_DATA File header broken
====================================================================
int PASCAL RARReadHeaderEx(HANDLE hArcData,
struct RARHeaderDataEx *HeaderData)
====================================================================
Description
~~~~~~~~~~~
Similar to RARReadHeader, but uses RARHeaderDataEx structure,
containing information about Unicode file names and 64 bit file sizes.
struct RARHeaderDataEx
{
char ArcName[1024];
wchar_t ArcNameW[1024];
char FileName[1024];
wchar_t FileNameW[1024];
unsigned int Flags;
unsigned int PackSize;
unsigned int PackSizeHigh;
unsigned int UnpSize;
unsigned int UnpSizeHigh;
unsigned int HostOS;
unsigned int FileCRC;
unsigned int FileTime;
unsigned int UnpVer;
unsigned int Method;
unsigned int FileAttr;
char *CmtBuf;
unsigned int CmtBufSize;
unsigned int CmtSize;
unsigned int CmtState;
unsigned int Reserved[1024];
};
====================================================================
int PASCAL RARProcessFile(HANDLE hArcData,
int Operation,
char *DestPath,
char *DestName)
====================================================================
Description
~~~~~~~~~~~
Performs action and moves the current position in the archive to
the next file. Extract or test the current file from the archive
opened in RAR_OM_EXTRACT mode. If the mode RAR_OM_LIST is set,
then a call to this function will simply skip the archive position
to the next file.
Parameters
~~~~~~~~~~
hArcData
This parameter should contain the archive handle obtained from the
RAROpenArchive function call.
Operation
File operation.
Possible values
RAR_SKIP Move to the next file in the archive. If the
archive is solid and RAR_OM_EXTRACT mode was set
when the archive was opened, the current file will
be processed - the operation will be performed
slower than a simple seek.
RAR_TEST Test the current file and move to the next file in
the archive. If the archive was opened with
RAR_OM_LIST mode, the operation is equal to
RAR_SKIP.
RAR_EXTRACT Extract the current file and move to the next file.
If the archive was opened with RAR_OM_LIST mode,
the operation is equal to RAR_SKIP.
DestPath
This parameter should point to a zero terminated string containing the
destination directory to which files are extracted. If DestPath is equal
to NULL, files are extracted to the current directory. This parameter has
meaning only if DestName is NULL.
DestName
This parameter should point to a string containing the full path and name
to assign to the extracted file, or it can be NULL to use the default name.
If DestName is defined (not NULL), it overrides both the original file
name saved in the archive and the path specified in the DestPath setting.
Both DestPath and DestName must be in OEM encoding. If necessary,
use CharToOem to convert text to OEM before passing to this function.
Return values
~~~~~~~~~~~~~
0 Success
ERAR_BAD_DATA File CRC error
ERAR_BAD_ARCHIVE Volume is not valid RAR archive
ERAR_UNKNOWN_FORMAT Unknown archive format
ERAR_EOPEN Volume open error
ERAR_ECREATE File create error
ERAR_ECLOSE File close error
ERAR_EREAD Read error
ERAR_EWRITE Write error
Note: if you wish to cancel extraction, return -1 when processing
UCM_PROCESSDATA callback message.
====================================================================
int PASCAL RARProcessFileW(HANDLE hArcData,
int Operation,
wchar_t *DestPath,
wchar_t *DestName)
====================================================================
Description
~~~~~~~~~~~
Unicode version of RARProcessFile. It uses Unicode DestPath
and DestName parameters, other parameters and return values
are the same as in RARProcessFile.
====================================================================
void PASCAL RARSetCallback(HANDLE hArcData,
int PASCAL (*CallbackProc)(UINT msg,LPARAM UserData,LPARAM P1,LPARAM P2),
LPARAM UserData);
====================================================================
Description
~~~~~~~~~~~
Set a user-defined callback function to process Unrar events.
Parameters
~~~~~~~~~~
hArcData
This parameter should contain the archive handle obtained from the
RAROpenArchive function call.
CallbackProc
It should point to a user-defined callback function.
The function will be passed four parameters:
msg Type of event. Described below.
UserData User defined value passed to RARSetCallback.
P1 and P2 Event dependent parameters. Described below.
Possible events
UCM_CHANGEVOLUME Process volume change.
P1 Points to the zero terminated name
of the next volume.
P2 The function call mode:
RAR_VOL_ASK Required volume is absent. The function should
prompt user and return a positive value
to retry or return -1 value to terminate
operation. The function may also specify a new
volume name, placing it to the address specified
by P1 parameter.
RAR_VOL_NOTIFY Required volume is successfully opened.
This is a notification call and volume name
modification is not allowed. The function should
return a positive value to continue or -1
to terminate operation.
UCM_PROCESSDATA Process unpacked data. It may be used to read
a file while it is being extracted or tested
without actually extracting the file to disk.
Return a positive value to continue processing
or -1 to cancel the archive operation.
P1 Address pointing to the unpacked data.
Function may refer to the data but must not
change it.
P2 Size of the unpacked data. It is guaranteed
only that the size will not exceed the maximum
dictionary size (4 Mb in RAR 3.0).
UCM_NEEDPASSWORD DLL needs a password to process archive.
This message must be processed if you wish
to be able to handle archives with encrypted
file names. It can also be used as a replacement
for the RARSetPassword function even for usual
encrypted files with non-encrypted names.
P1 Address pointing to the buffer for a password.
You need to copy a password here.
P2 Size of the password buffer.
UserData
User data passed to callback function.
Other functions of UnRAR.dll should not be called from the callback
function.
Return values
~~~~~~~~~~~~~
None
====================================================================
void PASCAL RARSetChangeVolProc(HANDLE hArcData,
int PASCAL (*ChangeVolProc)(char *ArcName,int Mode));
====================================================================
Obsoleted, use RARSetCallback instead.
====================================================================
void PASCAL RARSetProcessDataProc(HANDLE hArcData,
int PASCAL (*ProcessDataProc)(unsigned char *Addr,int Size))
====================================================================
Obsoleted, use RARSetCallback instead.
====================================================================
void PASCAL RARSetPassword(HANDLE hArcData,
char *Password);
====================================================================
Description
~~~~~~~~~~~
Set a password to decrypt files.
Parameters
~~~~~~~~~~
hArcData
This parameter should contain the archive handle obtained from the
RAROpenArchive function call.
Password
It should point to a string containing a zero terminated password.
Return values
~~~~~~~~~~~~~
None
====================================================================
void PASCAL RARGetDllVersion();
====================================================================
Description
~~~~~~~~~~~
Returns API version.
Parameters
~~~~~~~~~~
None.
Return values
~~~~~~~~~~~~~
Returns an integer value denoting UnRAR.dll API version, which is also
defined in unrar.h as RAR_DLL_VERSION. API version number is incremented
only in case of noticeable changes in UnRAR.dll API. Do not confuse it
with version of UnRAR.dll stored in DLL resources, which is incremented
with every DLL rebuild.
If RARGetDllVersion() returns a value lower than the UnRAR.dll version your
application was designed for, it may indicate that the DLL is too old
and will fail to provide all the functions your application needs.
This function is absent in old versions of UnRAR.dll, so it is safer
to use LoadLibrary and GetProcAddress to access this function.
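A hedged Python/ctypes sketch of the advice above (Windows only; the DLL
location is an assumption): ctypes reports a missing export by raising
AttributeError, which corresponds to a GetProcAddress failure on an old DLL.

import ctypes

unrar = ctypes.WinDLL("unrar.dll")  # assumes unrar.dll is on the DLL search path
try:
    api_version = unrar.RARGetDllVersion()
except AttributeError:
    api_version = 0  # very old builds do not export RARGetDllVersion
print("UnRAR.dll API version:", api_version)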

View File

@ -1,80 +0,0 @@
List of unrar.dll API changes. We do not include performance and reliability
improvements into this list, but this library and RAR/UnRAR tools share
the same source code. So the latest version of unrar.dll usually contains
same decompression algorithm changes as the latest UnRAR version.
============================================================================
-- 18 January 2008
all LONG parameters of CallbackProc function were changed
to LPARAM type for 64 bit mode compatibility.
-- 12 December 2007
Added new RAR_OM_LIST_INCSPLIT open mode for function RAROpenArchive.
-- 14 August 2007
Added NoCrypt\unrar_nocrypt.dll without decryption code for those
applications where presence of encryption or decryption code is not
allowed because of legal restrictions.
-- 14 December 2006
Added ERAR_MISSING_PASSWORD error type. This error is returned
if empty password is specified for encrypted file.
-- 12 June 2003
Added RARProcessFileW function, Unicode version of RARProcessFile
-- 9 August 2002
Added RAROpenArchiveEx function allowing to specify Unicode archive
name and get archive flags.
-- 24 January 2002
Added RARReadHeaderEx function allowing to read Unicode file names
and 64 bit file sizes.
-- 23 January 2002
Added ERAR_UNKNOWN error type (it is used for all errors which
do not have special ERAR code yet) and UCM_NEEDPASSWORD callback
message.
Unrar.dll automatically opens all next volumes not only when extracting,
but also in RAR_OM_LIST mode.
-- 27 November 2001
RARSetChangeVolProc and RARSetProcessDataProc are replaced by
the single callback function installed with RARSetCallback.
Unlike old style callbacks, the new function accepts the user defined
parameter. Unrar.dll still supports RARSetChangeVolProc and
RARSetProcessDataProc for compatibility purposes, but if you write
a new application, better use RARSetCallback.
File comments support is not implemented in the new DLL version yet.
Now CmtState is always 0.
-- 13 August 2001
Added RARGetDllVersion function, so you may distinguish old unrar.dll,
which used C style callback functions and the new one with PASCAL callbacks.
-- 10 May 2001
Callback functions in RARSetChangeVolProc and RARSetProcessDataProc
use PASCAL style call convention now.

View File

@ -1 +0,0 @@
This is x64 version of unrar.dll.

View File

@ -1,177 +0,0 @@
# Copyright (c) 2003-2005 Jimmy Retzlaff, 2008 Konstantin Yegupov
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""
pyUnRAR2 is a ctypes based wrapper around the free UnRAR.dll.
It is a modified version of Jimmy Retzlaff's pyUnRAR - simpler,
more stable and foolproof.
Notice that it has an INCOMPATIBLE interface.
It enables reading and unpacking of archives created with the
RAR/WinRAR archivers. There is a low-level interface which is very
similar to the C interface provided by UnRAR. There is also a
higher level interface which makes some common operations easier.
"""
__version__ = '0.99.2'
try:
WindowsError
in_windows = True
except NameError:
in_windows = False
if in_windows:
from windows import RarFileImplementation
else:
from unix import RarFileImplementation
import fnmatch, time, weakref
class RarInfo(object):
"""Represents a file header in an archive. Don't instantiate directly.
Use only to obtain information about file.
YOU CANNOT EXTRACT FILE CONTENTS USING THIS OBJECT.
USE METHODS OF RarFile CLASS INSTEAD.
Properties:
index - index of file within the archive
filename - name of the file in the archive including path (if any)
datetime - file date/time as a struct_time suitable for time.strftime
isdir - True if the file is a directory
size - size in bytes of the uncompressed file
comment - comment associated with the file
Note - this is not currently intended to be a Python file-like object.
"""
def __init__(self, rarfile, data):
self.rarfile = weakref.proxy(rarfile)
self.index = data['index']
self.filename = data['filename']
self.isdir = data['isdir']
self.size = data['size']
self.datetime = data['datetime']
self.comment = data['comment']
def __str__(self):
try :
arcName = self.rarfile.archiveName
except ReferenceError:
arcName = "[ARCHIVE_NO_LONGER_LOADED]"
return '<RarInfo "%s" in "%s">' % (self.filename, arcName)
class RarFile(RarFileImplementation):
def __init__(self, archiveName, password=None):
"""Instantiate the archive.
archiveName is the name of the RAR file.
password is used to decrypt the files in the archive.
Properties:
comment - comment associated with the archive
>>> print RarFile('test.rar').comment
This is a test.
"""
self.archiveName = archiveName
RarFileImplementation.init(self, password)
def __del__(self):
self.destruct()
def infoiter(self):
"""Iterate over all the files in the archive, generating RarInfos.
>>> import os
>>> for fileInArchive in RarFile('test.rar').infoiter():
... print os.path.split(fileInArchive.filename)[-1],
... print fileInArchive.isdir,
... print fileInArchive.size,
... print fileInArchive.comment,
... print tuple(fileInArchive.datetime)[0:5],
... print time.strftime('%a, %d %b %Y %H:%M', fileInArchive.datetime)
test True 0 None (2003, 6, 30, 1, 59) Mon, 30 Jun 2003 01:59
test.txt False 20 None (2003, 6, 30, 2, 1) Mon, 30 Jun 2003 02:01
this.py False 1030 None (2002, 2, 8, 16, 47) Fri, 08 Feb 2002 16:47
"""
for params in RarFileImplementation.infoiter(self):
yield RarInfo(self, params)
def infolist(self):
"""Return a list of RarInfos, descripting the contents of the archive."""
return list(self.infoiter())
def read_files(self, condition='*'):
"""Read specific files from archive into memory.
If "condition" is a list of numbers, then return files which have those positions in infolist.
If "condition" is a string, then it is treated as a wildcard for names of files to extract.
If "condition" is a function, it is treated as a callback function, which accepts a RarInfo object
and returns boolean True (extract) or False (skip).
If "condition" is omitted, all files are returned.
Returns list of tuples (RarInfo info, str contents)
"""
checker = condition2checker(condition)
return RarFileImplementation.read_files(self, checker)
def extract(self, condition='*', path='.', withSubpath=True, overwrite=True):
"""Extract specific files from archive to disk.
If "condition" is a list of numbers, then extract files which have those positions in infolist.
If "condition" is a string, then it is treated as a wildcard for names of files to extract.
If "condition" is a function, it is treated as a callback function, which accepts a RarInfo object
and returns either boolean True (extract) or boolean False (skip).
DEPRECATED: If "condition" callback returns string (only supported for Windows) -
that string will be used as a new name to save the file under.
If "condition" is omitted, all files are extracted.
"path" is a directory to extract to
"withSubpath" flag denotes whether files are extracted with their full path in the archive.
"overwrite" flag denotes whether extracted files will overwrite old ones. Defaults to true.
Returns list of RarInfos for extracted files."""
checker = condition2checker(condition)
return RarFileImplementation.extract(self, checker, path, withSubpath, overwrite)
def condition2checker(condition):
"""Converts different condition types to callback"""
if type(condition) in [str, unicode]:
def smatcher(info):
return fnmatch.fnmatch(info.filename, condition)
return smatcher
elif type(condition) in [list, tuple] and type(condition[0]) in [int, long]:
def imatcher(info):
return info.index in condition
return imatcher
elif callable(condition):
return condition
else:
raise TypeError

View File

@ -1,30 +0,0 @@
# Copyright (c) 2003-2005 Jimmy Retzlaff, 2008 Konstantin Yegupov
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Low level interface - see UnRARDLL\UNRARDLL.TXT
class ArchiveHeaderBroken(Exception): pass
class InvalidRARArchive(Exception): pass
class FileOpenError(Exception): pass
class IncorrectRARPassword(Exception): pass
class InvalidRARArchiveUsage(Exception): pass

View File

@ -1,139 +0,0 @@
import os, sys
import UnRAR2
from UnRAR2.rar_exceptions import *
def cleanup(dir='test'):
for path, dirs, files in os.walk(dir):
for fn in files:
os.remove(os.path.join(path, fn))
for dir in dirs:
os.removedirs(os.path.join(path, dir))
# reuse RarArchive object, en
cleanup()
rarc = UnRAR2.RarFile('test.rar')
rarc.infolist()
for info in rarc.infoiter():
saveinfo = info
assert (str(info)=="""<RarInfo "test" in "test.rar">""")
break
rarc.extract()
assert os.path.exists('test'+os.sep+'test.txt')
assert os.path.exists('test'+os.sep+'this.py')
del rarc
assert (str(saveinfo)=="""<RarInfo "test" in "[ARCHIVE_NO_LONGER_LOADED]">""")
cleanup()
# extract all the files in test.rar
cleanup()
UnRAR2.RarFile('test.rar').extract()
assert os.path.exists('test'+os.sep+'test.txt')
assert os.path.exists('test'+os.sep+'this.py')
cleanup()
# extract all the files in test.rar matching the wildcard *.txt
cleanup()
UnRAR2.RarFile('test.rar').extract('*.txt')
assert os.path.exists('test'+os.sep+'test.txt')
assert not os.path.exists('test'+os.sep+'this.py')
cleanup()
# check the name and size of each file, extracting small ones
cleanup()
archive = UnRAR2.RarFile('test.rar')
assert archive.comment == 'This is a test.'
archive.extract(lambda rarinfo: rarinfo.size <= 1024)
for rarinfo in archive.infoiter():
if rarinfo.size <= 1024 and not rarinfo.isdir:
assert rarinfo.size == os.stat(rarinfo.filename).st_size
assert file('test'+os.sep+'test.txt', 'rt').read() == 'This is only a test.'
assert not os.path.exists('test'+os.sep+'this.py')
cleanup()
# extract this.py, overriding its destination
cleanup('test2')
archive = UnRAR2.RarFile('test.rar')
archive.extract('*.py', 'test2', False)
assert os.path.exists('test2'+os.sep+'this.py')
cleanup('test2')
# extract test.txt to memory
cleanup()
archive = UnRAR2.RarFile('test.rar')
entries = UnRAR2.RarFile('test.rar').read_files('*test.txt')
assert len(entries)==1
assert entries[0][0].filename.endswith('test.txt')
assert entries[0][1]=='This is only a test.'
# extract all the files in test.rar with overwriting
cleanup()
fo = open('test'+os.sep+'test.txt',"wt")
fo.write("blah")
fo.close()
UnRAR2.RarFile('test.rar').extract('*.txt')
assert open('test'+os.sep+'test.txt',"rt").read()!="blah"
cleanup()
# extract all the files in test.rar without overwriting
cleanup()
fo = open('test'+os.sep+'test.txt',"wt")
fo.write("blahblah")
fo.close()
UnRAR2.RarFile('test.rar').extract('*.txt', overwrite = False)
assert open('test'+os.sep+'test.txt',"rt").read()=="blahblah"
cleanup()
# list big file in an archive
list(UnRAR2.RarFile('test_nulls.rar').infoiter())
# extract files from an archive with protected files
cleanup()
UnRAR2.RarFile('test_protected_files.rar', password="protected").extract()
assert os.path.exists('test'+os.sep+'top_secret_xxx_file.txt')
cleanup()
errored = False
try:
UnRAR2.RarFile('test_protected_files.rar', password="proteqted").extract()
except IncorrectRARPassword:
errored = True
assert not os.path.exists('test'+os.sep+'top_secret_xxx_file.txt')
assert errored
cleanup()
# extract files from an archive with protected headers
cleanup()
UnRAR2.RarFile('test_protected_headers.rar', password="secret").extract()
assert os.path.exists('test'+os.sep+'top_secret_xxx_file.txt')
cleanup()
errored = False
try:
UnRAR2.RarFile('test_protected_headers.rar', password="seqret").extract()
except IncorrectRARPassword:
errored = True
assert not os.path.exists('test'+os.sep+'top_secret_xxx_file.txt')
assert errored
cleanup()
# make sure docstring examples are working
import doctest
doctest.testmod(UnRAR2)
# update documentation
import pydoc
pydoc.writedoc(UnRAR2)
# cleanup
try:
os.remove('__init__.pyc')
except:
pass

View File

@ -1,177 +0,0 @@
# Copyright (c) 2003-2005 Jimmy Retzlaff, 2008 Konstantin Yegupov
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Unix version uses unrar command line executable
import subprocess
import gc
import os, os.path
import time, re
from rar_exceptions import *
class UnpackerNotInstalled(Exception): pass
rar_executable_cached = None
def call_unrar(params):
"Calls rar/unrar command line executable, returns stdout pipe"
global rar_executable_cached
if rar_executable_cached is None:
for command in ('unrar', 'rar'):
try:
subprocess.Popen([command], stdout=subprocess.PIPE)
rar_executable_cached = command
break
except OSError:
pass
if rar_executable_cached is None:
raise UnpackerNotInstalled("No suitable RAR unpacker installed")
assert type(params) == list, "params must be list"
args = [rar_executable_cached] + params
try:
gc.disable() # See http://bugs.python.org/issue1336
return subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
finally:
gc.enable()
class RarFileImplementation(object):
def init(self, password=None):
self.password = password
stdoutdata, stderrdata = self.call('v', []).communicate()
for line in stderrdata.splitlines():
if line.strip().startswith("Cannot open"):
raise FileOpenError
if line.find("CRC failed")>=0:
raise IncorrectRARPassword
accum = []
source = iter(stdoutdata.splitlines())
line = ''
while not (line.startswith('Comment:') or line.startswith('Pathname/Comment')):
if line.strip().endswith('is not RAR archive'):
raise InvalidRARArchive
line = source.next()
while not line.startswith('Pathname/Comment'):
accum.append(line.rstrip('\n'))
line = source.next()
if len(accum):
accum[0] = accum[0][9:]
self.comment = '\n'.join(accum[:-1])
else:
self.comment = None
def escaped_password(self):
return '-' if self.password == None else self.password
def call(self, cmd, options=[], files=[]):
options2 = options + ['p'+self.escaped_password()]
soptions = ['-'+x for x in options2]
return call_unrar([cmd]+soptions+['--',self.archiveName]+files)
def infoiter(self):
stdoutdata, stderrdata = self.call('v', ['c-']).communicate()
for line in stderrdata.splitlines():
if line.strip().startswith("Cannot open"):
raise FileOpenError
accum = []
source = iter(stdoutdata.splitlines())
line = ''
while not line.startswith('--------------'):
if line.strip().endswith('is not RAR archive'):
raise InvalidRARArchive
if line.find("CRC failed")>=0:
raise IncorrectRARPassword
line = source.next()
line = source.next()
i = 0
re_spaces = re.compile(r"\s+")
while not line.startswith('--------------'):
accum.append(line)
if len(accum)==2:
data = {}
data['index'] = i
#!!!ATB - changed this because it was choking when a folder or file started with a space.
#!!! now, just strip off the first char in the string
data['filename'] = accum[0].rstrip()[1:]
info = re_spaces.split(accum[1].strip())
data['size'] = int(info[0])
attr = info[5]
data['isdir'] = 'd' in attr.lower()
data['datetime'] = time.strptime(info[3]+" "+info[4], '%d-%m-%y %H:%M')
data['comment'] = None
yield data
accum = []
i += 1
line = source.next()
def read_files(self, checker):
res = []
for info in self.infoiter():
checkres = checker(info)
if checkres==True and not info.isdir:
pipe = self.call('p', ['inul'], [info.filename]).stdout
res.append((info, pipe.read()))
return res
def extract(self, checker, path, withSubpath, overwrite):
res = []
command = 'x'
if not withSubpath:
command = 'e'
options = []
if overwrite:
options.append('o+')
else:
options.append('o-')
if not path.endswith(os.sep):
path += os.sep
names = []
for info in self.infoiter():
checkres = checker(info)
if type(checkres) in [str, unicode]:
raise NotImplementedError("Condition callbacks returning strings are deprecated and only supported in Windows")
if checkres==True and not info.isdir:
names.append(info.filename)
res.append(info)
names.append(path)
proc = self.call(command, options, names)
stdoutdata, stderrdata = proc.communicate()
if stderrdata.find("CRC failed")>=0:
raise IncorrectRARPassword
return res
def destruct(self):
pass

View File

@ -1,309 +0,0 @@
# Copyright (c) 2003-2005 Jimmy Retzlaff, 2008 Konstantin Yegupov
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Low level interface - see UnRARDLL\UNRARDLL.TXT
from __future__ import generators
import ctypes, ctypes.wintypes
import os, os.path, sys
import Queue
import time
from rar_exceptions import *
ERAR_END_ARCHIVE = 10
ERAR_NO_MEMORY = 11
ERAR_BAD_DATA = 12
ERAR_BAD_ARCHIVE = 13
ERAR_UNKNOWN_FORMAT = 14
ERAR_EOPEN = 15
ERAR_ECREATE = 16
ERAR_ECLOSE = 17
ERAR_EREAD = 18
ERAR_EWRITE = 19
ERAR_SMALL_BUF = 20
ERAR_UNKNOWN = 21
RAR_OM_LIST = 0
RAR_OM_EXTRACT = 1
RAR_SKIP = 0
RAR_TEST = 1
RAR_EXTRACT = 2
RAR_VOL_ASK = 0
RAR_VOL_NOTIFY = 1
RAR_DLL_VERSION = 3
# enum UNRARCALLBACK_MESSAGES
UCM_CHANGEVOLUME = 0
UCM_PROCESSDATA = 1
UCM_NEEDPASSWORD = 2
architecture_bits = ctypes.sizeof(ctypes.c_voidp)*8
dll_name = "unrar.dll"
if architecture_bits == 64:
dll_name = "x64\\unrar64.dll"
try:
unrar = ctypes.WinDLL(os.path.join(os.path.split(__file__)[0], 'UnRARDLL', dll_name))
except WindowsError:
unrar = ctypes.WinDLL(dll_name)
class RAROpenArchiveDataEx(ctypes.Structure):
def __init__(self, ArcName=None, ArcNameW=u'', OpenMode=RAR_OM_LIST):
self.CmtBuf = ctypes.c_buffer(64*1024)
ctypes.Structure.__init__(self, ArcName=ArcName, ArcNameW=ArcNameW, OpenMode=OpenMode, _CmtBuf=ctypes.addressof(self.CmtBuf), CmtBufSize=ctypes.sizeof(self.CmtBuf))
_fields_ = [
('ArcName', ctypes.c_char_p),
('ArcNameW', ctypes.c_wchar_p),
('OpenMode', ctypes.c_uint),
('OpenResult', ctypes.c_uint),
('_CmtBuf', ctypes.c_voidp),
('CmtBufSize', ctypes.c_uint),
('CmtSize', ctypes.c_uint),
('CmtState', ctypes.c_uint),
('Flags', ctypes.c_uint),
('Reserved', ctypes.c_uint*32),
]
class RARHeaderDataEx(ctypes.Structure):
def __init__(self):
self.CmtBuf = ctypes.c_buffer(64*1024)
ctypes.Structure.__init__(self, _CmtBuf=ctypes.addressof(self.CmtBuf), CmtBufSize=ctypes.sizeof(self.CmtBuf))
_fields_ = [
('ArcName', ctypes.c_char*1024),
('ArcNameW', ctypes.c_wchar*1024),
('FileName', ctypes.c_char*1024),
('FileNameW', ctypes.c_wchar*1024),
('Flags', ctypes.c_uint),
('PackSize', ctypes.c_uint),
('PackSizeHigh', ctypes.c_uint),
('UnpSize', ctypes.c_uint),
('UnpSizeHigh', ctypes.c_uint),
('HostOS', ctypes.c_uint),
('FileCRC', ctypes.c_uint),
('FileTime', ctypes.c_uint),
('UnpVer', ctypes.c_uint),
('Method', ctypes.c_uint),
('FileAttr', ctypes.c_uint),
('_CmtBuf', ctypes.c_voidp),
('CmtBufSize', ctypes.c_uint),
('CmtSize', ctypes.c_uint),
('CmtState', ctypes.c_uint),
('Reserved', ctypes.c_uint*1024),
]
def DosDateTimeToTimeTuple(dosDateTime):
"""Convert an MS-DOS format date time to a Python time tuple.
"""
dosDate = dosDateTime >> 16
dosTime = dosDateTime & 0xffff
day = dosDate & 0x1f
month = (dosDate >> 5) & 0xf
year = 1980 + (dosDate >> 9)
second = 2*(dosTime & 0x1f)
minute = (dosTime >> 5) & 0x3f
hour = dosTime >> 11
return time.localtime(time.mktime((year, month, day, hour, minute, second, 0, 1, -1)))
def _wrap(restype, function, argtypes):
result = function
result.argtypes = argtypes
result.restype = restype
return result
RARGetDllVersion = _wrap(ctypes.c_int, unrar.RARGetDllVersion, [])
RAROpenArchiveEx = _wrap(ctypes.wintypes.HANDLE, unrar.RAROpenArchiveEx, [ctypes.POINTER(RAROpenArchiveDataEx)])
RARReadHeaderEx = _wrap(ctypes.c_int, unrar.RARReadHeaderEx, [ctypes.wintypes.HANDLE, ctypes.POINTER(RARHeaderDataEx)])
_RARSetPassword = _wrap(ctypes.c_int, unrar.RARSetPassword, [ctypes.wintypes.HANDLE, ctypes.c_char_p])
def RARSetPassword(*args, **kwargs):
_RARSetPassword(*args, **kwargs)
RARProcessFile = _wrap(ctypes.c_int, unrar.RARProcessFile, [ctypes.wintypes.HANDLE, ctypes.c_int, ctypes.c_char_p, ctypes.c_char_p])
RARCloseArchive = _wrap(ctypes.c_int, unrar.RARCloseArchive, [ctypes.wintypes.HANDLE])
UNRARCALLBACK = ctypes.WINFUNCTYPE(ctypes.c_int, ctypes.c_uint, ctypes.c_long, ctypes.c_long, ctypes.c_long)
RARSetCallback = _wrap(ctypes.c_int, unrar.RARSetCallback, [ctypes.wintypes.HANDLE, UNRARCALLBACK, ctypes.c_long])
RARExceptions = {
ERAR_NO_MEMORY : MemoryError,
ERAR_BAD_DATA : ArchiveHeaderBroken,
ERAR_BAD_ARCHIVE : InvalidRARArchive,
ERAR_EOPEN : FileOpenError,
}
class PassiveReader:
"""Used for reading files to memory"""
def __init__(self, usercallback = None):
self.buf = []
self.ucb = usercallback
def _callback(self, msg, UserData, P1, P2):
if msg == UCM_PROCESSDATA:
data = (ctypes.c_char*P2).from_address(P1).raw
if self.ucb!=None:
self.ucb(data)
else:
self.buf.append(data)
return 1
def get_result(self):
return ''.join(self.buf)
class RarInfoIterator(object):
def __init__(self, arc):
self.arc = arc
self.index = 0
self.headerData = RARHeaderDataEx()
self.res = RARReadHeaderEx(self.arc._handle, ctypes.byref(self.headerData))
if self.res==ERAR_BAD_DATA:
raise IncorrectRARPassword
self.arc.lockStatus = "locked"
self.arc.needskip = False
def __iter__(self):
return self
def next(self):
if self.index>0:
if self.arc.needskip:
RARProcessFile(self.arc._handle, RAR_SKIP, None, None)
self.res = RARReadHeaderEx(self.arc._handle, ctypes.byref(self.headerData))
if self.res:
raise StopIteration
self.arc.needskip = True
data = {}
data['index'] = self.index
data['filename'] = self.headerData.FileName
data['datetime'] = DosDateTimeToTimeTuple(self.headerData.FileTime)
data['isdir'] = ((self.headerData.Flags & 0xE0) == 0xE0)
data['size'] = self.headerData.UnpSize + (self.headerData.UnpSizeHigh << 32)
if self.headerData.CmtState == 1:
data['comment'] = self.headerData.CmtBuf.value
else:
data['comment'] = None
self.index += 1
return data
def __del__(self):
self.arc.lockStatus = "finished"
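# Note (not part of the original module): iterating a RarInfoIterator yields one plain dict
# per archive member with the keys assembled in next() above ('index', 'filename',
# 'datetime', 'isdir', 'size', 'comment'). The package's higher-level RarFile front end
# (not shown in this diff) is what wraps these dicts into richer info objects, which is why
# read_files()/extract() below access attributes such as info.isdir rather than dict keys.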
def generate_password_provider(password):
def password_provider_callback(msg, UserData, P1, P2):
if msg == UCM_NEEDPASSWORD and password!=None:
(ctypes.c_char*P2).from_address(P1).value = password
return 1
return password_provider_callback
class RarFileImplementation(object):
def init(self, password=None):
self.password = password
archiveData = RAROpenArchiveDataEx(ArcNameW=self.archiveName, OpenMode=RAR_OM_EXTRACT)
self._handle = RAROpenArchiveEx(ctypes.byref(archiveData))
self.c_callback = UNRARCALLBACK(generate_password_provider(self.password))
RARSetCallback(self._handle, self.c_callback, 1)
if archiveData.OpenResult != 0:
raise RARExceptions[archiveData.OpenResult]
if archiveData.CmtState == 1:
self.comment = archiveData.CmtBuf.value
else:
self.comment = None
if password:
RARSetPassword(self._handle, password)
self.lockStatus = "ready"
def destruct(self):
if self._handle and RARCloseArchive:
RARCloseArchive(self._handle)
def make_sure_ready(self):
if self.lockStatus == "locked":
raise InvalidRARArchiveUsage("cannot execute infoiter() without finishing previous one")
if self.lockStatus == "finished":
self.destruct()
self.init(self.password)
def infoiter(self):
self.make_sure_ready()
return RarInfoIterator(self)
def read_files(self, checker):
res = []
for info in self.infoiter():
if checker(info) and not info.isdir:
reader = PassiveReader()
c_callback = UNRARCALLBACK(reader._callback)
RARSetCallback(self._handle, c_callback, 1)
tmpres = RARProcessFile(self._handle, RAR_TEST, None, None)
if tmpres==ERAR_BAD_DATA:
raise IncorrectRARPassword
self.needskip = False
res.append((info, reader.get_result()))
return res
def extract(self, checker, path, withSubpath, overwrite):
res = []
for info in self.infoiter():
checkres = checker(info)
if checkres!=False and not info.isdir:
if checkres==True:
fn = info.filename
if not withSubpath:
fn = os.path.split(fn)[-1]
target = os.path.join(path, fn)
else:
raise DeprecationWarning, "Condition callbacks returning strings are deprecated and only supported in Windows"
target = checkres
if overwrite or (not os.path.exists(target)):
tmpres = RARProcessFile(self._handle, RAR_EXTRACT, None, target)
if tmpres==ERAR_BAD_DATA:
raise IncorrectRARPassword
self.needskip = False
res.append(info)
return res
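# Usage sketch (assumed surroundings, not shown in this diff): RarFileImplementation is a
# backend mixed into the package's public RarFile class, which sets self.archiveName and
# calls init() before use. A hypothetical caller would look roughly like:
#   rf = RarFile("example.cbr")                      # front-end class from the package
#   for info, data in rf.read_files("*.jpg"):        # condition syntax is the front end's
#       save(info.filename, data)                    # 'save' is illustrative
#   rf.extract("*", path="out", withSubpath=True, overwrite=False)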

View File

@ -0,0 +1 @@
from __future__ import annotations

View File

@ -0,0 +1,5 @@
from __future__ import annotations
from comictaggerlib.main import main
main()

View File

@ -0,0 +1,11 @@
from __future__ import annotations
import os
import comicapi.__pyinstaller
def get_hook_dirs() -> list[str]:
hooks = [os.path.dirname(__file__)]
hooks.extend(comicapi.__pyinstaller.get_hook_dirs())
return hooks
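# Context (sketch; the exact packaging metadata is not part of this diff, and the module
# path below is assumed from the imports above): PyInstaller >= 4 discovers get_hook_dirs()
# through an entry point in the "pyinstaller40" group, e.g.
#   [project.entry-points.pyinstaller40]
#   hook-dirs = "comictaggerlib.__pyinstaller:get_hook_dirs"
# so bundling picks up both comictaggerlib's and comicapi's hook directories returned above.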

View File

@ -0,0 +1,8 @@
from __future__ import annotations
from PyInstaller.utils.hooks import collect_data_files, collect_entry_point, collect_submodules
datas, hiddenimports = collect_entry_point("comictagger.talker")
hiddenimports += collect_submodules("comictaggerlib")
datas += collect_data_files("comictaggerlib.ui")
datas += collect_data_files("comictaggerlib.graphics")
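# Context (sketch, not taken from this diff): collect_entry_point("comictagger.talker")
# gathers the metadata and hidden imports for every plugin registered under that entry
# point group, so a hypothetical third-party talker would be bundled if its packaging declares
#   [project.entry-points."comictagger.talker"]
#   mytalker = "mytalker.talker:MyTalker"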

View File

@ -0,0 +1,7 @@
from __future__ import annotations
import os
from PyInstaller.utils.hooks import get_module_file_attribute
datas = [(os.path.join(os.path.dirname(get_module_file_attribute("wordninja")), "wordninja"), "wordninja")]

View File

@ -0,0 +1,57 @@
from __future__ import annotations
import logging
import pathlib
from PyQt5 import QtCore, QtGui, QtWidgets, uic
from comictaggerlib.ui import ui_path
logger = logging.getLogger(__name__)
class QTextEditLogger(QtCore.QObject, logging.Handler):
qlog = QtCore.pyqtSignal(str)
def __init__(self, formatter: logging.Formatter, level: int) -> None:
super().__init__()
self.setFormatter(formatter)
self.setLevel(level)
def emit(self, record: logging.LogRecord) -> None:
msg = self.format(record)
self.qlog.emit(msg.strip())
class ApplicationLogWindow(QtWidgets.QDialog):
def __init__(
self, log_folder: pathlib.Path, log_handler: QTextEditLogger, parent: QtCore.QObject | None = None
) -> None:
super().__init__(parent)
with (ui_path / "applicationlogwindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.log_handler = log_handler
self.log_handler.qlog.connect(self.textEdit.append)
f = QtGui.QFont("menlo")
f.setStyleHint(QtGui.QFont.Monospace)
self.setFont(f)
self._button = QtWidgets.QPushButton(self)
self._button.setText("Test Me")
self.log_folder = log_folder
self.lblLogLocation.setText(f'Log Location: <a href="file://{log_folder}">{log_folder}</a>')
layout = self.layout()
layout.addWidget(self._button)
# Connect signal to slot
self._button.clicked.connect(self.test)
self.textEdit.setTabStopDistance(self.textEdit.tabStopDistance() * 2)
def test(self) -> None:
logger.debug("damn, a bug")
logger.info("something to remember")
logger.warning("that's not right")
logger.error("foobar")
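if __name__ == "__main__":
    # Minimal manual-test sketch (illustrative; the real application wires logging up
    # elsewhere): route root-logger records through QTextEditLogger and show the dialog.
    import sys

    app = QtWidgets.QApplication(sys.argv)
    handler = QTextEditLogger(logging.Formatter("%(asctime)s %(levelname)s %(message)s"), logging.DEBUG)
    logging.getLogger().setLevel(logging.DEBUG)
    logging.getLogger().addHandler(handler)
    window = ApplicationLogWindow(pathlib.Path("."), handler)  # "." is an illustrative log folder
    window.show()
    logger.info("log window started")
    sys.exit(app.exec_())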

View File

@ -1,226 +1,261 @@
"""
A PyQT4 dialog to select from automated issue matches
"""
"""A PyQT4 dialog to select from automated issue matches"""
"""
Copyright 2012 Anthony Beville
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import sys
import logging
import os
from PyQt4 import QtCore, QtGui, uic
from typing import Callable
from PyQt4.QtCore import QUrl, pyqtSignal, QByteArray
from PyQt5 import QtCore, QtGui, QtWidgets, uic
from imagefetcher import ImageFetcher
from settings import ComicTaggerSettings
from comicarchive import MetaDataStyle
from coverimagewidget import CoverImageWidget
from comicvinetalker import ComicVineTalker
import utils
from comicapi.comicarchive import ComicArchive, metadata_styles
from comicapi.genericmetadata import GenericMetadata
from comictaggerlib.coverimagewidget import CoverImageWidget
from comictaggerlib.ctsettings import ct_ns
from comictaggerlib.resulttypes import IssueResult, Result
from comictaggerlib.ui import ui_path
from comictaggerlib.ui.qtutils import reduce_widget_font_size
from comictalker.comictalker import ComicTalker
class AutoTagMatchWindow(QtGui.QDialog):
volume_id = 0
def __init__(self, parent, match_set_list, style, fetch_func):
super(AutoTagMatchWindow, self).__init__(parent)
uic.loadUi(ComicTaggerSettings.getUIFile('autotagmatchwindow.ui' ), self)
logger = logging.getLogger(__name__)
self.altCoverWidget = CoverImageWidget( self.altCoverContainer, CoverImageWidget.AltCoverMode )
gridlayout = QtGui.QGridLayout( self.altCoverContainer )
gridlayout.addWidget( self.altCoverWidget )
gridlayout.setContentsMargins(0,0,0,0)
self.archiveCoverWidget = CoverImageWidget( self.archiveCoverContainer, CoverImageWidget.ArchiveMode )
gridlayout = QtGui.QGridLayout( self.archiveCoverContainer )
gridlayout.addWidget( self.archiveCoverWidget )
gridlayout.setContentsMargins(0,0,0,0)
class AutoTagMatchWindow(QtWidgets.QDialog):
def __init__(
self,
parent: QtWidgets.QWidget,
match_set_list: list[Result],
styles: list[str],
fetch_func: Callable[[IssueResult], GenericMetadata],
config: ct_ns,
talker: ComicTalker,
) -> None:
super().__init__(parent)
utils.reduceWidgetFontSize( self.twList )
with (ui_path / "matchselectionwindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.setWindowFlags(self.windowFlags() |
QtCore.Qt.WindowSystemMenuHint |
QtCore.Qt.WindowMaximizeButtonHint)
self.skipButton = QtGui.QPushButton(self.tr("Skip to Next"))
self.buttonBox.addButton(self.skipButton, QtGui.QDialogButtonBox.ActionRole)
self.buttonBox.button(QtGui.QDialogButtonBox.Ok).setText("Accept and Write Tags")
self.config = config
self.match_set_list = match_set_list
self.style = style
self.fetch_func = fetch_func
self.current_match_set: Result = match_set_list[0]
self.current_match_set_idx = 0
self.twList.currentItemChanged.connect(self.currentItemChanged)
self.twList.cellDoubleClicked.connect(self.cellDoubleClicked)
self.skipButton.clicked.connect(self.skipToNext)
self.updateData()
self.altCoverWidget = CoverImageWidget(
self.altCoverContainer, CoverImageWidget.AltCoverMode, config.Runtime_Options__config.user_cache_dir, talker
)
gridlayout = QtWidgets.QGridLayout(self.altCoverContainer)
gridlayout.addWidget(self.altCoverWidget)
gridlayout.setContentsMargins(0, 0, 0, 0)
def updateData( self):
self.archiveCoverWidget = CoverImageWidget(self.archiveCoverContainer, CoverImageWidget.ArchiveMode, None, None)
gridlayout = QtWidgets.QGridLayout(self.archiveCoverContainer)
gridlayout.addWidget(self.archiveCoverWidget)
gridlayout.setContentsMargins(0, 0, 0, 0)
self.current_match_set = self.match_set_list[ self.current_match_set_idx ]
reduce_widget_font_size(self.twList)
reduce_widget_font_size(self.teDescription, 1)
if self.current_match_set_idx + 1 == len( self.match_set_list ):
self.buttonBox.button(QtGui.QDialogButtonBox.Cancel).setDisabled(True)
#self.buttonBox.button(QtGui.QDialogButtonBox.Ok).setText("Accept")
self.skipButton.setText(self.tr("Skip"))
self.setCoverImage()
self.populateTable()
self.twList.resizeColumnsToContents()
self.twList.selectRow( 0 )
path = self.current_match_set.ca.path
self.setWindowTitle( u"Select correct match or skip ({0} of {1}): {2}".format(
self.current_match_set_idx+1,
len( self.match_set_list ),
os.path.split(path)[1] ))
def populateTable( self ):
self.setWindowFlags(
QtCore.Qt.WindowType(
self.windowFlags()
| QtCore.Qt.WindowType.WindowSystemMenuHint
| QtCore.Qt.WindowType.WindowMaximizeButtonHint
)
)
while self.twList.rowCount() > 0:
self.twList.removeRow(0)
self.twList.setSortingEnabled(False)
self.skipButton = QtWidgets.QPushButton("Skip to Next")
self.buttonBox.addButton(self.skipButton, QtWidgets.QDialogButtonBox.ButtonRole.ActionRole)
self.buttonBox.button(QtWidgets.QDialogButtonBox.StandardButton.Ok).setText("Accept and Write Tags")
row = 0
for match in self.current_match_set.matches:
self.twList.insertRow(row)
item_text = match['series']
item = QtGui.QTableWidgetItem(item_text)
item.setData( QtCore.Qt.ToolTipRole, item_text )
item.setData( QtCore.Qt.UserRole, (match,))
item.setFlags(QtCore.Qt.ItemIsSelectable| QtCore.Qt.ItemIsEnabled)
self.twList.setItem(row, 0, item)
self.match_set_list = match_set_list
self._styles = styles
self.fetch_func = fetch_func
if match['publisher'] is not None:
item_text = u"{0}".format(match['publisher'])
else:
item_text = u"Unknown"
item = QtGui.QTableWidgetItem(item_text)
item.setData( QtCore.Qt.ToolTipRole, item_text )
item.setFlags(QtCore.Qt.ItemIsSelectable| QtCore.Qt.ItemIsEnabled)
self.twList.setItem(row, 1, item)
month_str = u""
year_str = u"????"
if match['month'] is not None:
month_str = u"-{0:02d}".format(int(match['month']))
if match['year'] is not None:
year_str = u"{0}".format(match['year'])
self.current_match_set_idx = 0
item_text = year_str + month_str
item = QtGui.QTableWidgetItem(item_text)
item.setData( QtCore.Qt.ToolTipRole, item_text )
item.setFlags(QtCore.Qt.ItemIsSelectable| QtCore.Qt.ItemIsEnabled)
self.twList.setItem(row, 2, item)
self.twList.currentItemChanged.connect(self.current_item_changed)
self.twList.cellDoubleClicked.connect(self.cell_double_clicked)
self.skipButton.clicked.connect(self.skip_to_next)
item_text = match['issue_title']
item = QtGui.QTableWidgetItem(item_text)
item.setData( QtCore.Qt.ToolTipRole, item_text )
item.setFlags(QtCore.Qt.ItemIsSelectable| QtCore.Qt.ItemIsEnabled)
self.twList.setItem(row, 3, item)
row += 1
self.update_data()
self.twList.resizeColumnsToContents()
self.twList.setSortingEnabled(True)
self.twList.sortItems( 2 , QtCore.Qt.AscendingOrder )
self.twList.selectRow(0)
self.twList.resizeColumnsToContents()
self.twList.horizontalHeader().setStretchLastSection(True)
def update_data(self) -> None:
self.current_match_set = self.match_set_list[self.current_match_set_idx]
def cellDoubleClicked( self, r, c ):
self.accept()
def currentItemChanged( self, curr, prev ):
if self.current_match_set_idx + 1 == len(self.match_set_list):
self.buttonBox.button(QtWidgets.QDialogButtonBox.StandardButton.Cancel).setDisabled(True)
self.skipButton.setText("Skip")
if curr is None:
return
if prev is not None and prev.row() == curr.row():
return
self.altCoverWidget.setIssueID( self.currentMatch()['issue_id'] )
def setCoverImage( self ):
ca = self.current_match_set.ca
self.archiveCoverWidget.setArchive(ca)
self.set_cover_image()
self.populate_table()
self.twList.resizeColumnsToContents()
self.twList.selectRow(0)
def currentMatch( self ):
row = self.twList.currentRow()
match = self.twList.item(row, 0).data( QtCore.Qt.UserRole ).toPyObject()[0]
return match
def accept(self):
path = self.current_match_set.original_path
self.setWindowTitle(
"Select correct match or skip ({} of {}): {}".format(
self.current_match_set_idx + 1,
len(self.match_set_list),
os.path.split(path)[1],
)
)
self.saveMatch()
self.current_match_set_idx += 1
if self.current_match_set_idx == len( self.match_set_list ):
# no more items
QtGui.QDialog.accept(self)
else:
self.updateData()
def populate_table(self) -> None:
if not self.current_match_set:
return
def skipToNext( self ):
self.current_match_set_idx += 1
if self.current_match_set_idx == len( self.match_set_list ):
# no more items
QtGui.QDialog.reject(self)
else:
self.updateData()
def reject(self):
reply = QtGui.QMessageBox.question(self,
self.tr("Cancel Matching"),
self.tr("Are you sure you wish to cancel the matching process?"),
QtGui.QMessageBox.Yes, QtGui.QMessageBox.No )
if reply == QtGui.QMessageBox.No:
return
self.twList.setRowCount(0)
QtGui.QDialog.reject(self)
def saveMatch( self ):
match = self.currentMatch()
ca = self.current_match_set.ca
self.twList.setSortingEnabled(False)
md = ca.readMetadata( self.style )
if md.isEmpty:
md = ca.metadataFromFilename()
# now get the particular issue data
cv_md = self.fetch_func( match )
if cv_md is None:
QtGui.QMessageBox.critical(self, self.tr("Network Issue"), self.tr("Could not connect to ComicVine to get issue details!"))
return
for row, match in enumerate(self.current_match_set.online_results):
self.twList.insertRow(row)
QtGui.QApplication.setOverrideCursor(QtGui.QCursor(QtCore.Qt.WaitCursor))
md.overlay( cv_md )
success = ca.writeMetadata( md, self.style )
ca.loadCache( [ MetaDataStyle.CBI, MetaDataStyle.CIX ] )
QtGui.QApplication.restoreOverrideCursor()
if not success:
QtGui.QMessageBox.warning(self, self.tr("Write Error"), self.tr("Saving the tags to the archive seemed to fail!"))
item_text = match.series
item = QtWidgets.QTableWidgetItem(item_text)
item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item.setData(QtCore.Qt.ItemDataRole.UserRole, (match,))
item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, 0, item)
if match.publisher is not None:
item_text = str(match.publisher)
else:
item_text = "Unknown"
item = QtWidgets.QTableWidgetItem(item_text)
item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, 1, item)
month_str = ""
year_str = "????"
if match.month is not None:
month_str = f"-{int(match.month):02d}"
if match.year is not None:
year_str = str(match.year)
item_text = year_str + month_str
item = QtWidgets.QTableWidgetItem(item_text)
item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, 2, item)
item_text = match.issue_title
if item_text is None:
item_text = ""
item = QtWidgets.QTableWidgetItem(item_text)
item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, 3, item)
self.twList.resizeColumnsToContents()
self.twList.setSortingEnabled(True)
self.twList.sortItems(2, QtCore.Qt.SortOrder.AscendingOrder)
self.twList.selectRow(0)
self.twList.resizeColumnsToContents()
self.twList.horizontalHeader().setStretchLastSection(True)
def cell_double_clicked(self, r: int, c: int) -> None:
self.accept()
def current_item_changed(self, curr: QtCore.QModelIndex, prev: QtCore.QModelIndex) -> None:
if curr is None:
return None
if prev is not None and prev.row() == curr.row():
return None
match = self.current_match()
self.altCoverWidget.set_issue_details(match.issue_id, [match.image_url, *match.alt_image_urls])
if match.description is None:
self.teDescription.setText("")
else:
self.teDescription.setText(match.description)
def set_cover_image(self) -> None:
ca = ComicArchive(self.current_match_set.original_path)
self.archiveCoverWidget.set_archive(ca)
def current_match(self) -> IssueResult:
row = self.twList.currentRow()
match: IssueResult = self.twList.item(row, 0).data(QtCore.Qt.ItemDataRole.UserRole)[0]
return match
def accept(self) -> None:
self.save_match()
self.current_match_set_idx += 1
if self.current_match_set_idx == len(self.match_set_list):
# no more items
QtWidgets.QDialog.accept(self)
else:
self.update_data()
def skip_to_next(self) -> None:
self.current_match_set_idx += 1
if self.current_match_set_idx == len(self.match_set_list):
# no more items
QtWidgets.QDialog.reject(self)
else:
self.update_data()
def reject(self) -> None:
reply = QtWidgets.QMessageBox.question(
self,
"Cancel Matching",
"Are you sure you wish to cancel the matching process?",
QtWidgets.QMessageBox.StandardButton.Yes,
QtWidgets.QMessageBox.StandardButton.No,
)
if reply == QtWidgets.QMessageBox.StandardButton.No:
return
QtWidgets.QDialog.reject(self)
def save_match(self) -> None:
match = self.current_match()
ca = ComicArchive(self.current_match_set.original_path)
md = ca.read_metadata(self.config.internal__load_data_style)
if md.is_empty:
md = ca.metadata_from_filename(
self.config.Filename_Parsing__filename_parser,
self.config.Filename_Parsing__remove_c2c,
self.config.Filename_Parsing__remove_fcbd,
self.config.Filename_Parsing__remove_publisher,
)
# now get the particular issue data
self.current_match_set.md = ct_md = self.fetch_func(match)
if ct_md is None:
QtWidgets.QMessageBox.critical(self, "Network Issue", "Could not retrieve issue details!")
return
QtWidgets.QApplication.setOverrideCursor(QtGui.QCursor(QtCore.Qt.CursorShape.WaitCursor))
md.overlay(ct_md)
for style in self._styles:
success = ca.write_metadata(md, style)
QtWidgets.QApplication.restoreOverrideCursor()
if not success:
QtWidgets.QMessageBox.warning(
self,
"Write Error",
f"Saving {metadata_styles[style].name()} the tags to the archive seemed to fail!",
)
break
ca.load_cache(list(metadata_styles))

View File

@ -1,66 +1,77 @@
"""
A PyQT4 dialog to show ID log and progress
"""
"""A PyQT4 dialog to show ID log and progress"""
"""
Copyright 2012 Anthony Beville
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
import logging
http://www.apache.org/licenses/LICENSE-2.0
from PyQt5 import QtCore, QtWidgets, uic
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from comictaggerlib.coverimagewidget import CoverImageWidget
from comictaggerlib.ui import ui_path
from comictaggerlib.ui.qtutils import reduce_widget_font_size
from comictalker.comictalker import ComicTalker
import sys
from PyQt4 import QtCore, QtGui, uic
import os
from settings import ComicTaggerSettings
import utils
logger = logging.getLogger(__name__)
class AutoTagProgressWindow(QtGui.QDialog):
def __init__(self, parent):
super(AutoTagProgressWindow, self).__init__(parent)
uic.loadUi(ComicTaggerSettings.getUIFile('autotagprogresswindow.ui' ), self)
self.lblTest.setPixmap(QtGui.QPixmap(ComicTaggerSettings.getGraphic('nocover.png')))
self.lblArchive.setPixmap(QtGui.QPixmap(ComicTaggerSettings.getGraphic('nocover.png')))
self.isdone = False
self.setWindowFlags(self.windowFlags() |
QtCore.Qt.WindowSystemMenuHint |
QtCore.Qt.WindowMaximizeButtonHint)
class AutoTagProgressWindow(QtWidgets.QDialog):
def __init__(self, parent: QtWidgets.QWidget, talker: ComicTalker) -> None:
super().__init__(parent)
utils.reduceWidgetFontSize( self.textEdit )
def setArchiveImage( self, img_data):
self.setCoverImage( img_data, self.lblArchive )
with (ui_path / "autotagprogresswindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
def setTestImage( self, img_data):
self.setCoverImage( img_data, self.lblTest )
self.lblSourceName.setText(talker.attribution)
def setCoverImage( self, img_data , label):
if img_data is not None:
img = QtGui.QImage()
img.loadFromData( img_data )
label.setPixmap(QtGui.QPixmap(img))
label.setScaledContents(True)
else:
label.setPixmap(QtGui.QPixmap(ComicTaggerSettings.getGraphic('nocover.png')))
label.setScaledContents(True)
QtCore.QCoreApplication.processEvents()
QtCore.QCoreApplication.processEvents()
def reject(self):
QtGui.QDialog.reject(self)
self.isdone = True
self.archiveCoverWidget = CoverImageWidget(
self.archiveCoverContainer, CoverImageWidget.DataMode, None, None, False
)
gridlayout = QtWidgets.QGridLayout(self.archiveCoverContainer)
gridlayout.addWidget(self.archiveCoverWidget)
gridlayout.setContentsMargins(0, 0, 0, 0)
self.testCoverWidget = CoverImageWidget(self.testCoverContainer, CoverImageWidget.DataMode, None, None, False)
gridlayout = QtWidgets.QGridLayout(self.testCoverContainer)
gridlayout.addWidget(self.testCoverWidget)
gridlayout.setContentsMargins(0, 0, 0, 0)
self.isdone = False
self.setWindowFlags(
QtCore.Qt.WindowType(
self.windowFlags()
| QtCore.Qt.WindowType.WindowSystemMenuHint
| QtCore.Qt.WindowType.WindowMaximizeButtonHint
)
)
reduce_widget_font_size(self.textEdit)
def set_archive_image(self, img_data: bytes) -> None:
self.set_cover_image(img_data, self.archiveCoverWidget)
def set_test_image(self, img_data: bytes) -> None:
self.set_cover_image(img_data, self.testCoverWidget)
def set_cover_image(self, img_data: bytes, widget: CoverImageWidget) -> None:
widget.set_image_data(img_data)
QtCore.QCoreApplication.processEvents()
QtCore.QCoreApplication.processEvents()
def reject(self) -> None:
QtWidgets.QDialog.reject(self)
self.isdone = True
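# Usage sketch (assumed caller, not shown in this section): the auto-tag loop constructs the
# window with the active talker and pushes cover data into the two CoverImageWidgets, e.g.
#   progress = AutoTagProgressWindow(main_window, talker)      # names are illustrative
#   progress.show()
#   progress.set_archive_image(archive_cover_bytes)
#   progress.set_test_image(candidate_cover_bytes)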

View File

@ -1,104 +1,104 @@
"""
A PyQT4 dialog to confirm and set options for auto-tag
"""
"""A PyQT4 dialog to confirm and set config for auto-tag"""
"""
Copyright 2012 Anthony Beville
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
import logging
http://www.apache.org/licenses/LICENSE-2.0
from PyQt5 import QtCore, QtWidgets, uic
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from comictaggerlib.ctsettings import ct_ns
from comictaggerlib.ui import ui_path
logger = logging.getLogger(__name__)
from PyQt4 import QtCore, QtGui, uic
from settings import ComicTaggerSettings
from settingswindow import SettingsWindow
from filerenamer import FileRenamer
import os
import utils
class AutoTagStartWindow(QtWidgets.QDialog):
def __init__(self, parent: QtWidgets.QWidget, config: ct_ns, msg: str) -> None:
super().__init__(parent)
class AutoTagStartWindow(QtGui.QDialog):
def __init__( self, parent, settings, msg ):
super(AutoTagStartWindow, self).__init__(parent)
uic.loadUi(ComicTaggerSettings.getUIFile('autotagstartwindow.ui' ), self)
self.label.setText( msg )
with (ui_path / "autotagstartwindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.label.setText(msg)
self.setWindowFlags(self.windowFlags() &
~QtCore.Qt.WindowContextHelpButtonHint )
self.setWindowFlags(
QtCore.Qt.WindowType(self.windowFlags() & ~QtCore.Qt.WindowType.WindowContextHelpButtonHint)
)
self.settings = settings
self.cbxSaveOnLowConfidence.setCheckState( QtCore.Qt.Unchecked )
self.cbxDontUseYear.setCheckState( QtCore.Qt.Unchecked )
self.cbxAssumeIssueOne.setCheckState( QtCore.Qt.Unchecked )
self.cbxIgnoreLeadingDigitsInFilename.setCheckState( QtCore.Qt.Unchecked )
self.cbxRemoveAfterSuccess.setCheckState( QtCore.Qt.Unchecked )
self.cbxSpecifySearchString.setCheckState( QtCore.Qt.Unchecked )
self.leNameLengthMatchTolerance.setText( str(self.settings.id_length_delta_thresh) )
self.leSearchString.setEnabled( False )
self.config = config
nlmtTip = (
""" <html>The <b>Name Length Match Tolerance</b> is for eliminating automatic
search matches that are too long compared to your series name search. The higher
it is, the more likely to have a good match, but each search will take longer and
use more bandwidth. Too low, and only the very closest lexical matches will be
explored.</html>""" )
self.leNameLengthMatchTolerance.setToolTip(nlmtTip)
ssTip = (
"""<html>
The <b>series search string</b> specifies the search string to be used for all selected archives.
Use this when trying to match archives with hard-to-parse or incorrect filenames. All archives selected
should be from the same series.
</html>"""
)
self.leSearchString.setToolTip(ssTip)
self.cbxSpecifySearchString.setToolTip(ssTip)
validator = QtGui.QIntValidator(0, 99, self)
self.leNameLengthMatchTolerance.setValidator(validator)
self.cbxSpecifySearchString.stateChanged.connect(self.searchStringToggle)
self.autoSaveOnLow = False
self.dontUseYear = False
self.assumeIssueOne = False
self.ignoreLeadingDigitsInFilename = False
self.removeAfterSuccess = False
self.searchString = None
self.nameLengthMatchTolerance = self.settings.id_length_delta_thresh
self.cbxSpecifySearchString.setChecked(False)
self.cbxSplitWords.setChecked(False)
self.sbNameMatchSearchThresh.setValue(self.config.Issue_Identifier__series_match_identify_thresh)
self.leSearchString.setEnabled(False)
def searchStringToggle(self):
enable = self.cbxSpecifySearchString.isChecked()
self.leSearchString.setEnabled( enable )
self.cbxSaveOnLowConfidence.setChecked(self.config.Auto_Tag__save_on_low_confidence)
self.cbxDontUseYear.setChecked(self.config.Auto_Tag__dont_use_year_when_identifying)
self.cbxAssumeIssueOne.setChecked(self.config.Auto_Tag__assume_issue_one)
self.cbxIgnoreLeadingDigitsInFilename.setChecked(self.config.Auto_Tag__ignore_leading_numbers_in_filename)
self.cbxRemoveAfterSuccess.setChecked(self.config.Auto_Tag__remove_archive_after_successful_match)
self.cbxAutoImprint.setChecked(self.config.Issue_Identifier__auto_imprint)
def accept( self ):
QtGui.QDialog.accept(self)
nlmt_tip = """<html>The <b>Name Match Ratio Threshold: Auto-Identify</b> is for eliminating automatic
search matches that are too long compared to your series name search. The lower
it is, the more likely to have a good match, but each search will take longer and
use more bandwidth. Too high, and only the very closest matches will be explored.</html>"""
self.autoSaveOnLow = self.cbxSaveOnLowConfidence.isChecked()
self.dontUseYear = self.cbxDontUseYear.isChecked()
self.assumeIssueOne = self.cbxAssumeIssueOne.isChecked()
self.ignoreLeadingDigitsInFilename = self.cbxIgnoreLeadingDigitsInFilename.isChecked()
self.removeAfterSuccess = self.cbxRemoveAfterSuccess.isChecked()
self.nameLengthMatchTolerance = int(self.leNameLengthMatchTolerance.text())
if self.cbxSpecifySearchString.isChecked():
self.searchString = unicode(self.leSearchString.text())
if len(self.searchString) == 0:
self.searchString = None
self.sbNameMatchSearchThresh.setToolTip(nlmt_tip)
ss_tip = """<html>
The <b>series search string</b> specifies the search string to be used for all selected archives.
Use this when trying to match archives with hard-to-parse or incorrect filenames. All archives selected
should be from the same series.
</html>"""
self.leSearchString.setToolTip(ss_tip)
self.cbxSpecifySearchString.setToolTip(ss_tip)
self.cbxSpecifySearchString.stateChanged.connect(self.search_string_toggle)
self.auto_save_on_low = False
self.dont_use_year = False
self.assume_issue_one = False
self.ignore_leading_digits_in_filename = False
self.remove_after_success = False
self.search_string = ""
self.name_length_match_tolerance = self.config.Issue_Identifier__series_match_search_thresh
self.split_words = self.cbxSplitWords.isChecked()
def search_string_toggle(self) -> None:
enable = self.cbxSpecifySearchString.isChecked()
self.leSearchString.setEnabled(enable)
def accept(self) -> None:
QtWidgets.QDialog.accept(self)
self.auto_save_on_low = self.cbxSaveOnLowConfidence.isChecked()
self.dont_use_year = self.cbxDontUseYear.isChecked()
self.assume_issue_one = self.cbxAssumeIssueOne.isChecked()
self.ignore_leading_digits_in_filename = self.cbxIgnoreLeadingDigitsInFilename.isChecked()
self.remove_after_success = self.cbxRemoveAfterSuccess.isChecked()
self.name_length_match_tolerance = self.sbNameMatchSearchThresh.value()
self.split_words = self.cbxSplitWords.isChecked()
# persist some settings
self.config.Auto_Tag__save_on_low_confidence = self.auto_save_on_low
self.config.Auto_Tag__dont_use_year_when_identifying = self.dont_use_year
self.config.Auto_Tag__assume_issue_one = self.assume_issue_one
self.config.Auto_Tag__ignore_leading_numbers_in_filename = self.ignore_leading_digits_in_filename
self.config.Auto_Tag__remove_archive_after_successful_match = self.remove_after_success
if self.cbxSpecifySearchString.isChecked():
self.search_string = self.leSearchString.text()
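# Usage sketch (assumed caller, not shown in this section): the dialog is modal, and its
# attributes are read back after it is accepted, e.g.
#   dlg = AutoTagStartWindow(main_window, config, "3 archives selected")  # values illustrative
#   if dlg.exec_() == QtWidgets.QDialog.Accepted:
#       run_auto_tag(dlg.search_string, dlg.name_length_match_tolerance, dlg.split_words)
#   (run_auto_tag is a hypothetical consumer of the collected options)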

View File

@ -1,99 +1,90 @@
"""
Class to manage modifying metadata specifically for CBL/CBI
"""
"""A class to manage modifying metadata specifically for CBL/CBI"""
"""
Copyright 2012 Anthony Beville
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
import logging
http://www.apache.org/licenses/LICENSE-2.0
from comicapi.genericmetadata import Credit, GenericMetadata
from comictaggerlib.ctsettings import ct_ns
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import utils
logger = logging.getLogger(__name__)
class CBLTransformer:
def __init__( self, metadata, settings ):
self.metadata = metadata
self.settings = settings
def __init__(self, metadata: GenericMetadata, config: ct_ns) -> None:
self.metadata = metadata
self.config = config
def apply( self ):
# helper funcs
def append_to_tags_if_unique( item ):
if item.lower() not in (tag.lower() for tag in self.metadata.tags):
self.metadata.tags.append( item )
def add_string_list_to_tags( str_list ):
if str_list is not None and str_list != "":
items = [ s.strip() for s in str_list.split(',') ]
for item in items:
append_to_tags_if_unique( item )
def apply(self) -> GenericMetadata:
if self.config.Comic_Book_Lover__assume_lone_credit_is_primary:
# helper
def set_lone_primary(role_list: list[str]) -> tuple[Credit | None, int]:
lone_credit: Credit | None = None
count = 0
for c in self.metadata.credits:
if c["role"].casefold() in role_list:
count += 1
lone_credit = c
if count > 1:
lone_credit = None
break
if lone_credit is not None:
lone_credit["primary"] = True
return lone_credit, count
if self.settings.assume_lone_credit_is_primary:
# helper
def setLonePrimary( role_list ):
lone_credit = None
count = 0
for c in self.metadata.credits:
if c['role'].lower() in role_list:
count += 1
lone_credit = c
if count > 1:
lone_credit = None
break
if lone_credit is not None:
lone_credit['primary'] = True
return lone_credit, count
#need to loop three times, once for 'writer', 'artist', and then 'penciler' if no artist
setLonePrimary( ['writer'] )
c, count = setLonePrimary( ['artist'] )
if c is None and count == 0:
c, count = setLonePrimary( ['penciler', 'penciller'] )
if c is not None:
c['primary'] = False
self.metadata.addCredit( c['person'], 'Artist', True )
# need to loop three times, once for 'writer', 'artist', and then
# 'penciler' if no artist
set_lone_primary(["writer"])
c, count = set_lone_primary(["artist"])
if c is None and count == 0:
c, count = set_lone_primary(["penciler", "penciller"])
if c is not None:
c["primary"] = False
self.metadata.add_credit(c["person"], "Artist", True)
if self.settings.copy_characters_to_tags:
add_string_list_to_tags( self.metadata.characters )
if self.config.Comic_Book_Lover__copy_characters_to_tags:
self.metadata.tags.update(x for x in self.metadata.characters)
if self.settings.copy_teams_to_tags:
add_string_list_to_tags( self.metadata.teams )
if self.settings.copy_locations_to_tags:
add_string_list_to_tags( self.metadata.locations )
if self.settings.copy_notes_to_comments:
if self.metadata.notes is not None:
if self.metadata.comments is None:
self.metadata.comments = ""
else:
self.metadata.comments += "\n\n"
if self.metadata.notes not in self.metadata.comments:
self.metadata.comments += self.metadata.notes
if self.config.Comic_Book_Lover__copy_teams_to_tags:
self.metadata.tags.update(x for x in self.metadata.teams)
if self.settings.copy_weblink_to_comments:
if self.metadata.webLink is not None:
if self.metadata.comments is None:
self.metadata.comments = ""
else:
self.metadata.comments += "\n\n"
if self.metadata.webLink not in self.metadata.comments:
self.metadata.comments += self.metadata.webLink
if self.config.Comic_Book_Lover__copy_locations_to_tags:
self.metadata.tags.update(x for x in self.metadata.locations)
return self.metadata
if self.config.Comic_Book_Lover__copy_storyarcs_to_tags:
self.metadata.tags.update(x for x in self.metadata.story_arcs)
if self.config.Comic_Book_Lover__copy_notes_to_comments:
if self.metadata.notes is not None:
if self.metadata.description is None:
self.metadata.description = ""
else:
self.metadata.description += "\n\n"
if self.metadata.notes not in self.metadata.description:
self.metadata.description += self.metadata.notes
if self.config.Comic_Book_Lover__copy_weblink_to_comments:
for web_link in self.metadata.web_links:
temp_desc = self.metadata.description
if temp_desc is None:
temp_desc = ""
else:
temp_desc += "\n\n"
if web_link.url and web_link.url not in temp_desc:
self.metadata.description = temp_desc + web_link.url
return self.metadata
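# Usage sketch (mirrors the constructor and apply() shown above; the call site is not part
# of this section): apply() mutates and returns the same GenericMetadata, so it composes as
#   md = CBLTransformer(md, config).apply()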

File diff suppressed because it is too large

View File

@ -1,260 +0,0 @@
"""
A python class to encapsulate CoMet data
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from datetime import datetime
import zipfile
from pprint import pprint
import xml.etree.ElementTree as ET
from genericmetadata import GenericMetadata
import utils
class CoMet:
writer_synonyms = ['writer', 'plotter', 'scripter']
penciller_synonyms = [ 'artist', 'penciller', 'penciler', 'breakdowns' ]
inker_synonyms = [ 'inker', 'artist', 'finishes' ]
colorist_synonyms = [ 'colorist', 'colourist', 'colorer', 'colourer' ]
letterer_synonyms = [ 'letterer']
cover_synonyms = [ 'cover', 'covers', 'coverartist', 'cover artist' ]
editor_synonyms = [ 'editor']
def metadataFromString( self, string ):
tree = ET.ElementTree(ET.fromstring( string ))
return self.convertXMLToMetadata( tree )
def stringFromMetadata( self, metadata ):
header = '<?xml version="1.0" encoding="UTF-8"?>\n'
tree = self.convertMetadataToXML( self, metadata )
return header + ET.tostring(tree.getroot())
def indent( self, elem, level=0 ):
# for making the XML output readable
i = "\n" + level*" "
if len(elem):
if not elem.text or not elem.text.strip():
elem.text = i + " "
if not elem.tail or not elem.tail.strip():
elem.tail = i
for elem in elem:
self.indent( elem, level+1 )
if not elem.tail or not elem.tail.strip():
elem.tail = i
else:
if level and (not elem.tail or not elem.tail.strip()):
elem.tail = i
def convertMetadataToXML( self, filename, metadata ):
#shorthand for the metadata
md = metadata
# build a tree structure
root = ET.Element("comet")
root.attrib['xmlns:comet'] = "http://www.denvog.com/comet/"
root.attrib['xmlns:xsi'] = "http://www.w3.org/2001/XMLSchema-instance"
root.attrib['xsi:schemaLocation'] = "http://www.denvog.com http://www.denvog.com/comet/comet.xsd"
#helper func
def assign( comet_entry, md_entry):
if md_entry is not None:
ET.SubElement(root, comet_entry).text = u"{0}".format(md_entry)
# title is mandatory
if md.title is None:
md.title = ""
assign( 'title', md.title )
assign( 'series', md.series )
assign( 'issue', md.issue ) #must be int??
assign( 'volume', md.volume )
assign( 'description', md.comments )
assign( 'publisher', md.publisher )
assign( 'pages', md.pageCount )
assign( 'format', md.format )
assign( 'language', md.language )
assign( 'rating', md.maturityRating )
assign( 'price', md.price )
assign( 'isVersionOf', md.isVersionOf )
assign( 'rights', md.rights )
assign( 'identifier', md.identifier )
assign( 'lastMark', md.lastMark )
assign( 'genre', md.genre ) # TODO repeatable
if md.characters is not None:
char_list = [ c.strip() for c in md.characters.split(',') ]
for c in char_list:
assign( 'character', c )
if md.manga is not None and md.manga == "YesAndRightToLeft":
assign( 'readingDirection', "rtl")
date_str = ""
if md.year is not None:
date_str = str(md.year).zfill(4)
if md.month is not None:
date_str += "-" + str(md.month).zfill(2)
assign( 'date', date_str )
assign( 'coverImage', md.coverImage )
# need to specially process the credits, since they are structured differently than CIX
credit_writer_list = list()
credit_penciller_list = list()
credit_inker_list = list()
credit_colorist_list = list()
credit_letterer_list = list()
credit_cover_list = list()
credit_editor_list = list()
# loop thru credits, and build a list for each role that CoMet supports
for credit in metadata.credits:
if credit['role'].lower() in set( self.writer_synonyms ):
ET.SubElement(root, 'writer').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.penciller_synonyms ):
ET.SubElement(root, 'penciller').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.inker_synonyms ):
ET.SubElement(root, 'inker').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.colorist_synonyms ):
ET.SubElement(root, 'colorist').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.letterer_synonyms ):
ET.SubElement(root, 'letterer').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.cover_synonyms ):
ET.SubElement(root, 'coverDesigner').text = u"{0}".format(credit['person'])
if credit['role'].lower() in set( self.editor_synonyms ):
ET.SubElement(root, 'editor').text = u"{0}".format(credit['person'])
# pretty-print the XML
self.indent(root)
# wrap it in an ElementTree instance, and save as XML
tree = ET.ElementTree(root)
return tree
def convertXMLToMetadata( self, tree ):
root = tree.getroot()
if root.tag != 'comet':
raise ValueError("XML document is not CoMet format")
metadata = GenericMetadata()
md = metadata
# Helper function
def xlate( tag ):
node = root.find( tag )
if node is not None:
return node.text
else:
return None
md.series = xlate( 'series' )
md.title = xlate( 'title' )
md.issue = xlate( 'issue' )
md.volume = xlate( 'volume' )
md.comments = xlate( 'description' )
md.publisher = xlate( 'publisher' )
md.language = xlate( 'language' )
md.format = xlate( 'format' )
md.pageCount = xlate( 'pages' )
md.maturityRating = xlate( 'rating' )
md.price = xlate( 'price' )
md.isVersionOf = xlate( 'isVersionOf' )
md.rights = xlate( 'rights' )
md.identifier = xlate( 'identifier' )
md.lastMark = xlate( 'lastMark' )
md.genre = xlate( 'genre' ) # TODO - repeatable field
date = xlate( 'date' )
if date is not None:
parts = date.split('-')
if len( parts) > 0:
md.year = parts[0]
if len( parts) > 1:
md.month = parts[1]
md.coverImage = xlate( 'coverImage' )
readingDirection = xlate( 'readingDirection' )
if readingDirection is not None and readingDirection == "rtl":
md.manga = "YesAndRightToLeft"
# loop for character tags
char_list = []
for n in root:
if n.tag == 'character':
char_list.append(n.text.strip())
md.characters = utils.listToString( char_list )
# Now extract the credit info
for n in root:
if ( n.tag == 'writer' or
n.tag == 'penciller' or
n.tag == 'inker' or
n.tag == 'colorist' or
n.tag == 'letterer' or
n.tag == 'editor'
):
metadata.addCredit( n.text.strip(), n.tag.title() )
if n.tag == 'coverDesigner':
metadata.addCredit( n.text.strip(), "Cover" )
metadata.isEmpty = False
return metadata
#verify that the string actually contains CoMet data in XML format
def validateString( self, string ):
try:
tree = ET.ElementTree(ET.fromstring( string ))
root = tree.getroot()
if root.tag != 'comet':
raise Exception
except:
return False
return True
def writeToExternalFile( self, filename, metadata ):
tree = self.convertMetadataToXML( self, metadata )
#ET.dump(tree)
tree.write(filename, encoding='utf-8')
def readFromExternalFile( self, filename ):
tree = ET.parse( filename )
return self.convertXMLToMetadata( tree )
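# Round-trip sketch for the removed class (illustrative; field values are hypothetical and
# this requires the old Python 2 package the file belonged to):
#   comet = CoMet()
#   comet_xml = comet.stringFromMetadata(md)      # md: a populated GenericMetadata
#   assert comet.validateString(comet_xml)
#   md2 = comet.metadataFromString(comet_xml)
#   comet.writeToExternalFile("comet.xml", md)    # filename is illustrative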

File diff suppressed because it is too large

View File

@ -1,152 +0,0 @@
"""
A python class to encapsulate the ComicBookInfo data
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import json
from datetime import datetime
import zipfile
from genericmetadata import GenericMetadata
import utils
import ctversion
class ComicBookInfo:
def metadataFromString( self, string ):
cbi_container = json.loads( unicode(string, 'utf-8') )
metadata = GenericMetadata()
cbi = cbi_container[ 'ComicBookInfo/1.0' ]
#helper func
# If item is not in CBI, return None
def xlate( cbi_entry):
if cbi_entry in cbi:
return cbi[cbi_entry]
else:
return None
metadata.series = xlate( 'series' )
metadata.title = xlate( 'title' )
metadata.issue = xlate( 'issue' )
metadata.publisher = xlate( 'publisher' )
metadata.month = xlate( 'publicationMonth' )
metadata.year = xlate( 'publicationYear' )
metadata.issueCount = xlate( 'numberOfIssues' )
metadata.comments = xlate( 'comments' )
metadata.credits = xlate( 'credits' )
metadata.genre = xlate( 'genre' )
metadata.volume = xlate( 'volume' )
metadata.volumeCount = xlate( 'numberOfVolumes' )
metadata.language = xlate( 'language' )
metadata.country = xlate( 'country' )
metadata.criticalRating = xlate( 'rating' )
metadata.tags = xlate( 'tags' )
# make sure credits and tags are at least empty lists and not None
if metadata.credits is None:
metadata.credits = []
if metadata.tags is None:
metadata.tags = []
#need to massage the language string to be ISO
if metadata.language is not None:
# reverse look-up
pattern = metadata.language
metadata.language = None
for key in utils.getLanguageDict():
if utils.getLanguageDict()[ key ] == pattern.encode('utf-8'):
metadata.language = key
break
metadata.isEmpty = False
return metadata
def stringFromMetadata( self, metadata ):
cbi_container = self.createJSONDictionary( metadata )
return json.dumps( cbi_container )
#verify that the string actually contains CBI data in JSON format
def validateString( self, string ):
try:
cbi_container = json.loads( string )
except:
return False
return ( 'ComicBookInfo/1.0' in cbi_container )
def createJSONDictionary( self, metadata ):
# Create the dictionary that we will convert to JSON text
cbi = dict()
cbi_container = {'appID' : 'ComicTagger/' + ctversion.version,
'lastModified' : str(datetime.now()),
'ComicBookInfo/1.0' : cbi }
#helper func
def assign( cbi_entry, md_entry):
if md_entry is not None:
cbi[cbi_entry] = md_entry
#helper func
def toInt(s):
i = None
if type(s) in [ str, unicode, int ]:
try:
i = int(s)
except ValueError:
pass
return i
assign( 'series', metadata.series )
assign( 'title', metadata.title )
assign( 'issue', metadata.issue )
assign( 'publisher', metadata.publisher )
assign( 'publicationMonth', toInt(metadata.month) )
assign( 'publicationYear', toInt(metadata.year) )
assign( 'numberOfIssues', toInt(metadata.issueCount) )
assign( 'comments', metadata.comments )
assign( 'genre', metadata.genre )
assign( 'volume', toInt(metadata.volume) )
assign( 'numberOfVolumes', toInt(metadata.volumeCount) )
assign( 'language', utils.getLanguageFromISO(metadata.language) )
assign( 'country', metadata.country )
assign( 'rating', metadata.criticalRating )
assign( 'credits', metadata.credits )
assign( 'tags', metadata.tags )
return cbi_container
def writeToExternalFile( self, filename, metadata ):
cbi_container = self.createJSONDictionary(metadata)
f = open(filename, 'w')
f.write(json.dumps(cbi_container, indent=4))
f.close()
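# Round-trip sketch for the removed class (illustrative; requires the old Python 2 package):
#   cbi = ComicBookInfo()
#   cbi_string = cbi.stringFromMetadata(md)       # md: a populated GenericMetadata
#   assert cbi.validateString(cbi_string)
#   md2 = cbi.metadataFromString(cbi_string)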

View File

@ -1,291 +0,0 @@
"""
A python class to encapsulate ComicRack's ComicInfo.xml data
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from datetime import datetime
import zipfile
from pprint import pprint
import xml.etree.ElementTree as ET
from genericmetadata import GenericMetadata
import utils
class ComicInfoXml:
writer_synonyms = ['writer', 'plotter', 'scripter']
penciller_synonyms = [ 'artist', 'penciller', 'penciler', 'breakdowns' ]
inker_synonyms = [ 'inker', 'artist', 'finishes' ]
colorist_synonyms = [ 'colorist', 'colourist', 'colorer', 'colourer' ]
letterer_synonyms = [ 'letterer']
cover_synonyms = [ 'cover', 'covers', 'coverartist', 'cover artist' ]
editor_synonyms = [ 'editor']
def getParseableCredits( self ):
parsable_credits = []
parsable_credits.extend( self.writer_synonyms )
parsable_credits.extend( self.penciller_synonyms )
parsable_credits.extend( self.inker_synonyms )
parsable_credits.extend( self.colorist_synonyms )
parsable_credits.extend( self.letterer_synonyms )
parsable_credits.extend( self.cover_synonyms )
parsable_credits.extend( self.editor_synonyms )
return parsable_credits
def metadataFromString( self, string ):
tree = ET.ElementTree(ET.fromstring( string ))
return self.convertXMLToMetadata( tree )
def stringFromMetadata( self, metadata ):
header = '<?xml version="1.0"?>\n'
tree = self.convertMetadataToXML( self, metadata )
return header + ET.tostring(tree.getroot())
def indent( self, elem, level=0 ):
# for making the XML output readable
i = "\n" + level*" "
if len(elem):
if not elem.text or not elem.text.strip():
elem.text = i + " "
if not elem.tail or not elem.tail.strip():
elem.tail = i
for elem in elem:
self.indent( elem, level+1 )
if not elem.tail or not elem.tail.strip():
elem.tail = i
else:
if level and (not elem.tail or not elem.tail.strip()):
elem.tail = i
def convertMetadataToXML( self, filename, metadata ):
#shorthand for the metadata
md = metadata
# build a tree structure
root = ET.Element("ComicInfo")
root.attrib['xmlns:xsi']="http://www.w3.org/2001/XMLSchema-instance"
root.attrib['xmlns:xsd']="http://www.w3.org/2001/XMLSchema"
#helper func
def assign( cix_entry, md_entry):
if md_entry is not None:
ET.SubElement(root, cix_entry).text = u"{0}".format(md_entry)
assign( 'Title', md.title )
assign( 'Series', md.series )
assign( 'Number', md.issue )
assign( 'Count', md.issueCount )
assign( 'Volume', md.volume )
assign( 'AlternateSeries', md.alternateSeries )
assign( 'AlternateNumber', md.alternateNumber )
assign( 'StoryArc', md.storyArc )
assign( 'SeriesGroup', md.seriesGroup )
assign( 'AlternateCount', md.alternateCount )
assign( 'Summary', md.comments )
assign( 'Notes', md.notes )
assign( 'Year', md.year )
assign( 'Month', md.month )
assign( 'Day', md.day )
# need to specially process the credits, since they are structured differently than in CIX
credit_writer_list = list()
credit_penciller_list = list()
credit_inker_list = list()
credit_colorist_list = list()
credit_letterer_list = list()
credit_cover_list = list()
credit_editor_list = list()
# first, loop thru credits, and build a list for each role that CIX supports
for credit in metadata.credits:
if credit['role'].lower() in set( self.writer_synonyms ):
credit_writer_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.penciller_synonyms ):
credit_penciller_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.inker_synonyms ):
credit_inker_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.colorist_synonyms ):
credit_colorist_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.letterer_synonyms ):
credit_letterer_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.cover_synonyms ):
credit_cover_list.append(credit['person'].replace(",",""))
if credit['role'].lower() in set( self.editor_synonyms ):
credit_editor_list.append(credit['person'].replace(",",""))
# second, convert each list to string, and add to XML struct
if len( credit_writer_list ) > 0:
node = ET.SubElement(root, 'Writer')
node.text = utils.listToString( credit_writer_list )
if len( credit_penciller_list ) > 0:
node = ET.SubElement(root, 'Penciller')
node.text = utils.listToString( credit_penciller_list )
if len( credit_inker_list ) > 0:
node = ET.SubElement(root, 'Inker')
node.text = utils.listToString( credit_inker_list )
if len( credit_colorist_list ) > 0:
node = ET.SubElement(root, 'Colorist')
node.text = utils.listToString( credit_colorist_list )
if len( credit_letterer_list ) > 0:
node = ET.SubElement(root, 'Letterer')
node.text = utils.listToString( credit_letterer_list )
if len( credit_cover_list ) > 0:
node = ET.SubElement(root, 'CoverArtist')
node.text = utils.listToString( credit_cover_list )
if len( credit_editor_list ) > 0:
node = ET.SubElement(root, 'Editor')
node.text = utils.listToString( credit_editor_list )
assign( 'Publisher', md.publisher )
assign( 'Imprint', md.imprint )
assign( 'Genre', md.genre )
assign( 'Web', md.webLink )
assign( 'PageCount', md.pageCount )
assign( 'LanguageISO', md.language )
assign( 'Format', md.format )
assign( 'AgeRating', md.maturityRating )
if md.blackAndWhite is not None and md.blackAndWhite:
ET.SubElement(root, 'BlackAndWhite').text = "Yes"
assign( 'Manga', md.manga )
assign( 'Characters', md.characters )
assign( 'Teams', md.teams )
assign( 'Locations', md.locations )
assign( 'ScanInformation', md.scanInfo )
# loop and add the page entries under pages node
if len( md.pages ) > 0:
pages_node = ET.SubElement(root, 'Pages')
for page_dict in md.pages:
page_node = ET.SubElement(pages_node, 'Page')
page_node.attrib = page_dict
# pretty-print the XML
self.indent(root)
# wrap it in an ElementTree instance, and save as XML
tree = ET.ElementTree(root)
return tree
def convertXMLToMetadata( self, tree ):
root = tree.getroot()
if root.tag != 'ComicInfo':
raise ValueError("XML document is not ComicInfo format")
metadata = GenericMetadata()
md = metadata
# Helper function
def xlate( tag ):
node = root.find( tag )
if node is not None:
return node.text
else:
return None
md.series = xlate( 'Series' )
md.title = xlate( 'Title' )
md.issue = xlate( 'Number' )
md.issueCount = xlate( 'Count' )
md.volume = xlate( 'Volume' )
md.alternateSeries = xlate( 'AlternateSeries' )
md.alternateNumber = xlate( 'AlternateNumber' )
md.alternateCount = xlate( 'AlternateCount' )
md.comments = xlate( 'Summary' )
md.notes = xlate( 'Notes' )
md.year = xlate( 'Year' )
md.month = xlate( 'Month' )
md.day = xlate( 'Day' )
md.publisher = xlate( 'Publisher' )
md.imprint = xlate( 'Imprint' )
md.genre = xlate( 'Genre' )
md.webLink = xlate( 'Web' )
md.language = xlate( 'LanguageISO' )
md.format = xlate( 'Format' )
md.manga = xlate( 'Manga' )
md.characters = xlate( 'Characters' )
md.teams = xlate( 'Teams' )
md.locations = xlate( 'Locations' )
md.pageCount = xlate( 'PageCount' )
md.scanInfo = xlate( 'ScanInformation' )
md.storyArc = xlate( 'StoryArc' )
md.seriesGroup = xlate( 'SeriesGroup' )
md.maturityRating = xlate( 'AgeRating' )
tmp = xlate( 'BlackAndWhite' )
md.blackAndWhite = False
if tmp is not None and tmp.lower() in [ "yes", "true", "1" ]:
md.blackAndWhite = True
# Now extract the credit info
for n in root:
if ( n.tag == 'Writer' or
n.tag == 'Penciller' or
n.tag == 'Inker' or
n.tag == 'Colorist' or
n.tag == 'Letterer' or
n.tag == 'Editor'
):
for name in n.text.split(','):
metadata.addCredit( name.strip(), n.tag )
if n.tag == 'CoverArtist':
for name in n.text.split(','):
metadata.addCredit( name.strip(), "Cover" )
# parse page data now
pages_node = root.find( "Pages" )
if pages_node is not None:
for page in pages_node:
metadata.pages.append( page.attrib )
#print page.attrib
metadata.isEmpty = False
return metadata
def writeToExternalFile( self, filename, metadata ):
tree = self.convertMetadataToXML( self, metadata )
#ET.dump(tree)
tree.write(filename, encoding='utf-8')
def readFromExternalFile( self, filename ):
tree = ET.parse( filename )
return self.convertXMLToMetadata( tree )
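The two methods above give a full round trip between GenericMetadata and a ComicRack-style ComicInfo.xml file. A minimal usage sketch follows; it assumes the enclosing class is the ComicInfoXml class defined earlier in this file, and the series, issue, credit, and file name are purely illustrative:

from comicinfoxml import ComicInfoXml
from genericmetadata import GenericMetadata

cix = ComicInfoXml()
md = GenericMetadata()
md.series = "Plastic Man"
md.issue = "1"
md.addCredit("Jack Cole", "Writer")           # routed to the <Writer> element via the synonym lists above
cix.writeToExternalFile("ComicInfo.xml", md)  # serialize to ComicRack-style XML
round_trip = cix.readFromExternalFile("ComicInfo.xml")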


@ -1,426 +0,0 @@
"""
A python class to manage caching of data from Comic Vine
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from pprint import pprint
import sqlite3 as lite
import sys
import os
import datetime
import ctversion
from settings import ComicTaggerSettings
import utils
class ComicVineCacher:
def __init__(self ):
self.settings_folder = ComicTaggerSettings.getSettingsFolder()
self.db_file = os.path.join( self.settings_folder, "cv_cache.db")
self.version_file = os.path.join( self.settings_folder, "cache_version.txt")
#verify that cache is from same version as this one
data = ""
try:
with open( self.version_file, 'rb' ) as f:
data = f.read()
f.close()
except:
pass
if data != ctversion.version:
self.clearCache()
if not os.path.exists( self.db_file ):
self.create_cache_db()
def clearCache( self ):
try:
os.unlink( self.db_file )
except:
pass
try:
os.unlink( self.version_file )
except:
pass
def create_cache_db( self ):
#create the version file
with open( self.version_file, 'w' ) as f:
f.write( ctversion.version )
# this will wipe out any existing version
open( self.db_file, 'w').close()
con = lite.connect( self.db_file )
# create tables
with con:
cur = con.cursor()
#name,id,start_year,publisher,image,description,count_of_issues
cur.execute("CREATE TABLE VolumeSearchCache(" +
"search_term TEXT," +
"id INT," +
"name TEXT," +
"start_year INT," +
"publisher TEXT," +
"count_of_issues INT," +
"image_url TEXT," +
"description TEXT," +
"timestamp DATE DEFAULT (datetime('now','localtime')) ) "
)
cur.execute("CREATE TABLE Volumes(" +
"id INT," +
"name TEXT," +
"publisher TEXT," +
"count_of_issues INT," +
"start_year INT," +
"timestamp DATE DEFAULT (datetime('now','localtime')), " +
"PRIMARY KEY (id) )"
)
cur.execute("CREATE TABLE AltCovers(" +
"issue_id INT," +
"url_list TEXT," +
"timestamp DATE DEFAULT (datetime('now','localtime')), " +
"PRIMARY KEY (issue_id) )"
)
cur.execute("CREATE TABLE Issues(" +
"id INT," +
"volume_id INT," +
"name TEXT," +
"issue_number TEXT," +
"image_url TEXT," +
"image_hash TEXT," +
"thumb_image_url TEXT," +
"thumb_image_hash TEXT," +
"publish_month TEXT," +
"publish_year TEXT," +
"site_detail_url TEXT," +
"timestamp DATE DEFAULT (datetime('now','localtime')), " +
"PRIMARY KEY (id ) )"
)
def add_search_results( self, search_term, cv_search_results ):
con = lite.connect( self.db_file )
with con:
con.text_factory = unicode
cur = con.cursor()
# remove all previous entries with this search term
cur.execute("DELETE FROM VolumeSearchCache WHERE search_term = ?", [ search_term.lower() ])
# now add in new results
for record in cv_search_results:
timestamp = datetime.datetime.now()
if record['publisher'] is None:
pub_name = ""
else:
pub_name = record['publisher']['name']
if record['image'] is None:
url = ""
else:
url = record['image']['super_url']
cur.execute("INSERT INTO VolumeSearchCache " +
"(search_term, id, name, start_year, publisher, count_of_issues, image_url, description ) " +
"VALUES( ?, ?, ?, ?, ?, ?, ?, ? )" ,
( search_term.lower(),
record['id'],
record['name'],
record['start_year'],
pub_name,
record['count_of_issues'],
url,
record['description'])
)
def get_search_results( self, search_term ):
results = list()
con = lite.connect( self.db_file )
with con:
con.text_factory = unicode
cur = con.cursor()
# purge stale search results
a_day_ago = datetime.datetime.today()-datetime.timedelta(days=1)
cur.execute( "DELETE FROM VolumeSearchCache WHERE timestamp < ?", [ str(a_day_ago) ] )
# fetch
cur.execute("SELECT * FROM VolumeSearchCache WHERE search_term=?", [ search_term.lower() ] )
rows = cur.fetchall()
# now process the results
for record in rows:
result = dict()
result['id'] = record[1]
result['name'] = record[2]
result['start_year'] = record[3]
result['publisher'] = dict()
result['publisher']['name'] = record[4]
result['count_of_issues'] = record[5]
result['image'] = dict()
result['image']['super_url'] = record[6]
result['description'] = record[7]
results.append(result)
return results
def add_alt_covers( self, issue_id, url_list ):
con = lite.connect( self.db_file )
with con:
con.text_factory = unicode
cur = con.cursor()
# remove all previous entries with this search term
cur.execute("DELETE FROM AltCovers WHERE issue_id = ?", [ issue_id ])
url_list_str = utils.listToString(url_list)
# now add in new record
cur.execute("INSERT INTO AltCovers " +
"(issue_id, url_list ) " +
"VALUES( ?, ? )" ,
( issue_id,
url_list_str)
)
def get_alt_covers( self, issue_id ):
con = lite.connect( self.db_file )
with con:
cur = con.cursor()
con.text_factory = unicode
# purge stale issue info - probably issue data won't change much....
a_month_ago = datetime.datetime.today()-datetime.timedelta(days=30)
cur.execute( "DELETE FROM AltCovers WHERE timestamp < ?", [ str(a_month_ago) ] )
cur.execute("SELECT url_list FROM AltCovers WHERE issue_id=?", [ issue_id ])
row = cur.fetchone()
if row is None :
return None
else:
url_list_str = row[0]
if len(url_list_str) == 0:
return []
raw_list = url_list_str.split(",")
url_list = []
for item in raw_list:
url_list.append( str(item).strip())
return url_list
def add_volume_info( self, cv_volume_record ):
con = lite.connect( self.db_file )
with con:
cur = con.cursor()
timestamp = datetime.datetime.now()
if cv_volume_record['publisher'] is None:
pub_name = ""
else:
pub_name = cv_volume_record['publisher']['name']
data = {
"name": cv_volume_record['name'],
"publisher": pub_name,
"count_of_issues": cv_volume_record['count_of_issues'],
"start_year": cv_volume_record['start_year'],
"timestamp": timestamp
}
self.upsert( cur, "volumes", "id", cv_volume_record['id'], data)
# now add in issues
for issue in cv_volume_record['issues']:
data = {
"volume_id": cv_volume_record['id'],
"name": issue['name'],
"issue_number": issue['issue_number'],
"timestamp": timestamp
}
self.upsert( cur, "issues" , "id", issue['id'], data)
def get_volume_info( self, volume_id ):
result = None
con = lite.connect( self.db_file )
with con:
cur = con.cursor()
con.text_factory = unicode
# purge stale volume info
a_week_ago = datetime.datetime.today()-datetime.timedelta(days=7)
cur.execute( "DELETE FROM Volumes WHERE timestamp < ?", [ str(a_week_ago) ] )
# purge stale issue info - probably issue data won't change much....
a_month_ago = datetime.datetime.today()-datetime.timedelta(days=30)
cur.execute( "DELETE FROM Issues WHERE timestamp < ?", [ str(a_month_ago) ] )
# fetch
cur.execute("SELECT id,name,publisher,count_of_issues,start_year FROM Volumes WHERE id = ?", [ volume_id ] )
row = cur.fetchone()
if row is None :
return result
result = dict()
#since ID is primary key, there is only one row
result['id'] = row[0]
result['name'] = row[1]
result['publisher'] = dict()
result['publisher']['name'] = row[2]
result['count_of_issues'] = row[3]
result['start_year'] = row[4]
result['issues'] = list()
cur.execute("SELECT id,name,issue_number,image_url,image_hash FROM Issues WHERE volume_id = ?", [ volume_id ] )
rows = cur.fetchall()
# now process the results
for row in rows:
record = dict()
record['id'] = row[0]
record['name'] = row[1]
record['issue_number'] = row[2]
record['image_url'] = row[3]
record['image_hash'] = row[4]
result['issues'].append(record)
return result
def add_issue_select_details( self, issue_id, image_url, thumb_image_url, publish_month, publish_year, site_detail_url ):
con = lite.connect( self.db_file )
with con:
cur = con.cursor()
con.text_factory = unicode
timestamp = datetime.datetime.now()
data = {
"image_url": image_url,
"thumb_image_url": thumb_image_url,
"publish_month": publish_month,
"publish_year": publish_year,
"site_detail_url": site_detail_url,
"timestamp": timestamp
}
self.upsert( cur, "issues" , "id", issue_id, data)
def get_issue_select_details( self, issue_id ):
con = lite.connect( self.db_file )
with con:
cur = con.cursor()
con.text_factory = unicode
cur.execute("SELECT image_url,thumb_image_url,publish_month,publish_year,site_detail_url FROM Issues WHERE id=?", [ issue_id ])
row = cur.fetchone()
details = dict()
if row is None or row[0] is None :
details['image_url'] = None
details['thumb_image_url'] = None
details['publish_month'] = None
details['publish_year'] = None
details['site_detail_url'] = None
else:
details['image_url'] = row[0]
details['thumb_image_url'] = row[1]
details['publish_month'] = row[2]
details['publish_year'] = row[3]
details['site_detail_url'] = row[4]
return details
def upsert( self, cur, tablename, pkname, pkval, data):
"""
This does an insert if the given PK doesn't exist, and an update if it does
"""
# TODO - look into checking if UPDATE is needed
# TODO - should the cursor be created here, and not up the stack?
ins_count = len(data) + 1
keys = ""
vals = list()
ins_slots = ""
set_slots = ""
for key in data:
if keys != "":
keys += ", "
if ins_slots != "":
ins_slots += ", "
if set_slots != "":
set_slots += ", "
keys += key
vals.append( data[key] )
ins_slots += "?"
set_slots += key + " = ?"
keys += ", " + pkname
vals.append( pkval )
ins_slots += ", ?"
condition = pkname + " = ?"
sql_ins = ( "INSERT OR IGNORE INTO " + tablename +
" ( " + keys + " ) " +
" VALUES ( " + ins_slots + " )" )
cur.execute( sql_ins , vals )
sql_upd = ( "UPDATE " + tablename +
" SET " + set_slots + " WHERE " + condition )
cur.execute( sql_upd , vals )
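The upsert above emulates insert-or-update with an INSERT OR IGNORE followed by an unconditional UPDATE, which works on any SQLite version. Below is a minimal sketch of the same operation using SQLite's native UPSERT clause (SQLite 3.24+ only, so not an option for the interpreters this file targeted); the helper name is hypothetical, while the table and column names follow create_cache_db above:

def upsert_volume(cur, volume_id, data):
    # cur is an sqlite3 cursor; data maps column name -> value,
    # e.g. {"name": ..., "publisher": ..., "count_of_issues": ..., "start_year": ...}
    cols = ", ".join(data)
    slots = ", ".join("?" for _ in data)
    updates = ", ".join("{0} = excluded.{0}".format(k) for k in data)
    cur.execute(
        "INSERT INTO Volumes (id, " + cols + ") VALUES (?, " + slots + ") "
        "ON CONFLICT(id) DO UPDATE SET " + updates,
        [volume_id] + list(data.values()),
    )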


@ -1,497 +0,0 @@
"""
A python class to manage communication with Comic Vine's REST API
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import json
from pprint import pprint
import urllib2, urllib
import math
import re
import datetime
import ctversion
import sys
from bs4 import BeautifulSoup
try:
from PyQt4.QtNetwork import QNetworkAccessManager, QNetworkRequest
from PyQt4.QtCore import QUrl, pyqtSignal, QObject, QByteArray
except ImportError:
# No Qt, so define a few dummy QObjects to help us compile
class QObject():
def __init__(self,*args):
pass
class pyqtSignal():
def __init__(self,*args):
pass
def emit(a,b,c):
pass
import utils
from settings import ComicTaggerSettings
from comicvinecacher import ComicVineCacher
from genericmetadata import GenericMetadata
from issuestring import IssueString
class ComicVineTalkerException(Exception):
pass
class ComicVineTalker(QObject):
def __init__(self, api_key=""):
QObject.__init__(self)
# key that is registered to comictagger
self.api_key = '27431e6787042105bd3e47e169a624521f89f3a4'
self.log_func = None
def setLogFunc( self , log_func ):
self.log_func = log_func
def writeLog( self , text ):
if self.log_func is None:
#sys.stdout.write(text.encode( errors='replace') )
#sys.stdout.flush()
print >> sys.stderr, text
else:
self.log_func( text )
def testKey( self ):
test_url = "http://api.comicvine.com/issue/1/?api_key=" + self.api_key + "&format=json&field_list=name"
resp = urllib2.urlopen( test_url )
content = resp.read()
cv_response = json.loads( content )
# Bogus request, but if the key is wrong, you get error 100: "Invalid API Key"
return cv_response[ 'status_code' ] != 100
def getUrlContent( self, url ):
try:
resp = urllib2.urlopen( url )
return resp.read()
except Exception as e:
self.writeLog( str(e) )
raise ComicVineTalkerException("Network Error!")
def searchForSeries( self, series_name , callback=None, refresh_cache=False ):
# remove cruft from the search string
series_name = utils.removearticles( series_name ).lower().strip()
# before we search online, look in our cache, since we might have
# done this same search recently
cvc = ComicVineCacher( )
if not refresh_cache:
cached_search_results = cvc.get_search_results( series_name )
if len (cached_search_results) > 0:
return cached_search_results
original_series_name = series_name
series_name = urllib.quote_plus(series_name.encode("utf-8"))
#series_name = urllib.quote_plus(unicode(series_name))
search_url = "http://api.comicvine.com/search/?api_key=" + self.api_key + "&format=json&resources=volume&query=" + series_name + "&field_list=name,id,start_year,publisher,image,description,count_of_issues&sort=start_year"
content = self.getUrlContent(search_url)
cv_response = json.loads(content)
if cv_response[ 'status_code' ] != 1:
self.writeLog( "Comic Vine query failed with error: [{0}]. \n".format( cv_response[ 'error' ] ))
return None
search_results = list()
# see http://api.comicvine.com/documentation/#handling_responses
limit = cv_response['limit']
current_result_count = cv_response['number_of_page_results']
total_result_count = cv_response['number_of_total_results']
if callback is None:
self.writeLog( "Found {0} of {1} results\n".format( cv_response['number_of_page_results'], cv_response['number_of_total_results']))
search_results.extend( cv_response['results'])
offset = 0
if callback is not None:
callback( current_result_count, total_result_count )
# see if we need to keep asking for more pages...
while ( current_result_count < total_result_count ):
if callback is None:
self.writeLog("getting another page of results {0} of {1}...\n".format( current_result_count, total_result_count))
offset += limit
content = self.getUrlContent(search_url + "&offset="+str(offset))
cv_response = json.loads(content)
if cv_response[ 'status_code' ] != 1:
self.writeLog( "Comic Vine query failed with error: [{0}]. \n".format( cv_response[ 'error' ] ))
return None
search_results.extend( cv_response['results'])
current_result_count += cv_response['number_of_page_results']
if callback is not None:
callback( current_result_count, total_result_count )
#for record in search_results:
# print( "{0}: {1} ({2})".format(record['id'], smart_str(record['name']) , record['start_year'] ) )
# print( "{0}: {1} ({2})".format(record['id'], record['name'] , record['start_year'] ) )
#print "{0}: {1} ({2})".format(search_results['results'][0]['id'], smart_str(search_results['results'][0]['name']) , search_results['results'][0]['start_year'] )
# cache these search results
cvc.add_search_results( original_series_name, search_results )
return search_results
def fetchVolumeData( self, series_id ):
# before we search online, look in our cache, since we might already
# have this info
cvc = ComicVineCacher( )
cached_volume_result = cvc.get_volume_info( series_id )
if cached_volume_result is not None:
return cached_volume_result
volume_url = "http://api.comicvine.com/volume/" + str(series_id) + "/?api_key=" + self.api_key + "&format=json"
content = self.getUrlContent(volume_url)
cv_response = json.loads(content)
if cv_response[ 'status_code' ] != 1:
print >> sys.stderr, "Comic Vine query failed with error: [{0}]. ".format( cv_response[ 'error' ] )
return None
volume_results = cv_response['results']
cvc.add_volume_info( volume_results )
return volume_results
def fetchIssueData( self, series_id, issue_number, settings ):
volume_results = self.fetchVolumeData( series_id )
found = False
for record in volume_results['issues']:
if IssueString(issue_number).asFloat() is None:
issue_number = 1
if float(record['issue_number']) == IssueString(issue_number).asFloat():
found = True
break
if (found):
issue_url = "http://api.comicvine.com/issue/" + str(record['id']) + "/?api_key=" + self.api_key + "&format=json"
content = self.getUrlContent(issue_url)
cv_response = json.loads(content)
if cv_response[ 'status_code' ] != 1:
print >> sys.stderr, "Comic Vine query failed with error: [{0}]. ".format( cv_response[ 'error' ] )
return None
issue_results = cv_response['results']
else:
return None
# now, map the comicvine data to generic metadata
return self.mapCVDataToMetadata( volume_results, issue_results, settings )
def fetchIssueDataByIssueID( self, issue_id, settings ):
issue_url = "http://api.comicvine.com/issue/" + str(issue_id) + "/?api_key=" + self.api_key + "&format=json"
content = self.getUrlContent(issue_url)
cv_response = json.loads(content)
if cv_response[ 'status_code' ] != 1:
print >> sys.stderr, "Comic Vine query failed with error: [{0}]. ".format( cv_response[ 'error' ] )
return None
issue_results = cv_response['results']
volume_results = self.fetchVolumeData( issue_results['volume']['id'] )
# now, map the comicvine data to generic metadata
md = self.mapCVDataToMetadata( volume_results, issue_results, settings )
md.isEmpty = False
return md
def mapCVDataToMetadata(self, volume_results, issue_results, settings ):
# now, map the comicvine data to generic metadata
metadata = GenericMetadata()
metadata.series = issue_results['volume']['name']
num_s = IssueString(issue_results['issue_number']).asString()
metadata.issue = num_s
metadata.title = issue_results['name']
metadata.publisher = volume_results['publisher']['name']
metadata.month = issue_results['publish_month']
metadata.year = issue_results['publish_year']
#metadata.issueCount = volume_results['count_of_issues']
metadata.comments = self.cleanup_html(issue_results['description'])
if settings.use_series_start_as_volume:
metadata.volume = volume_results['start_year']
metadata.notes = "Tagged with ComicTagger {0} using info from Comic Vine on {1}. [Issue ID {2}]".format(
ctversion.version,
datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
issue_results['id'])
#metadata.notes += issue_results['site_detail_url']
metadata.webLink = issue_results['site_detail_url']
person_credits = issue_results['person_credits']
for person in person_credits:
for role in person['roles']:
# can we determine 'primary' from CV??
role_name = role['role'].title()
metadata.addCredit( person['name'], role['role'].title(), False )
character_credits = issue_results['character_credits']
character_list = list()
for character in character_credits:
character_list.append( character['name'] )
metadata.characters = utils.listToString( character_list )
team_credits = issue_results['team_credits']
team_list = list()
for team in team_credits:
team_list.append( team['name'] )
metadata.teams = utils.listToString( team_list )
location_credits = issue_results['location_credits']
location_list = list()
for location in location_credits:
location_list.append( location['name'] )
metadata.locations = utils.listToString( location_list )
story_arc_credits = issue_results['story_arc_credits']
arc_list = []
for arc in story_arc_credits:
arc_list.append(arc['name'])
if len(arc_list) > 0:
metadata.storyArc = utils.listToString(arc_list)
return metadata
def cleanup_html( self, string):
# remove all newlines first
string = string.replace("\n", "")
#put in our own
string = string.replace("<br>", "\n")
string = string.replace("</p>", "\n\n")
string = string.replace("<h4>", "*")
string = string.replace("</h4>", "*\n")
# now strip all other tags
p = re.compile(r'<[^<]*?>')
newstring = p.sub('',string)
newstring = newstring.replace('&nbsp;',' ')
newstring = newstring.replace('&amp;','&')
newstring = newstring.strip()
return newstring
def fetchIssueDate( self, issue_id ):
details = self.fetchIssueSelectDetails( issue_id )
return details['publish_month'], details['publish_year']
def fetchIssueCoverURLs( self, issue_id ):
details = self.fetchIssueSelectDetails( issue_id )
return details['image_url'], details['thumb_image_url']
def fetchIssuePageURL( self, issue_id ):
details = self.fetchIssueSelectDetails( issue_id )
return details['site_detail_url']
def fetchIssueSelectDetails( self, issue_id ):
#cached_image_url,cached_thumb_url,cached_month,cached_year = self.fetchCachedIssueSelectDetails( issue_id )
cached_details = self.fetchCachedIssueSelectDetails( issue_id )
if cached_details['image_url'] is not None:
return cached_details
issue_url = "http://api.comicvine.com/issue/" + str(issue_id) + "/?api_key=" + self.api_key + "&format=json&field_list=image,publish_month,publish_year,site_detail_url"
content = self.getUrlContent(issue_url)
details = dict()
details['image_url'] = None
details['thumb_image_url'] = None
details['publish_month'] = None
details['publish_year'] = None
details['site_detail_url'] = None
cv_response = json.loads(content)
if cv_response[ 'status_code' ] != 1:
print >> sys.stderr, "Comic Vine query failed with error: [{0}]. ".format( cv_response[ 'error' ] )
return details
details['image_url'] = cv_response['results']['image']['super_url']
details['thumb_image_url'] = cv_response['results']['image']['thumb_url']
details['publish_year'] = cv_response['results']['publish_year']
details['publish_month'] = cv_response['results']['publish_month']
details['site_detail_url'] = cv_response['results']['site_detail_url']
if details['image_url'] is not None:
self.cacheIssueSelectDetails( issue_id,
details['image_url'],
details['thumb_image_url'],
details['publish_month'],
details['publish_year'],
details['site_detail_url'] )
#print details['site_detail_url']
return details
def fetchCachedIssueSelectDetails( self, issue_id ):
# before we search online, look in our cache, since we might already
# have this info
cvc = ComicVineCacher( )
return cvc.get_issue_select_details( issue_id )
def cacheIssueSelectDetails( self, issue_id, image_url, thumb_url, month, year, page_url ):
cvc = ComicVineCacher( )
cvc.add_issue_select_details( issue_id, image_url, thumb_url, month, year, page_url )
def fetchAlternateCoverURLs(self, issue_id):
url_list = self.fetchCachedAlternateCoverURLs( issue_id )
if url_list is not None:
return url_list
issue_page_url = self.fetchIssuePageURL( issue_id )
# scrape the CV issue page URL to get the alternate cover URLs
resp = urllib2.urlopen( issue_page_url )
content = resp.read()
alt_cover_url_list = self.parseOutAltCoverUrls( content)
# cache this alt cover URL list
self.cacheAlternateCoverURLs( issue_id, alt_cover_url_list )
return alt_cover_url_list
def parseOutAltCoverUrls( self, page_html ):
soup = BeautifulSoup( page_html )
alt_cover_url_list = []
# Using knowledge of the layout of the ComicVine issue page here:
# look for the divs that are in the classes 'content-pod' and 'alt-cover'
div_list = soup.find_all( 'div')
for d in div_list:
if d.has_key('class'):
c = d['class']
if 'content-pod' in c and 'alt-cover' in c:
alt_cover_url_list.append( d.img['src'] )
return alt_cover_url_list
def fetchCachedAlternateCoverURLs( self, issue_id ):
# before we search online, look in our cache, since we might already
# have this info
cvc = ComicVineCacher( )
url_list = cvc.get_alt_covers( issue_id )
if url_list is not None:
return url_list
else:
return None
def cacheAlternateCoverURLs( self, issue_id, url_list ):
cvc = ComicVineCacher( )
cvc.add_alt_covers( issue_id, url_list )
#---------------------------------------------------------------------------
urlFetchComplete = pyqtSignal( str , str, int)
def asyncFetchIssueCoverURLs( self, issue_id ):
self.issue_id = issue_id
details = self.fetchCachedIssueSelectDetails( issue_id )
if details['image_url'] is not None:
self.urlFetchComplete.emit( details['image_url'],details['thumb_image_url'], self.issue_id )
return
issue_url = "http://api.comicvine.com/issue/" + str(issue_id) + "/?api_key=" + self.api_key + "&format=json&field_list=image,publish_month,publish_year,site_detail_url"
self.nam = QNetworkAccessManager()
self.nam.finished.connect( self.asyncFetchIssueCoverURLComplete )
self.nam.get(QNetworkRequest(QUrl(issue_url)))
def asyncFetchIssueCoverURLComplete( self, reply ):
# read in the response
data = reply.readAll()
cv_response = json.loads(str(data))
if cv_response[ 'status_code' ] != 1:
print >> sys.stderr, "Comic Vine query failed with error: [{0}]. ".format( cv_response[ 'error' ] )
return
image_url = cv_response['results']['image']['super_url']
thumb_url = cv_response['results']['image']['thumb_url']
year = cv_response['results']['publish_year']
month = cv_response['results']['publish_month']
page_url = cv_response['results']['site_detail_url']
self.cacheIssueSelectDetails( self.issue_id, image_url, thumb_url, month, year, page_url )
self.urlFetchComplete.emit( image_url, thumb_url, self.issue_id )
altUrlListFetchComplete = pyqtSignal( list, int)
def asyncFetchAlternateCoverURLs( self, issue_id, issue_page_url ):
# This async version requires the issue page url to be provided!
self.issue_id = issue_id
url_list = self.fetchCachedAlternateCoverURLs( issue_id )
if url_list is not None:
self.altUrlListFetchComplete.emit( url_list, int(self.issue_id) )
return
self.nam = QNetworkAccessManager()
self.nam.finished.connect( self.asyncFetchAlternateCoverURLsComplete )
self.nam.get(QNetworkRequest(QUrl(str(issue_page_url))))
def asyncFetchAlternateCoverURLsComplete( self, reply ):
# read in the response
html = str(reply.readAll())
alt_cover_url_list = self.parseOutAltCoverUrls( html )
# cache this alt cover URL list
self.cacheAlternateCoverURLs( self.issue_id, alt_cover_url_list )
self.altUrlListFetchComplete.emit( alt_cover_url_list, int(self.issue_id) )
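searchForSeries above pages through Comic Vine results by growing the offset by the reported limit until number_of_page_results accumulates to number_of_total_results. Here is a stripped-down sketch of that pagination pattern, with fetch_page standing in for the URL and JSON plumbing (it is a hypothetical callable, not part of this module):

def fetch_all(fetch_page):
    # fetch_page(offset) returns a dict with the Comic Vine response keys
    # used above: 'limit', 'results', 'number_of_page_results',
    # 'number_of_total_results'
    offset = 0
    results = []
    while True:
        page = fetch_page(offset)
        results.extend(page['results'])
        fetched = offset + page['number_of_page_results']
        if fetched >= page['number_of_total_results']:
            return results
        offset += page['limit']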


@ -1,290 +1,300 @@
"""
A PyQt4 widget to display cover images from either a local archive, or from ComicVine
"""A PyQt5 widget to display cover images
Display cover images from either a local archive, or from comic source metadata.
TODO: This should be re-factored using subclasses!
"""
"""
Copyright 2012 Anthony Beville
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
import logging
import pathlib
http://www.apache.org/licenses/LICENSE-2.0
from PyQt5 import QtCore, QtGui, QtWidgets, uic
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from comicapi.comicarchive import ComicArchive
from comictaggerlib.graphics import graphics_path
from comictaggerlib.imagefetcher import ImageFetcher
from comictaggerlib.imagepopup import ImagePopup
from comictaggerlib.pageloader import PageLoader
from comictaggerlib.ui import ui_path
from comictaggerlib.ui.qtutils import get_qimage_from_data, reduce_widget_font_size
from comictalker.comictalker import ComicTalker
import os
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4 import uic
from settings import ComicTaggerSettings
from genericmetadata import GenericMetadata, PageType
from comicarchive import MetaDataStyle
from comicvinetalker import ComicVineTalker, ComicVineTalkerException
from imagefetcher import ImageFetcher
from pageloader import PageLoader
from imagepopup import ImagePopup
import utils
# helper func to allow a label to be clickable
def clickable(widget):
class Filter(QObject):
dblclicked = pyqtSignal()
def eventFilter(self, obj, event):
if obj == widget:
if event.type() == QEvent.MouseButtonDblClick:
self.dblclicked.emit()
return True
return False
filter = Filter(widget)
widget.installEventFilter(filter)
return filter.dblclicked
logger = logging.getLogger(__name__)
class CoverImageWidget(QWidget):
ArchiveMode = 0
AltCoverMode = 1
URLMode = 1
def __init__(self, parent, mode ):
super(CoverImageWidget, self).__init__(parent)
uic.loadUi(ComicTaggerSettings.getUIFile('coverimagewidget.ui' ), self)
def clickable(widget: QtWidgets.QWidget) -> QtCore.pyqtBoundSignal:
"""Allow a label to be clickable"""
utils.reduceWidgetFontSize( self.label )
class Filter(QtCore.QObject):
dblclicked = QtCore.pyqtSignal()
self.mode = mode
self.comicVine = ComicVineTalker()
self.page_loader = None
self.showControls = True
def eventFilter(self, obj: QtCore.QObject, event: QtCore.QEvent) -> bool:
if obj == widget:
if event.type() == QtCore.QEvent.Type.MouseButtonDblClick:
self.dblclicked.emit()
return True
return False
self.btnLeft.setIcon(QIcon(ComicTaggerSettings.getGraphic('left.png')))
self.btnRight.setIcon(QIcon(ComicTaggerSettings.getGraphic('right.png')))
self.btnLeft.clicked.connect( self.decrementImage )
self.btnRight.clicked.connect( self.incrementImage )
self.resetWidget()
clickable(self.lblImage).connect(self.showPopup)
flt = Filter(widget)
widget.installEventFilter(flt)
return flt.dblclicked
self.updateContent()
def resetWidget(self):
self.comic_archive = None
self.issue_id = None
self.comicVine = None
self.cover_fetcher = None
self.url_list = []
if self.page_loader is not None:
self.page_loader.abandoned = True
self.page_loader = None
self.imageIndex = -1
self.imageCount = 1
def clear( self ):
self.resetWidget()
self.updateContent()
def incrementImage( self ):
self.imageIndex += 1
if self.imageIndex == self.imageCount:
self.imageIndex = 0
self.updateContent()
class CoverImageWidget(QtWidgets.QWidget):
ArchiveMode = 0
AltCoverMode = 1
URLMode = 1
DataMode = 3
def decrementImage( self ):
self.imageIndex -= 1
if self.imageIndex == -1:
self.imageIndex = self.imageCount -1
self.updateContent()
def setArchive( self, ca, page=0 ):
if self.mode == CoverImageWidget.ArchiveMode:
self.resetWidget()
self.comic_archive = ca
self.imageIndex = page
self.imageCount = ca.getNumberOfPages()
self.updateContent()
image_fetch_complete = QtCore.pyqtSignal(str, QtCore.QByteArray)
def setURL( self, url ):
if self.mode == CoverImageWidget.URLMode:
self.resetWidget()
self.updateContent()
self.url_list = [ url ]
self.imageIndex = 0
self.imageCount = 1
self.updateContent()
def __init__(
self,
parent: QtWidgets.QWidget,
mode: int,
cache_folder: pathlib.Path | None,
talker: ComicTalker | None,
expand_on_click: bool = True,
) -> None:
super().__init__(parent)
def setIssueID( self, issue_id ):
if self.mode == CoverImageWidget.AltCoverMode:
self.resetWidget()
self.updateContent()
self.issue_id = issue_id
if mode not in (self.AltCoverMode, self.URLMode) or cache_folder is None:
self.cover_fetcher = None
self.talker = None
else:
self.cover_fetcher = ImageFetcher(cache_folder)
self.talker = None
with (ui_path / "coverimagewidget.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.comicVine = ComicVineTalker()
self.comicVine.urlFetchComplete.connect( self.primaryUrlFetchComplete )
self.comicVine.asyncFetchIssueCoverURLs( int(self.issue_id) )
def primaryUrlFetchComplete( self, primary_url, thumb_url, issue_id ):
self.url_list.append(str(primary_url))
self.imageIndex = 0
self.imageCount = len(self.url_list)
self.updateContent()
reduce_widget_font_size(self.label)
#defer the alt cover search
QTimer.singleShot(1, self.startAltCoverSearch)
self.cache_folder = cache_folder
self.mode: int = mode
self.page_loader: PageLoader | None = None
self.showControls = True
def startAltCoverSearch( self ):
self.current_pixmap = QtGui.QPixmap()
# now we need to get the list of alt cover URLs
self.label.setText("Searching for alt. covers...")
# page URL should already be cached, so no need to defer
self.comicVine = ComicVineTalker()
issue_page_url = self.comicVine.fetchIssuePageURL( self.issue_id )
self.comicVine.altUrlListFetchComplete.connect( self.altCoverUrlListFetchComplete )
self.comicVine.asyncFetchAlternateCoverURLs( int(self.issue_id), issue_page_url)
def altCoverUrlListFetchComplete( self, url_list, issue_id ):
if len(url_list) > 0:
self.url_list.extend(url_list)
self.imageCount = len(self.url_list)
self.updateControls()
self.comic_archive: ComicArchive | None = None
self.issue_id: str = ""
self.issue_url: str | None = None
self.url_list: list[str] = []
if self.page_loader is not None:
self.page_loader.abandoned = True
self.page_loader = None
self.imageIndex = -1
self.imageCount = 1
self.imageData = b""
def setPage( self, pagenum ):
if self.mode == CoverImageWidget.ArchiveMode:
self.imageIndex = pagenum
self.updateContent()
def updateContent( self ):
self.updateImage()
self.updateControls()
def updateImage( self ):
if self.imageIndex == -1:
self.loadDefault()
elif self.mode in [ CoverImageWidget.AltCoverMode, CoverImageWidget.URLMode ]:
self.loadURL()
else:
self.loadPage()
def updateControls( self ):
if not self.showControls:
self.btnLeft.hide()
self.btnRight.hide()
self.label.hide()
return
if self.imageIndex == -1 or self.imageCount == 1:
self.btnLeft.setEnabled(False)
self.btnRight.setEnabled(False)
self.btnLeft.hide()
self.btnRight.hide()
else:
self.btnLeft.setEnabled(True)
self.btnRight.setEnabled(True)
self.btnLeft.show()
self.btnRight.show()
if self.imageIndex == -1 or self.imageCount == 1:
self.label.setText("")
elif self.mode == CoverImageWidget.AltCoverMode:
self.label.setText("Cover {0} ( of {1} )".format(self.imageIndex+1, self.imageCount))
else:
self.label.setText("Page {0} ( of {1} )".format(self.imageIndex+1, self.imageCount))
def loadURL( self ):
self.loadDefault()
self.cover_fetcher = ImageFetcher( )
self.cover_fetcher.fetchComplete.connect(self.coverRemoteFetchComplete)
self.cover_fetcher.fetch( self.url_list[self.imageIndex] )
#print "ATB cover fetch started...."
# called when the image is done loading from internet
def coverRemoteFetchComplete( self, image_data, issue_id ):
img = QImage()
img.loadFromData( image_data )
self.current_pixmap = QPixmap(img)
self.setDisplayPixmap( 0, 0)
#print "ATB cover fetch complete!"
self.btnLeft.setIcon(QtGui.QIcon(str(graphics_path / "left.png")))
self.btnRight.setIcon(QtGui.QIcon(str(graphics_path / "right.png")))
def loadPage( self ):
if self.comic_archive is not None:
if self.page_loader is not None:
self.page_loader.abandoned = True
self.page_loader = PageLoader( self.comic_archive, self.imageIndex )
self.page_loader.loadComplete.connect( self.pageLoadComplete )
self.page_loader.start()
self.btnLeft.clicked.connect(self.decrement_image)
self.btnRight.clicked.connect(self.increment_image)
self.image_fetch_complete.connect(self.cover_remote_fetch_complete)
if expand_on_click:
clickable(self.lblImage).connect(self.show_popup)
else:
self.lblImage.setToolTip("")
def pageLoadComplete( self, img ):
self.current_pixmap = QPixmap(img)
self.setDisplayPixmap( 0, 0)
self.page_loader = None
def loadDefault( self ):
self.current_pixmap = QPixmap(ComicTaggerSettings.getGraphic('nocover.png'))
#print "loadDefault called"
self.setDisplayPixmap( 0, 0)
self.update_content()
def resizeEvent( self, resize_event ):
if self.current_pixmap is not None:
delta_w = resize_event.size().width() - resize_event.oldSize().width()
delta_h = resize_event.size().height() - resize_event.oldSize().height()
#print "ATB resizeEvent deltas", resize_event.size().width(), resize_event.size().height()
self.setDisplayPixmap( delta_w , delta_h )
def setDisplayPixmap( self, delta_w , delta_h ):
# the deltas let us know what the new width and height of the label will be
"""
new_h = self.frame.height() + delta_h
new_w = self.frame.width() + delta_w
print "ATB setDisplayPixmap deltas", delta_w , delta_h
print "ATB self.frame", self.frame.width(), self.frame.height()
print "ATB self.", self.width(), self.height()
frame_w = new_w
frame_h = new_h
"""
new_h = self.frame.height()
new_w = self.frame.width()
frame_w = self.frame.width()
frame_h = self.frame.height()
def reset_widget(self) -> None:
self.comic_archive = None
self.issue_id = ""
self.issue_url = None
self.url_list = []
if self.page_loader is not None:
self.page_loader.abandoned = True
self.page_loader = None
self.imageIndex = -1
self.imageCount = 1
self.imageData = b""
new_h -= 4
new_w -= 4
if new_h < 0:
new_h = 0;
if new_w < 0:
new_w = 0;
def clear(self) -> None:
self.reset_widget()
self.update_content()
#print "ATB setDisplayPixmap deltas", delta_w , delta_h
#print "ATB self.frame", frame_w, frame_h
#print "ATB new size", new_w, new_h
# scale the pixmap to fit in the frame
scaled_pixmap = self.current_pixmap.scaled(new_w, new_h, Qt.KeepAspectRatio)
self.lblImage.setPixmap( scaled_pixmap )
# move and resize the label to be centered in the frame
img_w = scaled_pixmap.width()
img_h = scaled_pixmap.height()
self.lblImage.resize( img_w, img_h )
self.lblImage.move( (frame_w - img_w)/2, (frame_h - img_h)/2 )
def showPopup( self ):
self.popup = ImagePopup(self, self.current_pixmap)
def increment_image(self) -> None:
self.imageIndex += 1
if self.imageIndex == self.imageCount:
self.imageIndex = 0
self.update_content()
def decrement_image(self) -> None:
self.imageIndex -= 1
if self.imageIndex == -1:
self.imageIndex = self.imageCount - 1
self.update_content()
def set_archive(self, ca: ComicArchive, page: int = 0) -> None:
if self.mode == CoverImageWidget.ArchiveMode:
self.reset_widget()
self.comic_archive = ca
self.imageIndex = page
self.imageCount = ca.get_number_of_pages()
self.update_content()
def set_url(self, url: str) -> None:
if self.mode == CoverImageWidget.URLMode:
self.reset_widget()
self.update_content()
self.url_list = [url]
self.imageIndex = 0
self.imageCount = 1
self.update_content()
def set_issue_details(self, issue_id: str, url_list: list[str]) -> None:
if self.mode == CoverImageWidget.AltCoverMode:
self.reset_widget()
self.update_content()
self.issue_id = issue_id
self.set_url_list(url_list)
def set_image_data(self, image_data: bytes) -> None:
if self.mode == CoverImageWidget.DataMode:
self.reset_widget()
if image_data:
self.imageIndex = 0
self.imageData = image_data
else:
self.imageIndex = -1
self.update_content()
def set_url_list(self, url_list: list[str]) -> None:
self.url_list = url_list
self.imageIndex = 0
self.imageCount = len(self.url_list)
self.update_content()
self.update_controls()
def set_page(self, pagenum: int) -> None:
if self.mode == CoverImageWidget.ArchiveMode:
self.imageIndex = pagenum
self.update_content()
def update_content(self) -> None:
self.update_image()
self.update_controls()
def update_image(self) -> None:
if self.imageIndex == -1:
self.load_default()
elif self.mode in [CoverImageWidget.AltCoverMode, CoverImageWidget.URLMode]:
self.load_url()
elif self.mode == CoverImageWidget.DataMode:
self.cover_remote_fetch_complete("", self.imageData)
else:
self.load_page()
def update_controls(self) -> None:
if not self.showControls or self.mode == CoverImageWidget.DataMode:
self.btnLeft.hide()
self.btnRight.hide()
self.label.hide()
return
if self.imageIndex == -1 or self.imageCount == 1:
self.btnLeft.setEnabled(False)
self.btnRight.setEnabled(False)
self.btnLeft.hide()
self.btnRight.hide()
else:
self.btnLeft.setEnabled(True)
self.btnRight.setEnabled(True)
self.btnLeft.show()
self.btnRight.show()
if self.imageIndex == -1 or self.imageCount == 1:
self.label.setText("")
elif self.mode == CoverImageWidget.AltCoverMode:
self.label.setText(f"Cover {self.imageIndex + 1} (of {self.imageCount})")
else:
self.label.setText(f"Page {self.imageIndex + 1} (of {self.imageCount})")
def load_url(self) -> None:
assert isinstance(self.cache_folder, pathlib.Path)
self.load_default()
self.cover_fetcher = ImageFetcher(self.cache_folder)
ImageFetcher.image_fetch_complete = self.image_fetch_complete.emit
if data := self.cover_fetcher.fetch(self.url_list[self.imageIndex]):
self.cover_remote_fetch_complete(self.url_list[self.imageIndex], data)
# called when the image is done loading from internet
def cover_remote_fetch_complete(self, url: str, image_data: bytes) -> None:
if url and url not in self.url_list:
return
img = get_qimage_from_data(image_data)
self.current_pixmap = QtGui.QPixmap.fromImage(img)
self.set_display_pixmap()
def load_page(self) -> None:
if self.comic_archive is not None:
if self.page_loader is not None:
self.page_loader.abandoned = True
self.page_loader = PageLoader(self.comic_archive, self.imageIndex)
self.page_loader.loadComplete.connect(self.page_load_complete)
self.page_loader.start()
def page_load_complete(self, image_data: bytes) -> None:
img = get_qimage_from_data(image_data)
self.current_pixmap = QtGui.QPixmap.fromImage(img)
self.set_display_pixmap()
self.page_loader = None
def load_default(self) -> None:
self.current_pixmap = QtGui.QPixmap(str(graphics_path / "nocover.png"))
self.set_display_pixmap()
def resizeEvent(self, resize_event: QtGui.QResizeEvent) -> None:
if self.current_pixmap is not None:
self.set_display_pixmap()
def set_display_pixmap(self) -> None:
"""The deltas let us know what the new width and height of the label will be"""
new_h = self.frame.height()
new_w = self.frame.width()
frame_w = self.frame.width()
frame_h = self.frame.height()
new_h -= 4
new_w -= 4
new_h = max(new_h, 0)
new_w = max(new_w, 0)
# scale the pixmap to fit in the frame
scaled_pixmap = self.current_pixmap.scaled(
new_w, new_h, QtCore.Qt.AspectRatioMode.KeepAspectRatio, QtCore.Qt.SmoothTransformation
)
self.lblImage.setPixmap(scaled_pixmap)
# move and resize the label to be centered in the frame
img_w = scaled_pixmap.width()
img_h = scaled_pixmap.height()
self.lblImage.resize(img_w, img_h)
self.lblImage.move(int((frame_w - img_w) / 2), int((frame_h - img_h) / 2))
def show_popup(self) -> None:
ImagePopup(self, self.current_pixmap)
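The clickable() helper defined near the top of this file turns a double-click event filter into a connectable signal. A minimal sketch of it in use, assuming it runs in (or imports from) this module; the label text and handler are illustrative:

import sys

from PyQt5 import QtWidgets

app = QtWidgets.QApplication(sys.argv)
label = QtWidgets.QLabel("double-click me")
clickable(label).connect(lambda: print("label double-clicked"))
label.show()
app.exec()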


@ -1,99 +1,98 @@
"""
A PyQT4 dialog to edit credits
"""
"""A PyQT4 dialog to edit credits"""
"""
Copyright 2012 Anthony Beville
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
import logging
from typing import Any
http://www.apache.org/licenses/LICENSE-2.0
from PyQt5 import QtWidgets, uic
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from comictaggerlib.ui import ui_path
logger = logging.getLogger(__name__)
from PyQt4 import QtCore, QtGui, uic
from settings import ComicTaggerSettings
import os
class CreditEditorWindow(QtWidgets.QDialog):
ModeEdit = 0
ModeNew = 1
class CreditEditorWindow(QtGui.QDialog):
ModeEdit = 0
ModeNew = 1
def __init__(self, parent, mode, role, name, primary ):
super(CreditEditorWindow, self).__init__(parent)
uic.loadUi(ComicTaggerSettings.getUIFile('crediteditorwindow.ui' ), self)
self.mode = mode
if self.mode == self.ModeEdit:
self.setWindowTitle("Edit Credit")
else:
self.setWindowTitle("New Credit")
def __init__(self, parent: QtWidgets.QWidget, mode: int, role: str, name: str, primary: bool) -> None:
super().__init__(parent)
# Add the entries to the role combobox
self.cbRole.addItem( "" )
self.cbRole.addItem( "Writer" )
self.cbRole.addItem( "Artist" )
self.cbRole.addItem( "Penciller" )
self.cbRole.addItem( "Inker" )
self.cbRole.addItem( "Colorist" )
self.cbRole.addItem( "Letterer" )
self.cbRole.addItem( "Cover Artist" )
self.cbRole.addItem( "Editor" )
self.cbRole.addItem( "Other" )
self.cbRole.addItem( "Plotter" )
self.cbRole.addItem( "Scripter" )
self.leName.setText( name )
if role is not None and role != "":
i = self.cbRole.findText( role )
if i == -1:
self.cbRole.setEditText( role )
else:
self.cbRole.setCurrentIndex( i )
with (ui_path / "crediteditorwindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
if primary:
self.cbPrimary.setCheckState( QtCore.Qt.Checked )
self.cbRole.currentIndexChanged.connect(self.roleChanged)
self.cbRole.editTextChanged.connect(self.roleChanged)
self.updatePrimaryButton()
self.mode = mode
def updatePrimaryButton( self ):
enabled =self.currentRoleCanBePrimary()
self.cbPrimary.setEnabled( enabled )
if self.mode == self.ModeEdit:
self.setWindowTitle("Edit Credit")
else:
self.setWindowTitle("New Credit")
def currentRoleCanBePrimary( self ):
role = self.cbRole.currentText()
if str(role).lower() == "writer" or str(role).lower() == "artist":
return True
else:
return False
def roleChanged( self, s ):
self.updatePrimaryButton()
def getCredits( self ):
primary = self.currentRoleCanBePrimary() and self.cbPrimary.isChecked()
return self.cbRole.currentText(), self.leName.text(), primary
# Add the entries to the role combobox
self.cbRole.addItem("")
self.cbRole.addItem("Writer")
self.cbRole.addItem("Artist")
self.cbRole.addItem("Penciller")
self.cbRole.addItem("Inker")
self.cbRole.addItem("Colorist")
self.cbRole.addItem("Letterer")
self.cbRole.addItem("Cover Artist")
self.cbRole.addItem("Editor")
self.cbRole.addItem("Other")
self.cbRole.addItem("Plotter")
self.cbRole.addItem("Scripter")
self.leName.setText(name)
def accept( self ):
if self.cbRole.currentText() == "" or self.leName.text() == "":
QtGui.QMessageBox.warning(self, self.tr("Whoops"), self.tr("You need to enter both role and name for a credit."))
else:
QtGui.QDialog.accept(self)
if role is not None and role != "":
i = self.cbRole.findText(role)
if i == -1:
self.cbRole.setEditText(role)
else:
self.cbRole.setCurrentIndex(i)
self.cbPrimary.setChecked(primary)
self.cbRole.currentIndexChanged.connect(self.role_changed)
self.cbRole.editTextChanged.connect(self.role_changed)
self.update_primary_button()
def update_primary_button(self) -> None:
enabled = self.current_role_can_be_primary()
self.cbPrimary.setEnabled(enabled)
def current_role_can_be_primary(self) -> bool:
role = self.cbRole.currentText()
if role.casefold() in ("artist", "writer"):
return True
return False
def role_changed(self, s: Any) -> None:
self.update_primary_button()
def get_credits(self) -> tuple[str, str, bool]:
primary = self.current_role_can_be_primary() and self.cbPrimary.isChecked()
return self.cbRole.currentText(), self.leName.text(), primary
def accept(self) -> None:
if self.cbRole.currentText() == "" or self.leName.text() == "":
QtWidgets.QMessageBox.warning(self, "Whoops", "You need to enter both role and name for a credit.")
else:
QtWidgets.QDialog.accept(self)
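A hypothetical invocation of the dialog above; main_window stands in for whatever parent widget the caller already has, and the role and name values are illustrative:

dlg = CreditEditorWindow(main_window, CreditEditorWindow.ModeNew, "Writer", "Jack Cole", True)
if dlg.exec():
    role, name, primary = dlg.get_credits()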


@ -0,0 +1,112 @@
from __future__ import annotations
import json
import logging
import pathlib
from typing import Any
import settngs
from comictaggerlib.ctsettings.commandline import (
initial_commandline_parser,
register_commandline_settings,
validate_commandline_settings,
)
from comictaggerlib.ctsettings.file import register_file_settings, validate_file_settings
from comictaggerlib.ctsettings.plugin import group_for_plugin, register_plugin_settings, validate_plugin_settings
from comictaggerlib.ctsettings.settngs_namespace import SettngsNS as ct_ns
from comictaggerlib.ctsettings.types import ComicTaggerPaths
from comictalker import ComicTalker
logger = logging.getLogger(__name__)
talkers: dict[str, ComicTalker] = {}
__all__ = [
"initial_commandline_parser",
"register_commandline_settings",
"register_file_settings",
"register_plugin_settings",
"validate_commandline_settings",
"validate_file_settings",
"validate_plugin_settings",
"ComicTaggerPaths",
"ct_ns",
"group_for_plugin",
]
class SettingsEncoder(json.JSONEncoder):
def default(self, obj: Any) -> Any:
if isinstance(obj, pathlib.Path):
return str(obj)
# Let the base class default method raise the TypeError
return json.JSONEncoder.default(self, obj)
def validate_types(config: settngs.Config[settngs.Values]) -> settngs.Config[settngs.Values]:
# Go through each setting
for group in config.definitions.values():
for setting in group.v.values():
# Get the value and if it is the default
value, default = settngs.get_option(config.values, setting)
if not default:
if setting.type is not None:
# If it is not the default and the type attribute is not None
# use it to convert the loaded string into the expected value
if isinstance(value, str):
config.values[setting.group][setting.dest] = setting.type(value)
return config
def parse_config(
manager: settngs.Manager,
config_path: pathlib.Path,
args: list[str] | None = None,
) -> tuple[settngs.Config[settngs.Values], bool]:
"""
Parse options from a json file and pass the resulting Config object to parse_cmdline.
Args:
manager: settngs Manager object
config_path: A `pathlib.Path` object
args: Passed to argparse.ArgumentParser.parse_args
"""
file_options, success = settngs.parse_file(manager.definitions, config_path)
file_options = validate_types(file_options)
cmdline_options = settngs.parse_cmdline(
manager.definitions,
manager.description,
manager.epilog,
args,
file_options,
)
final_options = settngs.normalize_config(cmdline_options, file=True, cmdline=True)
return final_options, success
def save_file(
config: settngs.Config[settngs.T],
filename: pathlib.Path,
) -> bool:
"""
Helper function to save options from a json dictionary to a file
Args:
config: The options to save to a json dictionary
filename: A pathlib.Path object to save the json dictionary to
"""
file_options = settngs.clean_config(config, file=True)
try:
if not filename.exists():
filename.parent.mkdir(exist_ok=True, parents=True)
filename.touch()
json_str = json.dumps(file_options, cls=SettingsEncoder, indent=2)
filename.write_text(json_str + "\n", encoding="utf-8")
except Exception:
logger.exception("Failed to save config file: %s", filename)
return False
return True
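parse_config and save_file are intended to be used as a pair around a populated settngs.Manager. A minimal sketch follows; the settings path and the use of register_file_settings on its own are illustrative choices, not how comictaggerlib actually wires this up:

import pathlib

import settngs

manager = settngs.Manager()
register_file_settings(manager)  # one of the registration helpers imported above

settings_json = pathlib.Path.home() / ".ComicTagger" / "settings.json"
config, loaded = parse_config(manager, settings_json)
if not loaded:  # missing or unparsable file: persist the current values
    save_file(config, settings_json)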


@ -0,0 +1,329 @@
"""CLI settings for ComicTagger"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import argparse
import logging
import os
import platform
import shlex
import subprocess
import settngs
from comicapi import utils
from comicapi.comicarchive import metadata_styles
from comicapi.genericmetadata import GenericMetadata
from comictaggerlib import ctversion
from comictaggerlib.ctsettings.settngs_namespace import SettngsNS as ct_ns
from comictaggerlib.ctsettings.types import (
ComicTaggerPaths,
metadata_type,
metadata_type_single,
parse_metadata_from_string,
)
from comictaggerlib.resulttypes import Action
logger = logging.getLogger(__name__)
def initial_commandline_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(add_help=False)
# Ensure this stays up to date with register_runtime
parser.add_argument(
"--config",
help="Config directory defaults to ~/.ComicTagger\non Linux/Mac and %%APPDATA%% on Windows\n",
type=ComicTaggerPaths,
default=ComicTaggerPaths(),
)
parser.add_argument("-v", "--verbose", action="count", default=0, help="Be noisy when doing what it does.")
return parser
def register_runtime(parser: settngs.Manager) -> None:
parser.add_setting(
"--config",
help="Config directory defaults to ~/.Config/ComicTagger\non Linux, ~/Library/Application Support/ComicTagger on Mac and %%APPDATA%%\\ComicTagger on Windows\n",
type=ComicTaggerPaths,
default=ComicTaggerPaths(),
file=False,
)
parser.add_setting(
"-v",
"--verbose",
action="count",
default=0,
help="Be noisy when doing what it does.",
file=False,
)
parser.add_setting(
"--abort-on-conflict",
action="store_true",
help="""Don't export to zip if intended new filename\nexists (otherwise, creates a new unique filename).\n\n""",
file=False,
)
parser.add_setting(
"--delete-original",
action="store_true",
help="""Delete original archive after successful\nexport to Zip. (only relevant for -e)""",
file=False,
)
parser.add_setting(
"-f",
"--parse-filename",
"--parsefilename",
action="store_true",
help="""Parse the filename to get some info,\nspecifically series name, issue number,\nvolume, and publication year.\n\n""",
file=False,
)
parser.add_setting(
"--id",
dest="issue_id",
type=str,
help="""Use the issue ID when searching online.\nOverrides all other metadata.\n\n""",
file=False,
)
parser.add_setting(
"-o",
"--online",
action="store_true",
help="""Search online and attempt to identify file\nusing existing metadata and images in archive.\nMay be used in conjunction with -f and -m.\n\n""",
file=False,
)
parser.add_setting(
"-m",
"--metadata",
default=GenericMetadata(),
type=parse_metadata_from_string,
help="""Explicitly define some tags to be used in YAML syntax. Use @file.yaml to read from a file. e.g.:\n"series: Plastic Man, publisher: Quality Comics, year: "\n"series: 'Kickers, Inc.', issue: '1', year: 1986"\nIf you want to erase a tag leave the value blank.\nSome names that can be used: series, issue, issue_count, year,\npublisher, title\n\n""",
file=False,
)
parser.add_setting(
"-i",
"--interactive",
action="store_true",
help="""Interactively query the user when there are\nmultiple matches for an online search. Disabled json output\n\n""",
file=False,
)
parser.add_setting(
"--abort",
dest="abort_on_low_confidence",
action=argparse.BooleanOptionalAction,
default=True,
help="""Abort save operation when online match\nis of low confidence.\n\n""",
file=False,
)
parser.add_setting(
"--summary",
default=True,
action=argparse.BooleanOptionalAction,
help="Show the summary after a save operation.\n\n",
file=False,
)
parser.add_setting(
"--raw",
action="store_true",
help="""With -p, will print out the raw tag block(s)\nfrom the file.\n""",
file=False,
)
parser.add_setting(
"-R",
"--recursive",
action="store_true",
help="Recursively include files in sub-folders.",
file=False,
)
parser.add_setting(
"-n",
"--dryrun",
action="store_true",
help="Don't actually modify file (only relevant for -d, -s, or -r).\n\n",
file=False,
)
parser.add_setting("--darkmode", action="store_true", help="Windows only. Force a dark pallet", file=False)
parser.add_setting("-g", "--glob", action="store_true", help="Windows only. Enable globbing", file=False)
parser.add_setting("--quiet", "-q", action="store_true", help="Don't say much (for print mode).", file=False)
parser.add_setting(
"--json", "-j", action="store_true", help="Output json on stdout. Ignored in interactive mode.", file=False
)
parser.add_setting(
"-t",
"--type",
metavar=f"{{{','.join(metadata_styles).upper()}}}",
default=[],
type=metadata_type,
help="""Specify TYPE as either CR, CBL or COMET\n(as either ComicRack, ComicBookLover,\nor CoMet style tags, respectively).\nUse commas for multiple types.\nFor searching the metadata will use the first listed:\neg '-t cbl,cr' with no CBL tags, CR will be used if they exist\n\n""",
file=False,
)
parser.add_setting(
"--overwrite",
action=argparse.BooleanOptionalAction,
default=True,
help="""Apply metadata to already tagged archives, otherwise skips archives with existing metadata (relevant for -s or -c).""",
file=False,
)
parser.add_setting("--no-gui", action="store_true", help="Do not open the GUI, force the commandline", file=False)
parser.add_setting("files", nargs="*", file=False)
def register_commands(parser: settngs.Manager) -> None:
parser.add_setting("--version", action="store_true", help="Display version.", file=False)
parser.add_setting(
"-p",
"--print",
dest="command",
action="store_const",
const=Action.print,
help="""Print out tag info from file. Specify type\n(via -t) to get only info of that tag type.\n\n""",
file=False,
)
parser.add_setting(
"-d",
"--delete",
dest="command",
action="store_const",
const=Action.delete,
help="Deletes the tag block of specified type (via -t).\n",
file=False,
)
parser.add_setting(
"-c",
"--copy",
type=metadata_type_single,
metavar=f"{{{','.join(metadata_styles).upper()}}}",
help="Copy the specified source tag block to\ndestination style specified via -t\n(potentially lossy operation).\n\n",
file=False,
)
parser.add_setting(
"-s",
"--save",
dest="command",
action="store_const",
const=Action.save,
help="Save out tags as specified type (via -t).\nMust specify also at least -o, -f, or -m.\n\n",
file=False,
)
parser.add_setting(
"-r",
"--rename",
dest="command",
action="store_const",
const=Action.rename,
help="Rename the file based on specified tag style.",
file=False,
)
parser.add_setting(
"-e",
"--export-to-zip",
dest="command",
action="store_const",
const=Action.export,
help="Export RAR archive to Zip format.",
file=False,
)
parser.add_setting(
"--only-save-config",
dest="command",
action="store_const",
const=Action.save_config,
help="Only save the configuration (eg, Comic Vine API key) and quit.",
file=False,
)
parser.add_setting(
"--list-plugins",
dest="command",
action="store_const",
const=Action.list_plugins,
help="List the available plugins.\n\n",
file=False,
)
def register_commandline_settings(parser: settngs.Manager) -> None:
parser.add_group("Commands", register_commands, True)
parser.add_persistent_group("Runtime Options", register_runtime)
def validate_commandline_settings(config: settngs.Config[ct_ns], parser: settngs.Manager) -> settngs.Config[ct_ns]:
if config[0].Commands__version:
parser.exit(
status=1,
message=f"ComicTagger {ctversion.version}: Copyright (c) 2012-2022 ComicTagger Team\n"
+ "Distributed under Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0)\n",
)
config[0].Runtime_Options__no_gui = any(
(config[0].Commands__command, config[0].Runtime_Options__no_gui, config[0].Commands__copy)
)
if platform.system() == "Windows" and config[0].Runtime_Options__glob:
# no globbing on windows shell, so do it for them
import glob
globs = config[0].Runtime_Options__files
config[0].Runtime_Options__files = []
for item in globs:
config[0].Runtime_Options__files.extend(glob.glob(item))
if config[0].Runtime_Options__json and config[0].Runtime_Options__interactive:
config[0].Runtime_Options__json = False
if (
config[0].Commands__command not in (Action.save_config, Action.list_plugins)
and config[0].Runtime_Options__no_gui
and not config[0].Runtime_Options__files
):
parser.exit(message="Command requires at least one filename!\n", status=1)
if config[0].Commands__command == Action.delete and not config[0].Runtime_Options__type:
parser.exit(message="Please specify the type to delete with -t\n", status=1)
if config[0].Commands__command == Action.save and not config[0].Runtime_Options__type:
parser.exit(message="Please specify the type to save with -t\n", status=1)
if config[0].Commands__copy:
config[0].Commands__command = Action.copy
if not config[0].Runtime_Options__type:
parser.exit(message="Please specify the type to copy to with -t\n", status=1)
if config[0].Runtime_Options__recursive:
config[0].Runtime_Options__files = utils.get_recursive_filelist(config[0].Runtime_Options__files)
# take a crack at finding rar exe if it's not in the path
if not utils.which("rar"):
if platform.system() == "Windows":
letters = ["C"]
letters.extend({f"{d}" for d in "ABCDEFGHIJKLMNOPQRSTUVWXYZ" if os.path.exists(f"{d}:\\")} - {"C"})
for letter in letters:
# look in some likely places for Windows machines
utils.add_to_path(rf"{letters}:\Program Files\WinRAR")
utils.add_to_path(rf"{letters}:\Program Files (x86)\WinRAR")
else:
if platform.system() == "Darwin":
result = subprocess.run(("/usr/libexec/path_helper", "-s"), capture_output=True)
for path in reversed(
shlex.split(result.stdout.decode("utf-8", errors="ignore"))[0]
.partition("=")[2]
.rstrip(";")
.split(os.pathsep)
):
utils.add_to_path(path)
utils.add_to_path("/opt/homebrew/bin")
return config
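# Illustrative aside, not part of this module: what the path_helper parsing above
# extracts. The sample string mimics `/usr/libexec/path_helper -s` output; os and
# shlex are already imported by this module.
_sample = 'PATH="/usr/local/bin:/usr/bin:/bin"; export PATH;'
_dirs = shlex.split(_sample)[0].partition("=")[2].rstrip(";").split(os.pathsep)
# On a POSIX system: _dirs == ['/usr/local/bin', '/usr/bin', '/bin']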


@@ -0,0 +1,304 @@
from __future__ import annotations
import argparse
import uuid
import settngs
from comicapi import utils
from comictaggerlib.ctsettings.settngs_namespace import SettngsNS as ct_ns
from comictaggerlib.defaults import DEFAULT_REPLACEMENTS, Replacement, Replacements
def general(parser: settngs.Manager) -> None:
# General Settings
parser.add_setting("check_for_new_version", default=False, cmdline=False)
parser.add_setting(
"--disable-cr",
default=False,
action=argparse.BooleanOptionalAction,
help="Disable the ComicRack metadata type",
)
parser.add_setting("use_short_metadata_names", default=False, action=argparse.BooleanOptionalAction, cmdline=False)
parser.add_setting(
"--prompt-on-save",
default=True,
action=argparse.BooleanOptionalAction,
help="Prompts the user to confirm saving tags when using the GUI.",
)
def internal(parser: settngs.Manager) -> None:
# automatic settings
parser.add_setting("install_id", default=uuid.uuid4().hex, cmdline=False)
parser.add_setting("save_data_style", default=["cbi"], cmdline=False)
parser.add_setting("load_data_style", default="cbi", cmdline=False)
parser.add_setting("last_opened_folder", default="", cmdline=False)
parser.add_setting("window_width", default=0, cmdline=False)
parser.add_setting("window_height", default=0, cmdline=False)
parser.add_setting("window_x", default=0, cmdline=False)
parser.add_setting("window_y", default=0, cmdline=False)
parser.add_setting("form_width", default=-1, cmdline=False)
parser.add_setting("list_width", default=-1, cmdline=False)
parser.add_setting("sort_column", default=-1, cmdline=False)
parser.add_setting("sort_direction", default=0, cmdline=False)
def identifier(parser: settngs.Manager) -> None:
# identifier settings
parser.add_setting("--series-match-identify-thresh", default=91, type=int, help="")
parser.add_setting(
"-b",
"--border-crop-percent",
default=10,
type=int,
help="ComicTagger will automatically add an additional cover that has any black borders cropped. If the difference in height is less than %(default)s%% the cover will not be cropped.",
)
parser.add_setting(
"--publisher-filter",
default=["Panini Comics", "Abril", "Planeta DeAgostini", "Editorial Televisa", "Dino Comics"],
action="extend",
nargs="+",
help="When enabled, filters the listed publishers from all search results. Ending a publisher with a '-' removes a publisher from this list",
)
parser.add_setting("--series-match-search-thresh", default=90, type=int)
parser.add_setting(
"--clear-metadata",
default=False,
help="Clears all existing metadata during import, default is to merge metadata.\nMay be used in conjunction with -o, -f and -m.\n\n",
action=argparse.BooleanOptionalAction,
)
parser.add_setting(
"-a",
"--auto-imprint",
action=argparse.BooleanOptionalAction,
default=False,
help="Enables the auto imprint functionality.\ne.g. if the publisher is set to 'vertigo' it\nwill be updated to 'DC Comics' and the imprint\nproperty will be set to 'Vertigo'.\n\n",
)
parser.add_setting(
"--sort-series-by-year", default=True, action=argparse.BooleanOptionalAction, help="Sorts series by year"
)
parser.add_setting(
"--exact-series-matches-first",
default=True,
action=argparse.BooleanOptionalAction,
help="Puts series that are an exact match at the top of the list",
)
parser.add_setting(
"--always-use-publisher-filter",
default=False,
action=argparse.BooleanOptionalAction,
help="Enables the publisher filter",
)
def dialog(parser: settngs.Manager) -> None:
# Show/ask dialog flags
parser.add_setting("show_disclaimer", default=True, cmdline=False)
parser.add_setting("dont_notify_about_this_version", default="", cmdline=False)
parser.add_setting("ask_about_usage_stats", default=True, cmdline=False)
def filename(parser: settngs.Manager) -> None:
# filename parsing settings
parser.add_setting(
"--filename-parser",
default=utils.Parser.ORIGINAL,
metavar=f"{{{','.join(utils.Parser)}}}",
type=utils.Parser,
choices=[p.value for p in utils.Parser],
help="Select the filename parser, defaults to original",
)
parser.add_setting(
"--remove-c2c",
default=False,
action=argparse.BooleanOptionalAction,
help="Removes c2c from filenames. Requires --complicated-parser",
)
parser.add_setting(
"--remove-fcbd",
default=False,
action=argparse.BooleanOptionalAction,
help="Removes FCBD/free comic book day from filenames. Requires --complicated-parser",
)
parser.add_setting(
"--remove-publisher",
default=False,
action=argparse.BooleanOptionalAction,
help="Attempts to remove publisher names from filenames, currently limited to Marvel and DC. Requires --complicated-parser",
)
parser.add_setting(
"--split-words",
action="store_true",
help="""Splits words before parsing the filename.\ne.g. 'judgedredd' to 'judge dredd'\n\n""",
file=False,
)
parser.add_setting(
"--protofolius-issue-number-scheme",
default=False,
action=argparse.BooleanOptionalAction,
help="Use an issue number scheme devised by protofolius for encoding format informatino as a letter in front of an issue number. Implies --allow-issue-start-with-letter. Requires --complicated-parser",
)
parser.add_setting(
"--allow-issue-start-with-letter",
default=False,
action=argparse.BooleanOptionalAction,
help="Allows an issue number to start with a single letter (e.g. '#X01'). Requires --complicated-parser",
)
def talker(parser: settngs.Manager) -> None:
# General settings for talkers
parser.add_setting(
"--source",
default="comicvine",
help="Use a specified source by source ID (use --list-plugins to list all sources)",
)
parser.add_setting(
"--remove-html-tables",
default=False,
action=argparse.BooleanOptionalAction,
display_name="Remove HTML tables",
help="Removes html tables instead of converting them to text",
)
def cbl(parser: settngs.Manager) -> None:
# CBL Transform settings
parser.add_setting("--assume-lone-credit-is-primary", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-characters-to-tags", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-teams-to-tags", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-locations-to-tags", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-storyarcs-to-tags", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-notes-to-comments", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--copy-weblink-to-comments", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--apply-transform-on-import", default=False, action=argparse.BooleanOptionalAction)
parser.add_setting("--apply-transform-on-bulk-operation", default=False, action=argparse.BooleanOptionalAction)
def rename(parser: settngs.Manager) -> None:
# Rename settings
parser.add_setting("--template", default="{series} #{issue} ({year})", help="The teplate to use when renaming")
parser.add_setting(
"--issue-number-padding",
default=3,
type=int,
help="The minimum number of digits to use for the issue number when renaming",
)
parser.add_setting(
"--use-smart-string-cleanup",
default=True,
action=argparse.BooleanOptionalAction,
help="Attempts to intelligently cleanup whitespace when renaming",
)
parser.add_setting(
"--auto-extension",
default=True,
action=argparse.BooleanOptionalAction,
help="Automatically sets the extension based on the archive type e.g. cbr for rar, cbz for zip",
)
parser.add_setting("--dir", default="", help="The directory to move renamed files to")
parser.add_setting(
"--move",
dest="move_to_dir",
default=False,
action=argparse.BooleanOptionalAction,
help="Enables moving renamed files to a separate directory",
)
parser.add_setting(
"--strict",
default=False,
action=argparse.BooleanOptionalAction,
help="Ensures that filenames are valid for all OSs",
)
parser.add_setting("replacements", default=DEFAULT_REPLACEMENTS, cmdline=False)
def autotag(parser: settngs.Manager) -> None:
# Auto-tag stickies
parser.add_setting(
"--save-on-low-confidence",
default=False,
action=argparse.BooleanOptionalAction,
help="Automatically save metadata on low-confidence matches",
)
parser.add_setting(
"--dont-use-year-when-identifying",
default=False,
action=argparse.BooleanOptionalAction,
help="Ignore the year metadata attribute when identifying a comic",
)
parser.add_setting(
"-1",
"--assume-issue-one",
action=argparse.BooleanOptionalAction,
help="Assume issue number is 1 if not found (relevant for -s).\n\n",
default=False,
)
parser.add_setting(
"--ignore-leading-numbers-in-filename",
default=False,
action=argparse.BooleanOptionalAction,
help="When searching ignore leading numbers in the filename",
)
parser.add_setting("remove_archive_after_successful_match", default=False, cmdline=False)
def parse_filter(config: settngs.Config[ct_ns]) -> settngs.Config[ct_ns]:
new_filter = []
remove = []
for x in config[0].Issue_Identifier__publisher_filter:
x = x.strip()
if x: # ignore empty arguments
if x[-1] == "-": # this publisher needs to be removed. We remove after all publishers have been enumerated
remove.append(x.strip("-"))
else:
if x not in new_filter:
new_filter.append(x)
for x in remove: # remove publishers
if x in new_filter:
new_filter.remove(x)
config[0].Issue_Identifier__publisher_filter = new_filter
return config
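# Illustrative helper, not part of this module, mirroring the trailing '-'
# convention handled by parse_filter() above: an entry ending in '-' removes a
# previously listed publisher instead of adding one.
def _apply_publisher_filter(entries: list[str]) -> list[str]:
    keep: list[str] = []
    remove: list[str] = []
    for raw in entries:
        raw = raw.strip()
        if not raw:
            continue  # ignore empty arguments
        if raw.endswith("-"):
            remove.append(raw.strip("-"))
        elif raw not in keep:
            keep.append(raw)
    return [p for p in keep if p not in remove]
# _apply_publisher_filter(["Panini Comics", "Abril", "Panini Comics-"]) -> ["Abril"]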
def migrate_settings(config: settngs.Config[ct_ns]) -> settngs.Config[ct_ns]:
original_types = ("cbi", "cr", "comet")
save_style = config[0].internal__save_data_style
if not isinstance(save_style, list):
if isinstance(save_style, int) and save_style in (0, 1, 2):
config[0].internal__save_data_style = [original_types[save_style]]
elif isinstance(save_style, str):
config[0].internal__save_data_style = [save_style]
else:
config[0].internal__save_data_style = ["cbi"]
return config
def validate_file_settings(config: settngs.Config[ct_ns]) -> settngs.Config[ct_ns]:
config = parse_filter(config)
config = migrate_settings(config)
if config[0].Filename_Parsing__protofolius_issue_number_scheme:
config[0].Filename_Parsing__allow_issue_start_with_letter = True
config[0].File_Rename__replacements = Replacements(
[Replacement(x[0], x[1], x[2]) for x in config[0].File_Rename__replacements[0]],
[Replacement(x[0], x[1], x[2]) for x in config[0].File_Rename__replacements[1]],
)
return config
def register_file_settings(parser: settngs.Manager) -> None:
parser.add_group("internal", internal, False)
parser.add_group("Issue Identifier", identifier, False)
parser.add_group("Filename Parsing", filename, False)
parser.add_group("Sources", talker, False)
parser.add_group("Comic Book Lover", cbl, False)
parser.add_group("File Rename", rename, False)
parser.add_group("Auto-Tag", autotag, False)
parser.add_group("General", general, False)
parser.add_group("Dialog Flags", dialog, False)


@@ -0,0 +1,107 @@
from __future__ import annotations
import logging
import os
from typing import cast
import settngs
import comicapi.comicarchive
import comicapi.utils
import comictaggerlib.ctsettings
from comicapi.comicarchive import Archiver
from comictaggerlib.ctsettings.settngs_namespace import SettngsNS as ct_ns
from comictalker.comictalker import ComicTalker
logger = logging.getLogger("comictagger")
def group_for_plugin(plugin: Archiver | ComicTalker | type[Archiver]) -> str:
if isinstance(plugin, ComicTalker):
return f"Source {plugin.id}"
if isinstance(plugin, Archiver) or plugin == Archiver:
return "Archive"
raise NotImplementedError(f"Invalid plugin received: {plugin=}")
def archiver(manager: settngs.Manager) -> None:
for archiver in comicapi.comicarchive.archivers:
if archiver.exe:
# add_setting will overwrite anything with the same name.
# So we only end up with one option even if multiple archivers use the same exe.
manager.add_setting(
f"--{settngs.sanitize_name(archiver.exe)}",
default=archiver.exe,
help="Path to the %(default)s executable\n\n",
)
def register_talker_settings(manager: settngs.Manager, talkers: dict[str, ComicTalker]) -> None:
for talker in talkers.values():
def api_options(manager: settngs.Manager) -> None:
# The default needs to be unset or None.
# This allows this setting to be unset with the empty string, allowing the default to change
manager.add_setting(
f"--{talker.id}-key",
display_name="API Key",
help=f"API Key for {talker.name} (default: {talker.default_api_key})",
)
manager.add_setting(
f"--{talker.id}-url",
display_name="URL",
help=f"URL for {talker.name} (default: {talker.default_api_url})",
)
try:
manager.add_persistent_group(group_for_plugin(talker), api_options, False)
if hasattr(talker, "register_settings"):
manager.add_persistent_group(group_for_plugin(talker), talker.register_settings, False)
except Exception:
logger.exception("Failed to register settings for %s", talker.id)
def validate_archive_settings(config: settngs.Config[ct_ns]) -> settngs.Config[ct_ns]:
cfg = settngs.normalize_config(config, file=True, cmdline=True, default=False)
for archiver in comicapi.comicarchive.archivers:
group = group_for_plugin(archiver())
exe_name = settngs.sanitize_name(archiver.exe)
if not exe_name:
continue
if exe_name in cfg[0][group] and cfg[0][group][exe_name]:
path = cfg[0][group][exe_name]
name = os.path.basename(path)
# If the path is not the basename then this is a relative or absolute path.
# Ensure it is absolute
if path != name:
path = os.path.abspath(path)
archiver.exe = path
return config
def validate_talker_settings(config: settngs.Config[ct_ns], talkers: dict[str, ComicTalker]) -> settngs.Config[ct_ns]:
# Apply talker settings from config file
cfg = settngs.normalize_config(config, True, True)
for talker in list(talkers.values()):
try:
cfg[0][group_for_plugin(talker)] = talker.parse_settings(cfg[0][group_for_plugin(talker)])
except Exception as e:
# Remove talker as we failed to apply the settings
del comictaggerlib.ctsettings.talkers[talker.id]
logger.exception("Failed to initialize talker settings: %s", e)
return cast(settngs.Config[ct_ns], settngs.get_namespace(cfg, file=True, cmdline=True))
def validate_plugin_settings(config: settngs.Config[ct_ns], talkers: dict[str, ComicTalker]) -> settngs.Config[ct_ns]:
config = validate_archive_settings(config)
config = validate_talker_settings(config, talkers)
return config
def register_plugin_settings(manager: settngs.Manager, talkers: dict[str, ComicTalker]) -> None:
manager.add_persistent_group("Archive", archiver, False)
register_talker_settings(manager, talkers)


@@ -0,0 +1,153 @@
"""Functions related to finding and loading plugins."""
# Lifted from flake8 https://github.com/PyCQA/flake8/blob/main/src/flake8/plugins/finder.py#L127
from __future__ import annotations
import configparser
import importlib.metadata
import logging
import pathlib
import re
from collections.abc import Generator
from typing import Any, NamedTuple
logger = logging.getLogger(__name__)
NORMALIZE_PACKAGE_NAME_RE = re.compile(r"[-_.]+")
PLUGIN_GROUPS = frozenset(("comictagger.talker", "comicapi.archiver", "comicapi.metadata"))
class FailedToLoadPlugin(Exception):
"""Exception raised when a plugin fails to load."""
FORMAT = 'ComicTagger failed to load local plugin "{name}" due to {exc}.'
def __init__(self, plugin_name: str, exception: Exception) -> None:
"""Initialize our FailedToLoadPlugin exception."""
self.plugin_name = plugin_name
self.original_exception = exception
super().__init__(plugin_name, exception)
def __str__(self) -> str:
"""Format our exception message."""
return self.FORMAT.format(
name=self.plugin_name,
exc=self.original_exception,
)
def normalize_pypi_name(s: str) -> str:
"""Normalize a distribution name according to PEP 503."""
return NORMALIZE_PACKAGE_NAME_RE.sub("-", s).lower()
class Plugin(NamedTuple):
"""A plugin before loading."""
package: str
version: str
entry_point: importlib.metadata.EntryPoint
path: pathlib.Path
class LoadedPlugin(NamedTuple):
"""Represents a plugin after being imported."""
plugin: Plugin
obj: Any
@property
def entry_name(self) -> str:
"""Return the name given in the packaging metadata."""
return self.plugin.entry_point.name
@property
def display_name(self) -> str:
"""Return the name for use in user-facing / error messages."""
return f"{self.plugin.package}[{self.entry_name}]"
class Plugins(NamedTuple):
"""Classified plugins."""
archivers: list[Plugin]
metadata: list[Plugin]
talkers: list[Plugin]
def all_plugins(self) -> Generator[Plugin, None, None]:
"""Return an iterator over all :class:`LoadedPlugin`s."""
yield from self.archivers
yield from self.metadata
yield from self.talkers
def versions_str(self) -> str:
"""Return a user-displayed list of plugin versions."""
return ", ".join(sorted({f"{plugin.package}: {plugin.version}" for plugin in self.all_plugins()}))
def _find_local_plugins(plugin_path: pathlib.Path) -> Generator[Plugin, None, None]:
cfg = configparser.ConfigParser(interpolation=None)
cfg.read(plugin_path / "setup.cfg")
for group in PLUGIN_GROUPS:
for plugin_s in cfg.get("options.entry_points", group, fallback="").splitlines():
if not plugin_s:
continue
name, _, entry_str = plugin_s.partition("=")
name, entry_str = name.strip(), entry_str.strip()
ep = importlib.metadata.EntryPoint(name, entry_str, group)
yield Plugin(plugin_path.name, cfg.get("metadata", "version", fallback="0.0.1"), ep, plugin_path)
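# Illustrative example, not part of this module: a local plugin layout that
# _find_local_plugins() above would pick up. The package, module and class names
# are invented for the example.
_EXAMPLE_SETUP_CFG = """\
[metadata]
name = comictagger-example-talker
version = 0.1.0

[options.entry_points]
comictagger.talker =
    example = example_talker.talker:ExampleTalker
"""
# Placed at <plugin folder>/example_talker/setup.cfg this yields
# Plugin(package='example_talker', version='0.1.0',
#        entry_point=EntryPoint('example', 'example_talker.talker:ExampleTalker',
#                               'comictagger.talker'),
#        path=<plugin folder>/example_talker).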
def _check_required_plugins(plugins: list[Plugin], expected: frozenset[str]) -> None:
plugin_names = {normalize_pypi_name(plugin.package) for plugin in plugins}
expected_names = {normalize_pypi_name(name) for name in expected}
missing_plugins = expected_names - plugin_names
if missing_plugins:
raise Exception(
"required plugins were not installed!\n"
+ f"- installed: {', '.join(sorted(plugin_names))}\n"
+ f"- expected: {', '.join(sorted(expected_names))}\n"
+ f"- missing: {', '.join(sorted(missing_plugins))}"
)
def find_plugins(plugin_folder: pathlib.Path) -> Plugins:
"""Discovers all plugins (but does not load them)."""
ret: list[Plugin] = []
for plugin_path in plugin_folder.glob("*/setup.cfg"):
try:
ret.extend(_find_local_plugins(plugin_path.parent))
except Exception as err:
logger.exception(FailedToLoadPlugin(plugin_path.parent.name, err))
# for determinism, sort the list
ret.sort()
return _classify_plugins(ret)
def _classify_plugins(plugins: list[Plugin]) -> Plugins:
archivers = []
metadata = []
talkers = []
for p in plugins:
if p.entry_point.group == "comictagger.talker":
talkers.append(p)
elif p.entry_point.group == "comicapi.metadata":
metadata.append(p)
elif p.entry_point.group == "comicapi.archiver":
archivers.append(p)
else:
logger.warning(NotImplementedError(f"what plugin type? {p}"))
return Plugins(
metadata=metadata,
archivers=archivers,
talkers=talkers,
)
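# Illustrative usage, not part of this module; the folder path is only an
# example (ComicTagger passes its configured plugin directory).
_plugins = find_plugins(pathlib.Path.home() / ".ComicTagger" / "plugins")
print(_plugins.versions_str())
for _plugin in _plugins.all_plugins():
    print(_plugin.entry_point.group, _plugin.entry_point.name)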


@@ -0,0 +1,261 @@
from __future__ import annotations
import typing
import settngs
import comicapi.genericmetadata
import comicapi.utils
import comictaggerlib.ctsettings.types
import comictaggerlib.defaults
import comictaggerlib.resulttypes
class SettngsNS(settngs.TypedNS):
Commands__version: bool
Commands__command: comictaggerlib.resulttypes.Action
Commands__copy: str
Runtime_Options__config: comictaggerlib.ctsettings.types.ComicTaggerPaths
Runtime_Options__verbose: int
Runtime_Options__abort_on_conflict: bool
Runtime_Options__delete_original: bool
Runtime_Options__parse_filename: bool
Runtime_Options__issue_id: str
Runtime_Options__online: bool
Runtime_Options__metadata: comicapi.genericmetadata.GenericMetadata
Runtime_Options__interactive: bool
Runtime_Options__abort_on_low_confidence: bool
Runtime_Options__summary: bool
Runtime_Options__raw: bool
Runtime_Options__recursive: bool
Runtime_Options__dryrun: bool
Runtime_Options__darkmode: bool
Runtime_Options__glob: bool
Runtime_Options__quiet: bool
Runtime_Options__json: bool
Runtime_Options__type: list[str]
Runtime_Options__overwrite: bool
Runtime_Options__no_gui: bool
Runtime_Options__files: list[str]
internal__install_id: str
internal__save_data_style: list[str]
internal__load_data_style: str
internal__last_opened_folder: str
internal__window_width: int
internal__window_height: int
internal__window_x: int
internal__window_y: int
internal__form_width: int
internal__list_width: int
internal__sort_column: int
internal__sort_direction: int
Issue_Identifier__series_match_identify_thresh: int
Issue_Identifier__border_crop_percent: int
Issue_Identifier__publisher_filter: list[str]
Issue_Identifier__series_match_search_thresh: int
Issue_Identifier__clear_metadata: bool
Issue_Identifier__auto_imprint: bool
Issue_Identifier__sort_series_by_year: bool
Issue_Identifier__exact_series_matches_first: bool
Issue_Identifier__always_use_publisher_filter: bool
Filename_Parsing__filename_parser: comicapi.utils.Parser
Filename_Parsing__remove_c2c: bool
Filename_Parsing__remove_fcbd: bool
Filename_Parsing__remove_publisher: bool
Filename_Parsing__split_words: bool
Filename_Parsing__protofolius_issue_number_scheme: bool
Filename_Parsing__allow_issue_start_with_letter: bool
Sources__source: str
Sources__remove_html_tables: bool
Comic_Book_Lover__assume_lone_credit_is_primary: bool
Comic_Book_Lover__copy_characters_to_tags: bool
Comic_Book_Lover__copy_teams_to_tags: bool
Comic_Book_Lover__copy_locations_to_tags: bool
Comic_Book_Lover__copy_storyarcs_to_tags: bool
Comic_Book_Lover__copy_notes_to_comments: bool
Comic_Book_Lover__copy_weblink_to_comments: bool
Comic_Book_Lover__apply_transform_on_import: bool
Comic_Book_Lover__apply_transform_on_bulk_operation: bool
File_Rename__template: str
File_Rename__issue_number_padding: int
File_Rename__use_smart_string_cleanup: bool
File_Rename__auto_extension: bool
File_Rename__dir: str
File_Rename__move_to_dir: bool
File_Rename__strict: bool
File_Rename__replacements: comictaggerlib.defaults.Replacements
Auto_Tag__save_on_low_confidence: bool
Auto_Tag__dont_use_year_when_identifying: bool
Auto_Tag__assume_issue_one: bool
Auto_Tag__ignore_leading_numbers_in_filename: bool
Auto_Tag__remove_archive_after_successful_match: bool
General__check_for_new_version: bool
General__disable_cr: bool
General__use_short_metadata_names: bool
General__prompt_on_save: bool
Dialog_Flags__show_disclaimer: bool
Dialog_Flags__dont_notify_about_this_version: str
Dialog_Flags__ask_about_usage_stats: bool
Archive__rar: str
Source_comicvine__comicvine_key: str
Source_comicvine__comicvine_url: str
Source_comicvine__cv_use_series_start_as_volume: bool
class Commands(typing.TypedDict):
version: bool
command: comictaggerlib.resulttypes.Action
copy: str
class Runtime_Options(typing.TypedDict):
config: comictaggerlib.ctsettings.types.ComicTaggerPaths
verbose: int
abort_on_conflict: bool
delete_original: bool
parse_filename: bool
issue_id: str
online: bool
metadata: comicapi.genericmetadata.GenericMetadata
interactive: bool
abort_on_low_confidence: bool
summary: bool
raw: bool
recursive: bool
dryrun: bool
darkmode: bool
glob: bool
quiet: bool
json: bool
type: list[str]
overwrite: bool
no_gui: bool
files: list[str]
class internal(typing.TypedDict):
install_id: str
save_data_style: list[str]
load_data_style: str
last_opened_folder: str
window_width: int
window_height: int
window_x: int
window_y: int
form_width: int
list_width: int
sort_column: int
sort_direction: int
class Issue_Identifier(typing.TypedDict):
series_match_identify_thresh: int
border_crop_percent: int
publisher_filter: list[str]
series_match_search_thresh: int
clear_metadata: bool
auto_imprint: bool
sort_series_by_year: bool
exact_series_matches_first: bool
always_use_publisher_filter: bool
class Filename_Parsing(typing.TypedDict):
filename_parser: comicapi.utils.Parser
remove_c2c: bool
remove_fcbd: bool
remove_publisher: bool
split_words: bool
protofolius_issue_number_scheme: bool
allow_issue_start_with_letter: bool
class Sources(typing.TypedDict):
source: str
remove_html_tables: bool
class Comic_Book_Lover(typing.TypedDict):
assume_lone_credit_is_primary: bool
copy_characters_to_tags: bool
copy_teams_to_tags: bool
copy_locations_to_tags: bool
copy_storyarcs_to_tags: bool
copy_notes_to_comments: bool
copy_weblink_to_comments: bool
apply_transform_on_import: bool
apply_transform_on_bulk_operation: bool
class File_Rename(typing.TypedDict):
template: str
issue_number_padding: int
use_smart_string_cleanup: bool
auto_extension: bool
dir: str
move_to_dir: bool
strict: bool
replacements: comictaggerlib.defaults.Replacements
class Auto_Tag(typing.TypedDict):
save_on_low_confidence: bool
dont_use_year_when_identifying: bool
assume_issue_one: bool
ignore_leading_numbers_in_filename: bool
remove_archive_after_successful_match: bool
class General(typing.TypedDict):
check_for_new_version: bool
disable_cr: bool
use_short_metadata_names: bool
prompt_on_save: bool
class Dialog_Flags(typing.TypedDict):
show_disclaimer: bool
dont_notify_about_this_version: str
ask_about_usage_stats: bool
class Archive(typing.TypedDict):
rar: str
class Source_comicvine(typing.TypedDict):
comicvine_key: str
comicvine_url: str
cv_use_series_start_as_volume: bool
SettngsDict = typing.TypedDict(
"SettngsDict",
{
"Commands": Commands,
"Runtime Options": Runtime_Options,
"internal": internal,
"Issue Identifier": Issue_Identifier,
"Filename Parsing": Filename_Parsing,
"Sources": Sources,
"Comic Book Lover": Comic_Book_Lover,
"File Rename": File_Rename,
"Auto-Tag": Auto_Tag,
"General": General,
"Dialog Flags": Dialog_Flags,
"Archive": Archive,
"Source comicvine": Source_comicvine,
},
)
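# Illustrative helper, not part of this module: settings are exposed as
# Group__setting attributes on the flat namespace above, which gives plain
# attribute access and type checking. The function name is hypothetical.
def _wants_json_output(cfg: SettngsNS) -> bool:
    # JSON output is ignored in interactive mode, mirroring the command-line
    # validation elsewhere in ctsettings.
    return cfg.Runtime_Options__json and not cfg.Runtime_Options__interactive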


@@ -0,0 +1,244 @@
from __future__ import annotations
import argparse
import pathlib
import sys
import types
import typing
from collections.abc import Collection, Mapping
from typing import Any
import yaml
from appdirs import AppDirs
from comicapi import utils
from comicapi.comicarchive import metadata_styles
from comicapi.genericmetadata import REMOVE, GenericMetadata
if sys.version_info < (3, 10):
@typing.no_type_check
def get_type_hints(obj, globalns=None, localns=None, include_extras=False):
if getattr(obj, "__no_type_check__", None):
return {}
# Classes require a special treatment.
if isinstance(obj, type):
hints = {}
for base in reversed(obj.__mro__):
if globalns is None:
base_globals = getattr(sys.modules.get(base.__module__, None), "__dict__", {})
else:
base_globals = globalns
ann = base.__dict__.get("__annotations__", {})
if isinstance(ann, types.GetSetDescriptorType):
ann = {}
base_locals = dict(vars(base)) if localns is None else localns
if localns is None and globalns is None:
# This is surprising, but required. Before Python 3.10,
# get_type_hints only evaluated the globalns of
# a class. To maintain backwards compatibility, we reverse
# the globalns and localns order so that eval() looks into
# *base_globals* first rather than *base_locals*.
# This only affects ForwardRefs.
base_globals, base_locals = base_locals, base_globals
for name, value in ann.items():
if value is None:
value = type(None)
if isinstance(value, str):
if "|" in value:
value = "Union[" + value.replace(" |", ",") + "]"
value = typing.ForwardRef(value, is_argument=False, is_class=True)
value = typing._eval_type(value, base_globals, base_locals)
hints[name] = value
return hints if include_extras else {k: typing._strip_annotations(t) for k, t in hints.items()}
if globalns is None:
if isinstance(obj, types.ModuleType):
globalns = obj.__dict__
else:
nsobj = obj
# Find globalns for the unwrapped object.
while hasattr(nsobj, "__wrapped__"):
nsobj = nsobj.__wrapped__
globalns = getattr(nsobj, "__globals__", {})
if localns is None:
localns = globalns
elif localns is None:
localns = globalns
hints = getattr(obj, "__annotations__", None)
if hints is None:
# Return empty annotations for something that _could_ have them.
if isinstance(obj, typing._allowed_types):
return {}
else:
raise TypeError("{!r} is not a module, class, method, " "or function.".format(obj))
hints = dict(hints)
for name, value in hints.items():
if value is None:
value = type(None)
if isinstance(value, str):
if "|" in value:
value = "Union[" + value.replace(" |", ",") + "]"
# class-level forward refs were handled above, this must be either
# a module-level annotation or a function argument annotation
value = typing.ForwardRef(
value,
is_argument=not isinstance(obj, types.ModuleType),
is_class=False,
)
hints[name] = typing._eval_type(value, globalns, localns)
return hints if include_extras else {k: typing._strip_annotations(t) for k, t in hints.items()}
else:
from typing import get_type_hints
class ComicTaggerPaths(AppDirs):
def __init__(self, config_path: pathlib.Path | str | None = None) -> None:
super().__init__("ComicTagger", None, None, False, False)
self.path: pathlib.Path | None = None
if config_path:
self.path = pathlib.Path(config_path).absolute()
@property
def user_data_dir(self) -> pathlib.Path:
if self.path:
return self.path
return pathlib.Path(super().user_data_dir)
@property
def user_config_dir(self) -> pathlib.Path:
if self.path:
return self.path
return pathlib.Path(super().user_config_dir)
@property
def user_cache_dir(self) -> pathlib.Path:
if self.path:
path = self.path / "cache"
return path
return pathlib.Path(super().user_cache_dir)
@property
def user_state_dir(self) -> pathlib.Path:
if self.path:
return self.path
return pathlib.Path(super().user_state_dir)
@property
def user_log_dir(self) -> pathlib.Path:
if self.path:
path = self.path / "log"
return path
return pathlib.Path(super().user_log_dir)
@property
def user_plugin_dir(self) -> pathlib.Path:
if self.path:
path = self.path / "plugins"
return path
return pathlib.Path(super().user_config_dir)
@property
def site_data_dir(self) -> pathlib.Path:
return pathlib.Path(super().site_data_dir)
@property
def site_config_dir(self) -> pathlib.Path:
return pathlib.Path(super().site_config_dir)
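# Illustrative aside, not part of this module: with an explicit config path most
# directories collapse onto it, with cache, log and plugins nested inside. The
# path below is an example only.
_paths = ComicTaggerPaths("/tmp/ct-config")
# _paths.user_config_dir -> Path('/tmp/ct-config')
# _paths.user_cache_dir  -> Path('/tmp/ct-config/cache')
# _paths.user_log_dir    -> Path('/tmp/ct-config/log')
# _paths.user_plugin_dir -> Path('/tmp/ct-config/plugins')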
def metadata_type_single(types: str) -> str:
result = metadata_type(types)
if len(result) > 1:
raise argparse.ArgumentTypeError(f"invalid choice: {result} (only one metadata style allowed)")
return result[0]
def metadata_type(types: str) -> list[str]:
result = []
types = types.casefold()
for typ in utils.split(types, ","):
if typ not in metadata_styles:
choices = ", ".join(metadata_styles)
raise argparse.ArgumentTypeError(f"invalid choice: {typ} (choose from {choices.upper()})")
result.append(metadata_styles[typ].short_name)
return result
def parse_metadata_from_string(mdstr: str) -> GenericMetadata:
def get_type(key: str, tt: Any = get_type_hints(GenericMetadata)) -> Any:
t: Any = tt.get(key, None)
if t is None:
return None
if getattr(t, "__origin__", None) is typing.Union and len(t.__args__) == 2 and t.__args__[1] is type(None):
t = t.__args__[0]
elif isinstance(t, types.GenericAlias) and issubclass(t.mro()[0], Collection):
t = t.mro()[0], t.__args__[0]
if isinstance(t, tuple) and issubclass(t[1], dict):
return (t[0], dict)
if isinstance(t, type) and issubclass(t, dict):
return dict
return t
def convert_value(t: type, value: Any) -> Any:
if not isinstance(value, t):
if isinstance(value, (Mapping)):
value = t(**value)
elif not isinstance(value, str) and isinstance(value, (Collection)):
value = t(*value)
else:
try:
if t is utils.Url and isinstance(value, str):
value = utils.parse_url(value)
else:
value = t(value)
except (ValueError, TypeError):
raise argparse.ArgumentTypeError(f"Invalid syntax for tag '{key}'")
return value
md = GenericMetadata()
if not mdstr:
return md
if mdstr[0] == "@":
p = pathlib.Path(mdstr[1:])
if not p.is_file():
raise argparse.ArgumentTypeError("Invalid filepath")
mdstr = p.read_text()
if mdstr[0] != "{":
mdstr = "{" + mdstr + "}"
md_dict = yaml.safe_load(mdstr)
empty = True
# Map the dict to the metadata object
for key, value in md_dict.items():
if hasattr(md, key):
t = get_type(key)
if value is None:
value = REMOVE
elif isinstance(t, tuple):
if value == "" or value is None:
value = t[0]()
else:
if isinstance(value, str):
value = value.split("::")
if not isinstance(value, Collection):
raise argparse.ArgumentTypeError(f"Invalid syntax for tag '{key}'")
values = list(value)
for idx, v in enumerate(values):
if not isinstance(v, t[1]):
values[idx] = convert_value(t[1], v)
value = t[0](values)
elif value is not None:
value = convert_value(t, value)
empty = False
setattr(md, key, value)
else:
raise argparse.ArgumentTypeError(f"'{key}' is not a valid tag name")
md.is_empty = empty
return md
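# Illustrative calls, not part of this module. Strings are wrapped in braces
# automatically when they do not already start with '{'; list-valued tags can be
# written as a single '::'-separated string; the tag names used here are assumed
# to exist on GenericMetadata.
_md = parse_metadata_from_string("series: 'Kickers, Inc.', issue: '1', year: 1986")
_cleared = parse_metadata_from_string("series: Plastic Man, publisher: ")  # blank value -> REMOVE
# parse_metadata_from_string("@overrides.yaml")  # '@' prefix reads the YAML from a file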


@@ -1,3 +0,0 @@
# This file should contan only these comments, and the line below.
# Used by packaging makefiles and app
version="1.1.3-beta"


@@ -0,0 +1,29 @@
from __future__ import annotations
from typing import NamedTuple
class Replacement(NamedTuple):
find: str
replce: str
strict_only: bool
class Replacements(NamedTuple):
literal_text: list[Replacement]
format_value: list[Replacement]
DEFAULT_REPLACEMENTS = Replacements(
literal_text=[
Replacement(": ", " - ", True),
Replacement(":", "-", True),
],
format_value=[
Replacement(": ", " - ", True),
Replacement(":", "-", True),
Replacement("/", "-", False),
Replacement("//", "--", False),
Replacement("\\", "-", True),
],
)


@@ -1,65 +1,62 @@
"""
A PyQT4 dialog to confirm and set options for export to zip
"""
"""A PyQT4 dialog to confirm and set options for export to zip"""
"""
Copyright 2012 Anthony Beville
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
import logging
http://www.apache.org/licenses/LICENSE-2.0
from PyQt5 import QtCore, QtWidgets, uic
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from comictaggerlib.ui import ui_path
logger = logging.getLogger(__name__)
from PyQt4 import QtCore, QtGui, uic
from settings import ComicTaggerSettings
from settingswindow import SettingsWindow
from filerenamer import FileRenamer
import os
import utils
class ExportConflictOpts:
dontCreate = 1
overwrite = 2
createUnique = 3
class ExportWindow(QtGui.QDialog):
def __init__( self, parent, settings, msg ):
super(ExportWindow, self).__init__(parent)
uic.loadUi(ComicTaggerSettings.getUIFile('exportwindow.ui' ), self)
self.label.setText( msg )
dontCreate = 1
overwrite = 2
createUnique = 3
self.setWindowFlags(self.windowFlags() &
~QtCore.Qt.WindowContextHelpButtonHint )
self.settings = settings
self.cbxDeleteOriginal.setCheckState( QtCore.Qt.Unchecked )
self.cbxAddToList.setCheckState( QtCore.Qt.Checked )
self.radioDontCreate.setChecked( True )
self.deleteOriginal = False
self.addToList = True
self.fileConflictBehavior = ExportConflictOpts.dontCreate
class ExportWindow(QtWidgets.QDialog):
def __init__(self, parent: QtWidgets.QWidget, msg: str) -> None:
super().__init__(parent)
def accept( self ):
QtGui.QDialog.accept(self)
with (ui_path / "exportwindow.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
self.label.setText(msg)
self.deleteOriginal = self.cbxDeleteOriginal.isChecked()
self.addToList = self.cbxAddToList.isChecked()
if self.radioDontCreate.isChecked():
self.fileConflictBehavior = ExportConflictOpts.dontCreate
elif self.radioCreateNew.isChecked():
self.fileConflictBehavior = ExportConflictOpts.createUnique
#else:
# self.fileConflictBehavior = ExportConflictOpts.overwrite
self.setWindowFlags(
QtCore.Qt.WindowType(self.windowFlags() & ~QtCore.Qt.WindowType.WindowContextHelpButtonHint)
)
self.cbxDeleteOriginal.setChecked(False)
self.cbxAddToList.setChecked(True)
self.radioDontCreate.setChecked(True)
self.deleteOriginal = False
self.addToList = True
self.fileConflictBehavior = ExportConflictOpts.dontCreate
def accept(self) -> None:
QtWidgets.QDialog.accept(self)
self.deleteOriginal = self.cbxDeleteOriginal.isChecked()
self.addToList = self.cbxAddToList.isChecked()
if self.radioDontCreate.isChecked():
self.fileConflictBehavior = ExportConflictOpts.dontCreate
elif self.radioCreateNew.isChecked():
self.fileConflictBehavior = ExportConflictOpts.createUnique


@@ -1,235 +0,0 @@
"""
Functions for parsing comic info from filename
This should probably be re-written, but, well, it mostly works!
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
# Some portions of this code were modified from pyComicMetaThis project
# http://code.google.com/p/pycomicmetathis/
import re
import os
from urllib import unquote
class FileNameParser:
def fixSpaces( self, string, remove_dashes=True ):
if remove_dashes:
placeholders = ['[-_]',' +']
else:
placeholders = ['[_]',' +']
for ph in placeholders:
string = re.sub(ph, ' ', string )
return string.strip()
# check for silly .1 or .5 style issue strings
# allow up to 5 chars total
def isPointIssue( self, word ):
ret = False
try:
float(word)
if (len(word) < 5 and not word.isdigit()):
ret = True
except ValueError:
pass
return ret
def getIssueCount( self,filename ):
count = ""
# replace any name seperators with spaces
tmpstr = self.fixSpaces(filename)
found = False
match = re.search('(?<=\sof\s)\d+(?=\s)', tmpstr, re.IGNORECASE)
if match:
count = match.group()
found = True
if not found:
match = re.search('(?<=\(of\s)\d+(?=\))', tmpstr, re.IGNORECASE)
if match:
count = match.group()
found = True
count = count.lstrip("0")
return count
def getIssueNumber( self, filename ):
found = False
issue = ''
# first, look for multiple "--", this mean's it's formatted differently from most:
if "--" in filename:
# the pattern seems to be that anything to left of the first "--" is the series name followed by issue
filename = filename.split("--")[0]
elif "___" in filename:
# the pattern seems to be that anything to left of the first "__" is the series name followed by issue
filename = filename.split("__")[0]
filename = filename.replace("+", " ")
# remove parenthetical phrases
filename = re.sub( "\(.*\)", "", filename)
filename = re.sub( "\[.*\]", "", filename)
# guess based on position
# replace any name seperators with spaces
tmpstr = self.fixSpaces(filename)
word_list = tmpstr.split(' ')
#before we search, remove any kind of likely "of X" phrase
for i in range(0, len(word_list)-2):
if ( word_list[i].isdigit() and
word_list[i+1] == "of" and
word_list[i+2].isdigit() ):
word_list[i+1] ="XXX"
word_list[i+2] ="XXX"
# first look for the last "#" followed by a digit in the filename. this is almost certainly the issue number
#issnum = re.search('#\d+', filename)
matchlist = re.findall("#\d+", filename)
if len(matchlist) > 0:
#get the last item
issue = matchlist[ len(matchlist) - 1]
issue = issue[1:]
found = True
# assume the last number in the filename that is under 4 digits is the issue number
if not found:
for word in reversed(word_list):
if len(word) > 0 and word[0] == "#":
word = word[1:]
if (
(word.isdigit() and len(word) < 4) or
(self.isPointIssue(word))
):
issue = word
found = True
#print 'Assuming issue number is ' + str(issue) + ' based on the position.'
break
if not found:
# try a regex
issnum = re.search('(?<=[_#\s-])(\d+[a-zA-Z]|\d+\.\d|\d+)', filename)
if issnum:
issue = issnum.group()
found = True
#print 'Got the issue using regex. Issue is ' + issue
return issue.strip()
def getSeriesName(self, filename, issue ):
# use the issue number string to split the filename string
# assume first element of list is the series name, plus cruft
#!!! this could fail in the case of small numerics in the series name!!!
# TODO: we really should pass in the *INDEX* of the issue, that makes
# finding it easier
filename = filename.replace("+", " ")
tmpstr = self.fixSpaces(filename, remove_dashes=False)
#remove pound signs. this might mess up the series name if there is a# in it.
tmpstr = tmpstr.replace("#", " ")
if issue != "":
# assume that issue substr has at least one space before it
issue_str = " " + str(issue)
series = tmpstr.split(issue_str)[0]
else:
# no issue to work off of
#!!! TODO we should look for the year, and split from that
# and if that doesn't exist, remove parenthetical phrases
series = tmpstr
series = re.sub( "\(.*\)", "", tmpstr)
volume = ""
series = series.rstrip("#")
# search for volume number
match = re.search('(.+)([vV]|[Vv][oO][Ll]\.?\s?)(\d+)\s*$', series)
if match:
series = match.group(1)
volume = match.group(3)
return series.strip(), volume.strip()
def getYear( self,filename):
year = ""
# look for four digit number with "(" ")" or "--" around it
match = re.search('(\(\d\d\d\d\))|(--\d\d\d\d--)', filename)
if match:
year = match.group()
# remove non-numerics
year = re.sub("[^0-9]", "", year)
return year
def parseFilename( self, filename ):
# remove the path
filename = os.path.basename(filename)
# remove the extension
filename = os.path.splitext(filename)[0]
#url decode, just in case
filename = unquote(filename)
# sometimes archives get messed up names from too many decodings
# often url encodings will break and leave "_28" and "_29" in place
# of "(" and ")" see if there are a number of these, and replace them
if filename.count("_28") > 1 and filename.count("_29") > 1:
filename = filename.replace("_28", "(")
filename = filename.replace("_29", ")")
# ----HACK
# remove the first word that word is a 3 digit number.
# some story arcs collection packs do this, but it's ugly
# this will probably break something, i.e. "100 bullets"
#word = filename.split(' ')[0]
#if len(word) == 3 and word[0] =='0' and word.isdigit():
# filename = filename[4:]
# ----HACK -
self.issue = self.getIssueNumber(filename)
self.series, self.volume = self.getSeriesName(filename, self.issue)
self.year = self.getYear(filename)
self.issue_count = self.getIssueCount(filename)
if self.issue != "":
# strip off leading zeros
self.issue = self.issue.lstrip("0")
if self.issue == "":
self.issue = "0"
if self.issue[0] == ".":
self.issue = "0" + self.issue


@@ -1,140 +1,248 @@
"""
Functions for renaming files based on metadata
"""
"""Functions for renaming files based on metadata"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import calendar
import logging
import os
import re
import datetime
import utils
from issuestring import IssueString
import pathlib
import string
from collections.abc import Mapping, Sequence
from typing import Any, cast
from pathvalidate import Platform, normalize_platform, sanitize_filename
from comicapi.comicarchive import ComicArchive
from comicapi.genericmetadata import GenericMetadata
from comicapi.issuestring import IssueString
from comictaggerlib.defaults import DEFAULT_REPLACEMENTS, Replacement, Replacements
logger = logging.getLogger(__name__)
def get_rename_dir(ca: ComicArchive, rename_dir: str | pathlib.Path | None) -> pathlib.Path:
folder = ca.path.parent.absolute()
if rename_dir is not None:
if isinstance(rename_dir, str):
rename_dir = rename_dir.strip()
folder = pathlib.Path(rename_dir).absolute()
return folder
class MetadataFormatter(string.Formatter):
def __init__(
self, smart_cleanup: bool = False, platform: str = "auto", replacements: Replacements = DEFAULT_REPLACEMENTS
) -> None:
super().__init__()
self.smart_cleanup = smart_cleanup
self.platform = normalize_platform(platform)
self.replacements = replacements
def format_field(self, value: Any, format_spec: str) -> str:
if value is None or value == "":
return ""
return cast(str, super().format_field(value, format_spec))
def convert_field(self, value: Any, conversion: str) -> str:
if conversion == "u":
return str(value).upper()
if conversion == "l":
return str(value).casefold()
if conversion == "c":
return str(value).capitalize()
if conversion == "S":
return str(value).swapcase()
if conversion == "t":
return str(value).title()
if conversion == "j":
return ", ".join(list(str(v) for v in value))
return cast(str, super().convert_field(value, conversion))
def handle_replacements(self, string: str, replacements: list[Replacement]) -> str:
for find, replace, strict_only in replacements:
if self.is_strict() or not strict_only:
string = string.replace(find, replace)
return string
def none_replacement(self, value: Any, replacement: str, r: str) -> Any:
if r == "-" and value is None or value == "":
return replacement
if r == "+" and value is not None:
return replacement
return value
def split_replacement(self, field_name: str) -> tuple[str, str, str]:
if "-" in field_name:
return field_name.rpartition("-")
if "+" in field_name:
return field_name.rpartition("+")
return field_name, "", ""
def is_strict(self) -> bool:
return self.platform in [Platform.UNIVERSAL, Platform.WINDOWS]
def _vformat(
self,
format_string: str,
args: Sequence[Any],
kwargs: Mapping[str, Any],
used_args: set[Any],
recursion_depth: int,
auto_arg_index: int = 0,
) -> tuple[str, int]:
if recursion_depth < 0:
raise ValueError("Max string recursion exceeded")
result = []
lstrip = False
for literal_text, field_name, format_spec, conversion in self.parse(format_string):
# output the literal text
if literal_text:
if lstrip:
literal_text = literal_text.lstrip("-_)}]#")
if self.smart_cleanup:
literal_text = self.handle_replacements(literal_text, self.replacements.literal_text)
lspace = literal_text[0].isspace() if literal_text else False
rspace = literal_text[-1].isspace() if literal_text else False
literal_text = " ".join(literal_text.split())
if literal_text == "":
literal_text = " "
else:
if lspace:
literal_text = " " + literal_text
if rspace:
literal_text += " "
result.append(literal_text)
lstrip = False
# if there's a field, output it
if field_name is not None and field_name != "":
field_name, r, replacement = self.split_replacement(field_name)
field_name = field_name.casefold()
# this is some markup, find the object and do the formatting
# handle arg indexing when digit field_names are given.
if field_name.isdigit():
raise ValueError("cannot use a number as a field name")
# given the field_name, find the object it references
# and the argument it came from
obj, arg_used = self.get_field(field_name, args, kwargs)
used_args.add(arg_used)
obj = self.none_replacement(obj, replacement, r)
# do any conversion on the resulting object
obj = self.convert_field(obj, conversion) # type: ignore
# expand the format spec, if needed
format_spec, _ = self._vformat(
cast(str, format_spec), args, kwargs, used_args, recursion_depth - 1, auto_arg_index=False
)
# format the object and append to the result
fmt_obj = self.format_field(obj, format_spec)
if fmt_obj == "" and result and self.smart_cleanup and literal_text:
if self.str_contains(result[-1], "({["):
lstrip = True
if result:
if " " in result[-1]:
result[-1], _, _ = result[-1].rstrip().rpartition(" ")
result[-1] = result[-1].rstrip("-_({[#")
if self.smart_cleanup:
# colons and slashes get special treatment
fmt_obj = self.handle_replacements(fmt_obj, self.replacements.format_value)
fmt_obj = " ".join(fmt_obj.split())
fmt_obj = str(sanitize_filename(fmt_obj, platform=self.platform))
result.append(fmt_obj)
return "".join(result), False
def str_contains(self, chars: str, string: str) -> bool:
for char in chars:
if char in string:
return True
return False
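# Illustrative usage, not part of this diff: exercising the formatter above. The
# !u conversion and the smart cleanup behaviour are taken from convert_field()
# and _vformat() as shown; the template and values are examples only.
_fmt = MetadataFormatter(smart_cleanup=True, platform="windows")
_name = _fmt.vformat(
    "{series!u} #{issue} ({year})",
    args=[],
    kwargs={"series": "plastic man", "issue": "1", "year": 1966},
)
# _name == 'PLASTIC MAN #1 (1966)'
# With kwargs={"title": None}, a field written as {title-Untitled} renders as
# 'Untitled' (see split_replacement()/none_replacement() above).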
class FileRenamer:
def __init__( self, metadata ):
self.setMetadata( metadata )
self.setTemplate( "%series% v%volume% #%issue% (of %issuecount%) (%year%)" )
self.smart_cleanup = True
self.issue_zero_padding = 3
def __init__(
self,
metadata: GenericMetadata | None,
platform: str = "auto",
replacements: Replacements = DEFAULT_REPLACEMENTS,
) -> None:
self.template = "{publisher}/{series}/{series} v{volume} #{issue} (of {issue_count}) ({year})"
self.smart_cleanup = True
self.issue_zero_padding = 3
self.metadata = metadata or GenericMetadata()
self.move = False
self.platform = platform
self.replacements = replacements
def setMetadata( self, metadata ):
self.metdata = metadata
def set_metadata(self, metadata: GenericMetadata) -> None:
self.metadata = metadata
def setIssueZeroPadding( self, count ):
self.issue_zero_padding = count
def set_issue_zero_padding(self, count: int) -> None:
self.issue_zero_padding = count
def setSmartCleanup( self, on ):
self.smart_cleanup = on
def set_smart_cleanup(self, on: bool) -> None:
self.smart_cleanup = on
def setTemplate( self, template ):
self.template = template
def replaceToken( self, text, value, token ):
#helper func
def isToken( word ):
return (word[0] == "%" and word[-1:] == "%")
def set_template(self, template: str) -> None:
self.template = template
if value is not None:
return text.replace( token, unicode(value) )
else:
if self.smart_cleanup:
# smart cleanup means we want to remove anything appended to token if it's empty
# (e.g "#%issue%" or "v%volume%" )
# (TODO: This could fail if there is more than one token appended together, I guess)
text_list = text.split()
#special case for issuecount, remove preceding non-token word, as in "...(of %issuecount%)..."
if token == '%issuecount%':
for idx,word in enumerate( text_list ):
if token in word and not isToken(text_list[idx -1]) :
text_list[idx -1] = ""
text_list = [ x for x in text_list if token not in x ]
return " ".join( text_list )
else:
return text.replace( token, "" )
def determineName( self, filename, ext=None ):
def determine_name(self, ext: str) -> str:
class Default(dict[str, Any]):
def __missing__(self, key: str) -> str:
return "{" + key + "}"
md = self.metdata
new_name = self.template
preferred_encoding = utils.get_actual_preferred_encoding()
md = self.metadata
#print u"{0}".format(md)
new_name = self.replaceToken( new_name, md.series, '%series%')
new_name = self.replaceToken( new_name, md.volume, '%volume%')
if md.issue is not None:
issue_str = u"{0}".format( IssueString(md.issue).asString(pad=self.issue_zero_padding) )
else:
issue_str = None
new_name = self.replaceToken( new_name, issue_str, '%issue%')
new_name = self.replaceToken( new_name, md.issueCount, '%issuecount%')
new_name = self.replaceToken( new_name, md.year, '%year%')
new_name = self.replaceToken( new_name, md.publisher, '%publisher%')
new_name = self.replaceToken( new_name, md.title, '%title%')
new_name = self.replaceToken( new_name, md.month, '%month%')
month_name = None
if md.month is not None:
if (type(md.month) == str and md.month.isdigit()) or type(md.month) == int:
if int(md.month) in range(1,13):
dt = datetime.datetime( 1970, int(md.month), 1, 0, 0)
month_name = dt.strftime(u"%B".encode(preferred_encoding)).decode(preferred_encoding)
new_name = self.replaceToken( new_name, month_name, '%month_name%')
template = self.template
new_name = self.replaceToken( new_name, md.genre, '%genre%')
new_name = self.replaceToken( new_name, md.language, '%language_code%')
new_name = self.replaceToken( new_name, md.criticalRating , '%criticalrating%')
new_name = self.replaceToken( new_name, md.alternateSeries, '%alternateseries%')
new_name = self.replaceToken( new_name, md.alternateNumber, '%alternatenumber%')
new_name = self.replaceToken( new_name, md.alternateCount, '%alternatecount%')
new_name = self.replaceToken( new_name, md.imprint, '%imprint%')
new_name = self.replaceToken( new_name, md.format, '%format%')
new_name = self.replaceToken( new_name, md.maturityRating, '%maturityrating%')
new_name = self.replaceToken( new_name, md.storyArc, '%storyarc%')
new_name = self.replaceToken( new_name, md.seriesGroup, '%seriesgroup%')
new_name = self.replaceToken( new_name, md.scanInfo, '%scaninfo%')
if self.smart_cleanup:
# remove empty braces, brackets, parentheses
new_name = re.sub("\(\s*[-:]*\s*\)", "", new_name )
new_name = re.sub("\[\s*[-:]*\s*\]", "", new_name )
new_name = re.sub("\{\s*[-:]*\s*\}", "", new_name )
new_name = ""
# remove duplicate -, _
new_name = re.sub("[-_]+\s+", "- ", new_name )
new_name = re.sub("(\s-)+", " -", new_name )
fmt = MetadataFormatter(self.smart_cleanup, platform=self.platform, replacements=self.replacements)
md_dict = vars(md)
md_dict["web_link"] = ""
if md.web_links:
md_dict["web_link"] = md.web_links[0]
# remove duplicate spaces
new_name = u" ".join(new_name.split())
if ext is None:
ext = os.path.splitext( filename )[1]
md_dict["issue"] = IssueString(md.issue).as_string(pad=self.issue_zero_padding)
for role in ["writer", "penciller", "inker", "colorist", "letterer", "cover artist", "editor"]:
md_dict[role] = md.get_primary_credit(role)
new_name += ext
# some tweaks to keep various filesystems happy
new_name = new_name.replace("/", "-")
new_name = new_name.replace(":", "-")
new_name = new_name.replace("?", "")
return new_name
if (isinstance(md.month, int) or isinstance(md.month, str) and md.month.isdigit()) and 0 < int(md.month) < 13:
md_dict["month_name"] = calendar.month_name[int(md.month)]
md_dict["month_abbr"] = calendar.month_abbr[int(md.month)]
else:
md_dict["month_name"] = ""
md_dict["month_abbr"] = ""
new_basename = ""
for component in pathlib.PureWindowsPath(template).parts:
new_basename = str(
sanitize_filename(fmt.vformat(component, args=[], kwargs=Default(md_dict)), platform=self.platform)
).strip()
new_name = os.path.join(new_name, new_basename)
new_name += ext
new_basename += ext
if self.move:
return new_name.strip()
return new_basename.strip()
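A usage sketch for the rewritten renamer (illustrative values; the import paths are assumptions based on the imports appearing elsewhere in this diff):
from comicapi.genericmetadata import GenericMetadata  # path assumed
from comictaggerlib.filerenamer import FileRenamer    # path assumed
md = GenericMetadata()
md.series = "Example Series"  # sample data, purely illustrative
md.issue = "2"
md.volume = 1
md.year = 2024
renamer = FileRenamer(md, platform="universal")
renamer.set_template("{series} v{volume} #{issue} ({year})")
renamer.set_issue_zero_padding(3)
renamer.set_smart_cleanup(True)
print(renamer.determine_name(".cbz"))  # roughly "Example Series v1 #002 (2024).cbz"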


@ -1,389 +1,385 @@
# coding=utf-8
"""
A PyQt4 widget for managing list of comic archive files
"""
"""A PyQt5 widget for managing list of comic archive files"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
#
# Copyright 2012-2014 ComicTagger Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import os
import sys
import platform
from typing import Callable, cast
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4 import uic
from PyQt4.QtCore import pyqtSignal
from PyQt5 import QtCore, QtWidgets, uic
from settings import ComicTaggerSettings
from comicarchive import ComicArchive
from comicarchive import MetaDataStyle
from genericmetadata import GenericMetadata, PageType
import utils
from comicapi import utils
from comicapi.comicarchive import ComicArchive
from comictaggerlib.ctsettings import ct_ns
from comictaggerlib.graphics import graphics_path
from comictaggerlib.optionalmsgdialog import OptionalMessageDialog
from comictaggerlib.settingswindow import linuxRarHelp, macRarHelp, windowsRarHelp
from comictaggerlib.ui import ui_path
from comictaggerlib.ui.qtutils import center_window_on_parent, reduce_widget_font_size
class FileTableWidget( QTableWidget ):
def __init__(self, parent ):
super(FileTableWidget, self).__init__(parent)
self.setColumnCount(5)
self.setHorizontalHeaderLabels (["File", "Folder", "CR", "CBL", ""])
self.horizontalHeader().setStretchLastSection( True )
logger = logging.getLogger(__name__)
class FileTableWidgetItem(QTableWidgetItem):
def __lt__(self, other):
return (self.data(Qt.UserRole).toBool() <
other.data(Qt.UserRole).toBool())
class FileSelectionList(QtWidgets.QWidget):
selectionChanged = QtCore.pyqtSignal(QtCore.QVariant)
listCleared = QtCore.pyqtSignal()
fileColNum = 0
MDFlagColNum = 1
typeColNum = 2
readonlyColNum = 3
folderColNum = 4
dataColNum = fileColNum
class FileInfo( ):
def __init__(self, ca ):
self.ca = ca
def __init__(
self, parent: QtWidgets.QWidget, config: ct_ns, dirty_flag_verification: Callable[[str, str], bool]
) -> None:
super().__init__(parent)
class FileSelectionList(QWidget):
with (ui_path / "fileselectionlist.ui").open(encoding="utf-8") as uifile:
uic.loadUi(uifile, self)
selectionChanged = pyqtSignal(QVariant)
listCleared = pyqtSignal()
fileColNum = 0
CRFlagColNum = 1
CBLFlagColNum = 2
typeColNum = 3
readonlyColNum = 4
folderColNum = 5
dataColNum = fileColNum
self.config = config
def __init__(self, parent , settings ):
super(FileSelectionList, self).__init__(parent)
reduce_widget_font_size(self.twList)
uic.loadUi(ComicTaggerSettings.getUIFile('fileselectionlist.ui' ), self)
self.settings = settings
self.twList.horizontalHeader().setMinimumSectionSize(50)
self.twList.currentItemChanged.connect(self.current_item_changed_cb)
utils.reduceWidgetFontSize( self.twList )
self.twList.currentItemChanged.connect( self.currentItemChangedCB )
self.currentItem = None
self.setContextMenuPolicy(Qt.ActionsContextMenu)
self.modifiedFlag = False
selectAllAction = QAction("Select All", self)
removeAction = QAction("Remove Selected Items", self)
self.separator = QAction("",self)
self.separator.setSeparator(True)
selectAllAction.setShortcut( 'Ctrl+A' )
removeAction.setShortcut( 'Ctrl+X' )
selectAllAction.triggered.connect(self.selectAll)
removeAction.triggered.connect(self.removeSelection)
self.currentItem = None
self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
self.dirty_flag = False
self.addAction(selectAllAction)
self.addAction(removeAction)
self.addAction(self.separator)
select_all_action = QtWidgets.QAction("Select All", self)
remove_action = QtWidgets.QAction("Remove Selected Items", self)
self.separator = QtWidgets.QAction("", self)
self.separator.setSeparator(True)
def addAppAction( self, action ):
self.insertAction( None , action )
def setModifiedFlag( self, modified ):
self.modifiedFlag = modified
def selectAll( self ):
self.twList.setRangeSelected( QTableWidgetSelectionRange ( 0, 0, self.twList.rowCount()-1, 5 ), True )
select_all_action.setShortcut("Ctrl+A")
remove_action.setShortcut("Ctrl+X")
def deselectAll( self ):
self.twList.setRangeSelected( QTableWidgetSelectionRange ( 0, 0, self.twList.rowCount()-1, 5 ), False )
select_all_action.triggered.connect(self.select_all)
remove_action.triggered.connect(self.remove_selection)
def removeArchiveList( self, ca_list ):
self.twList.setSortingEnabled(False)
for ca in ca_list:
for row in range(self.twList.rowCount()):
row_ca = self.getArchiveByRow( row )
if row_ca == ca:
self.twList.removeRow(row)
break
self.twList.setSortingEnabled(True)
def getArchiveByRow( self, row):
fi = self.twList.item(row, FileSelectionList.dataColNum).data( Qt.UserRole ).toPyObject()
return fi.ca
def getCurrentArchive( self ):
return self.getArchiveByRow( self.twList.currentRow() )
def removeSelection( self ):
row_list = []
for item in self.twList.selectedItems():
if item.column() == 0:
row_list.append(item.row())
self.addAction(select_all_action)
self.addAction(remove_action)
self.addAction(self.separator)
if len(row_list) == 0:
return
if self.twList.currentRow() in row_list:
if not self.modifiedFlagVerification( "Remove Archive",
"If you close this archive, data in the form will be lost. Are you sure?"):
return
row_list.sort()
row_list.reverse()
self.dirty_flag_verification = dirty_flag_verification
self.rar_ro_shown = False
self.twList.currentItemChanged.disconnect( self.currentItemChangedCB )
self.twList.setSortingEnabled(False)
def get_sorting(self) -> tuple[int, int]:
col = self.twList.horizontalHeader().sortIndicatorSection()
order = self.twList.horizontalHeader().sortIndicatorOrder()
return int(col), int(order)
for i in row_list:
self.twList.removeRow(i)
self.twList.setSortingEnabled(True)
self.twList.currentItemChanged.connect( self.currentItemChangedCB )
if self.twList.rowCount() > 0:
self.twList.selectRow(0)
else:
self.listCleared.emit()
def addPathList( self, pathlist ):
filelist = utils.get_recursive_filelist( pathlist )
# we now have a list of files to add
def set_sorting(self, col: int, order: QtCore.Qt.SortOrder) -> None:
self.twList.horizontalHeader().setSortIndicator(col, order)
progdialog = QProgressDialog("", "Cancel", 0, len(filelist), self)
progdialog.setWindowTitle( "Adding Files" )
#progdialog.setWindowModality(Qt.WindowModal)
progdialog.setWindowModality(Qt.ApplicationModal)
progdialog.show()
firstAdded = None
self.twList.setSortingEnabled(False)
for idx,f in enumerate(filelist):
QCoreApplication.processEvents()
if progdialog.wasCanceled():
break
progdialog.setValue(idx)
progdialog.setLabelText(f)
utils.centerWindowOnParent( progdialog )
QCoreApplication.processEvents()
row = self.addPathItem( f )
if firstAdded is None and row is not None:
firstAdded = row
progdialog.close()
if firstAdded is not None:
self.twList.selectRow(firstAdded)
self.twList.setSortingEnabled(True)
# Adjust column size
self.twList.resizeColumnsToContents()
self.twList.setColumnWidth(FileSelectionList.CRFlagColNum, 35)
self.twList.setColumnWidth(FileSelectionList.CBLFlagColNum, 35)
self.twList.setColumnWidth(FileSelectionList.readonlyColNum, 35)
self.twList.setColumnWidth(FileSelectionList.typeColNum, 45)
if self.twList.columnWidth(FileSelectionList.fileColNum) > 250:
self.twList.setColumnWidth(FileSelectionList.fileColNum, 250)
if self.twList.columnWidth(FileSelectionList.folderColNum ) > 200:
self.twList.setColumnWidth(FileSelectionList.folderColNum, 200)
def add_app_action(self, action: QtWidgets.QAction) -> None:
self.insertAction(QtWidgets.QAction(), action)
def isListDupe( self, path ):
r = 0
while r < self.twList.rowCount():
ca = self.getArchiveByRow( r )
if ca.path == path:
return True
r = r + 1
return False
def addPathItem( self, path):
path = unicode( path )
path = os.path.abspath( path )
#print "processing", path
if self.isListDupe(path):
return None
ca = ComicArchive( path, self.settings )
if ca.seemsToBeAComicArchive() :
row = self.twList.rowCount()
self.twList.insertRow( row )
fi = FileInfo( ca )
filename_item = QTableWidgetItem()
folder_item = QTableWidgetItem()
cix_item = FileTableWidgetItem()
cbi_item = FileTableWidgetItem()
readonly_item = FileTableWidgetItem()
type_item = QTableWidgetItem()
filename_item.setFlags(Qt.ItemIsSelectable| Qt.ItemIsEnabled)
filename_item.setData( Qt.UserRole , fi )
self.twList.setItem(row, FileSelectionList.fileColNum, filename_item)
folder_item.setFlags(Qt.ItemIsSelectable| Qt.ItemIsEnabled)
self.twList.setItem(row, FileSelectionList.folderColNum, folder_item)
def set_modified_flag(self, modified: bool) -> None:
self.dirty_flag = modified
type_item.setFlags(Qt.ItemIsSelectable| Qt.ItemIsEnabled)
self.twList.setItem(row, FileSelectionList.typeColNum, type_item)
def select_all(self) -> None:
self.twList.setRangeSelected(QtWidgets.QTableWidgetSelectionRange(0, 0, self.twList.rowCount() - 1, 5), True)
cix_item.setFlags(Qt.ItemIsSelectable| Qt.ItemIsEnabled)
cix_item.setTextAlignment(Qt.AlignHCenter)
self.twList.setItem(row, FileSelectionList.CRFlagColNum, cix_item)
def deselect_all(self) -> None:
self.twList.setRangeSelected(QtWidgets.QTableWidgetSelectionRange(0, 0, self.twList.rowCount() - 1, 5), False)
cbi_item.setFlags(Qt.ItemIsSelectable| Qt.ItemIsEnabled)
cbi_item.setTextAlignment(Qt.AlignHCenter)
self.twList.setItem(row, FileSelectionList.CBLFlagColNum, cbi_item)
def remove_archive_list(self, ca_list: list[ComicArchive]) -> None:
self.twList.setSortingEnabled(False)
current_removed = False
for ca in ca_list:
for row in range(self.twList.rowCount()):
row_ca = self.get_archive_by_row(row)
if row_ca == ca:
if row == self.twList.currentRow():
current_removed = True
self.twList.removeRow(row)
break
self.twList.setSortingEnabled(True)
readonly_item.setFlags(Qt.ItemIsSelectable| Qt.ItemIsEnabled)
readonly_item.setTextAlignment(Qt.AlignHCenter)
self.twList.setItem(row, FileSelectionList.readonlyColNum, readonly_item)
self.updateRow( row )
return row
if self.twList.rowCount() > 0 and current_removed:
# since on a removal, we select row 0, make sure callback occurs if
# we're already there
if self.twList.currentRow() == 0:
self.current_item_changed_cb(self.twList.currentItem(), None)
self.twList.selectRow(0)
elif self.twList.rowCount() <= 0:
self.listCleared.emit()
def updateRow( self, row ):
fi = self.twList.item( row, FileSelectionList.dataColNum ).data( Qt.UserRole ).toPyObject()
def get_archive_by_row(self, row: int) -> ComicArchive | None:
if row >= 0:
ca: ComicArchive = self.twList.item(row, FileSelectionList.dataColNum).data(QtCore.Qt.ItemDataRole.UserRole)
return ca
return None
filename_item = self.twList.item( row, FileSelectionList.fileColNum )
folder_item = self.twList.item( row, FileSelectionList.folderColNum )
cix_item = self.twList.item( row, FileSelectionList.CRFlagColNum )
cbi_item = self.twList.item( row, FileSelectionList.CBLFlagColNum )
type_item = self.twList.item( row, FileSelectionList.typeColNum )
readonly_item = self.twList.item( row, FileSelectionList.readonlyColNum )
def get_current_archive(self) -> ComicArchive | None:
return self.get_archive_by_row(self.twList.currentRow())
item_text = os.path.split(fi.ca.path)[0]
folder_item.setText( item_text )
folder_item.setData( Qt.ToolTipRole, item_text )
def remove_selection(self) -> None:
row_list = []
for item in self.twList.selectedItems():
if item.column() == 0:
row_list.append(item.row())
item_text = os.path.split(fi.ca.path)[1]
filename_item.setText( item_text )
filename_item.setData( Qt.ToolTipRole, item_text )
if len(row_list) == 0:
return
if fi.ca.isZip():
item_text = "ZIP"
elif fi.ca.isRar():
item_text = "RAR"
else:
item_text = ""
type_item.setText( item_text )
type_item.setData( Qt.ToolTipRole, item_text )
if self.twList.currentRow() in row_list:
if not self.dirty_flag_verification(
"Remove Archive", "If you close this archive, data in the form will be lost. Are you sure?"
):
return
row_list.sort()
row_list.reverse()
if fi.ca.hasCIX():
cix_item.setCheckState(Qt.Checked)
cix_item.setData(Qt.UserRole, True)
else:
cix_item.setData(Qt.UserRole, False)
cix_item.setCheckState(Qt.Unchecked)
self.twList.currentItemChanged.disconnect(self.current_item_changed_cb)
self.twList.setSortingEnabled(False)
if fi.ca.hasCBI():
cbi_item.setCheckState(Qt.Checked)
cbi_item.setData(Qt.UserRole, True)
else:
cbi_item.setData(Qt.UserRole, False)
cbi_item.setCheckState(Qt.Unchecked)
for i in row_list:
self.twList.removeRow(i)
if not fi.ca.isWritable():
readonly_item.setCheckState(Qt.Checked)
readonly_item.setData(Qt.UserRole, True)
else:
readonly_item.setData(Qt.UserRole, False)
readonly_item.setCheckState(Qt.Unchecked)
self.twList.setSortingEnabled(True)
self.twList.currentItemChanged.connect(self.current_item_changed_cb)
# Reading these will force them into the ComicArchive's cache
fi.ca.readCIX()
fi.ca.hasCBI()
if self.twList.rowCount() > 0:
# since on a removal, we select row 0, make sure callback occurs if
# we're already there
if self.twList.currentRow() == 0:
self.current_item_changed_cb(self.twList.currentItem(), None)
self.twList.selectRow(0)
else:
self.listCleared.emit()
def getSelectedArchiveList( self ):
ca_list = []
for r in range( self.twList.rowCount() ):
item = self.twList.item(r, FileSelectionList.dataColNum)
if self.twList.isItemSelected(item):
fi = item.data( Qt.UserRole ).toPyObject()
ca_list.append(fi.ca)
def add_path_list(self, pathlist: list[str]) -> None:
filelist = utils.get_recursive_filelist(pathlist)
# we now have a list of files to add
return ca_list
def updateCurrentRow( self ):
self.updateRow( self.twList.currentRow() )
# Prog dialog on Linux flakes out for small range, so scale up
progdialog = QtWidgets.QProgressDialog("", "Cancel", 0, len(filelist), parent=self)
progdialog.setWindowTitle("Adding Files")
progdialog.setWindowModality(QtCore.Qt.WindowModality.WindowModal)
progdialog.setMinimumDuration(300)
center_window_on_parent(progdialog)
def updateSelectedRows( self ):
self.twList.setSortingEnabled(False)
for r in range( self.twList.rowCount() ):
item = self.twList.item(r, FileSelectionList.dataColNum)
if self.twList.isItemSelected(item):
self.updateRow( r )
self.twList.setSortingEnabled(True)
def currentItemChangedCB( self, curr, prev ):
QtCore.QCoreApplication.processEvents()
first_added = None
rar_added_ro = False
self.twList.setSortingEnabled(False)
for idx, f in enumerate(filelist):
QtCore.QCoreApplication.processEvents()
if progdialog.wasCanceled():
break
progdialog.setValue(idx + 1)
progdialog.setLabelText(f)
QtCore.QCoreApplication.processEvents()
row = self.add_path_item(f)
if row is not None:
ca = self.get_archive_by_row(row)
rar_added_ro = bool(ca and ca.archiver.name() == "RAR" and not ca.archiver.is_writable())
if first_added is None and row != -1:
first_added = row
new_idx = curr.row()
old_idx = -1
if prev is not None:
old_idx = prev.row()
#print "old {0} new {1}".format(old_idx, new_idx)
if old_idx == new_idx:
return
# don't allow change if modified
if prev is not None and new_idx != old_idx:
if not self.modifiedFlagVerification( "Change Archive",
"If you change archives now, data in the form will be lost. Are you sure?"):
self.twList.currentItemChanged.disconnect( self.currentItemChangedCB )
self.twList.setCurrentItem( prev )
self.twList.currentItemChanged.connect( self.currentItemChangedCB )
# Need to defer this revert selection, for some reason
QTimer.singleShot(1, self.revertSelection)
return
progdialog.hide()
QtCore.QCoreApplication.processEvents()
fi = self.twList.item( new_idx, FileSelectionList.dataColNum ).data( Qt.UserRole ).toPyObject()
self.selectionChanged.emit( QVariant(fi))
def revertSelection( self ):
self.twList.selectRow( self.twList.currentRow() )
def modifiedFlagVerification( self, title, desc):
if self.modifiedFlag:
reply = QMessageBox.question(self,
self.tr(title),
self.tr(desc),
QMessageBox.Yes, QMessageBox.No )
if reply != QMessageBox.Yes:
return False
return True
# Attempt to use a special checkbox widget in the cell.
# Couldn't figure out how to disable it with "enabled" colors
#w = QWidget()
#cb = QCheckBox(w)
#cb.setCheckState(Qt.Checked)
#layout = QHBoxLayout()
#layout.addWidget( cb )
#layout.setAlignment(Qt.AlignHCenter)
#layout.setMargin(2)
#w.setLayout(layout)
#self.twList.setCellWidget( row, 2, w )
if first_added is not None:
self.twList.selectRow(first_added)
else:
if len(pathlist) == 1 and os.path.isfile(pathlist[0]):
QtWidgets.QMessageBox.information(
self, "File Open", "Selected file doesn't seem to be a comic archive."
)
else:
QtWidgets.QMessageBox.information(self, "File/Folder Open", "No readable comic archives were found.")
if rar_added_ro:
self.rar_ro_message()
self.twList.setSortingEnabled(True)
# Adjust column size
self.twList.resizeColumnsToContents()
self.twList.setColumnWidth(FileSelectionList.MDFlagColNum, 35)
self.twList.setColumnWidth(FileSelectionList.readonlyColNum, 35)
self.twList.setColumnWidth(FileSelectionList.typeColNum, 45)
if self.twList.columnWidth(FileSelectionList.fileColNum) > 250:
self.twList.setColumnWidth(FileSelectionList.fileColNum, 250)
if self.twList.columnWidth(FileSelectionList.folderColNum) > 200:
self.twList.setColumnWidth(FileSelectionList.folderColNum, 200)
def rar_ro_message(self) -> None:
if not self.rar_ro_shown:
if platform.system() == "Windows":
rar_help = windowsRarHelp
elif platform.system() == "Darwin":
rar_help = macRarHelp
else:
rar_help = linuxRarHelp
OptionalMessageDialog.msg_no_checkbox(
self,
"RAR Files are Read-Only",
"It looks like you have opened a RAR/CBR archive,\n"
"however ComicTagger cannot currently write to them without the rar program and are marked read only!\n\n"
f"{rar_help}",
)
self.rar_ro_shown = True
def is_list_dupe(self, path: str) -> bool:
return self.get_current_list_row(path) >= 0
def get_current_list_row(self, path: str) -> int:
for r in range(self.twList.rowCount()):
ca = cast(ComicArchive, self.get_archive_by_row(r))
if str(ca.path) == path:
return r
return -1
def add_path_item(self, path: str) -> int:
path = str(path)
path = os.path.abspath(path)
if self.is_list_dupe(path):
return self.get_current_list_row(path)
ca = ComicArchive(path, str(graphics_path / "nocover.png"))
if ca.seems_to_be_a_comic_archive():
row: int = self.twList.rowCount()
self.twList.insertRow(row)
filename_item = QtWidgets.QTableWidgetItem()
folder_item = QtWidgets.QTableWidgetItem()
md_item = QtWidgets.QTableWidgetItem()
readonly_item = QtWidgets.QTableWidgetItem()
type_item = QtWidgets.QTableWidgetItem()
filename_item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
filename_item.setData(QtCore.Qt.ItemDataRole.UserRole, ca)
self.twList.setItem(row, FileSelectionList.fileColNum, filename_item)
folder_item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, FileSelectionList.folderColNum, folder_item)
type_item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
self.twList.setItem(row, FileSelectionList.typeColNum, type_item)
md_item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
md_item.setTextAlignment(QtCore.Qt.AlignmentFlag.AlignHCenter)
self.twList.setItem(row, FileSelectionList.MDFlagColNum, md_item)
readonly_item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable | QtCore.Qt.ItemFlag.ItemIsEnabled)
readonly_item.setTextAlignment(QtCore.Qt.AlignmentFlag.AlignHCenter)
self.twList.setItem(row, FileSelectionList.readonlyColNum, readonly_item)
self.update_row(row)
return row
return -1
def update_row(self, row: int) -> None:
if row >= 0:
ca: ComicArchive = self.twList.item(row, FileSelectionList.dataColNum).data(QtCore.Qt.ItemDataRole.UserRole)
filename_item = self.twList.item(row, FileSelectionList.fileColNum)
folder_item = self.twList.item(row, FileSelectionList.folderColNum)
md_item = self.twList.item(row, FileSelectionList.MDFlagColNum)
type_item = self.twList.item(row, FileSelectionList.typeColNum)
readonly_item = self.twList.item(row, FileSelectionList.readonlyColNum)
item_text = os.path.split(ca.path)[0]
folder_item.setText(item_text)
folder_item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item_text = os.path.split(ca.path)[1]
filename_item.setText(item_text)
filename_item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
item_text = ca.archiver.name()
type_item.setText(item_text)
type_item.setData(QtCore.Qt.ItemDataRole.ToolTipRole, item_text)
styles = ", ".join(x for x in ca.get_supported_metadata() if ca.has_metadata(x))
md_item.setText(styles)
if not ca.is_writable():
readonly_item.setCheckState(QtCore.Qt.CheckState.Checked)
readonly_item.setData(QtCore.Qt.ItemDataRole.UserRole, True)
readonly_item.setText(" ")
else:
readonly_item.setData(QtCore.Qt.ItemDataRole.UserRole, False)
readonly_item.setCheckState(QtCore.Qt.CheckState.Unchecked)
# This is an nbsp; it sorts after a space ' '
readonly_item.setText("\xa0")
def get_selected_archive_list(self) -> list[ComicArchive]:
ca_list: list[ComicArchive] = []
for r in range(self.twList.rowCount()):
item = self.twList.item(r, FileSelectionList.dataColNum)
if item.isSelected():
ca: ComicArchive = item.data(QtCore.Qt.ItemDataRole.UserRole)
ca_list.append(ca)
return ca_list
def update_current_row(self) -> None:
self.update_row(self.twList.currentRow())
def update_selected_rows(self) -> None:
self.twList.setSortingEnabled(False)
for r in range(self.twList.rowCount()):
item = self.twList.item(r, FileSelectionList.dataColNum)
if item.isSelected():
self.update_row(r)
self.twList.setSortingEnabled(True)
def current_item_changed_cb(self, curr: QtCore.QModelIndex | None, prev: QtCore.QModelIndex | None) -> None:
if curr is not None:
new_idx = curr.row()
old_idx = -1
if prev is not None:
old_idx = prev.row()
if old_idx == new_idx:
return
# don't allow change if modified
if prev is not None and new_idx != old_idx:
if not self.dirty_flag_verification(
"Change Archive", "If you change archives now, data in the form will be lost. Are you sure?"
):
self.twList.currentItemChanged.disconnect(self.current_item_changed_cb)
self.twList.setCurrentItem(prev)
self.twList.currentItemChanged.connect(self.current_item_changed_cb)
# Need to defer this revert selection, for some reason
QtCore.QTimer.singleShot(1, self.revert_selection)
return
fi = self.twList.item(new_idx, FileSelectionList.dataColNum).data(QtCore.Qt.ItemDataRole.UserRole)
self.selectionChanged.emit(QtCore.QVariant(fi))
def revert_selection(self) -> None:
self.twList.selectRow(self.twList.currentRow())
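A rough construction sketch for the rewritten widget (illustrative only; the parent widget, the ct_ns config object, and the paths are placeholders supplied by the application, and the import path is assumed):
import sys
from PyQt5 import QtWidgets
from comictaggerlib.fileselectionlist import FileSelectionList  # module path assumed
def confirm_loss(title: str, desc: str) -> bool:
    # placeholder for the main window's "unsaved changes" prompt
    return True
app = QtWidgets.QApplication(sys.argv)
parent = QtWidgets.QWidget()
config = ...  # a ct_ns settings namespace from comictaggerlib.ctsettings (not constructed here)
fsl = FileSelectionList(parent, config, dirty_flag_verification=confirm_loss)
fsl.add_path_list(["/path/to/comics"])  # recursively scans for readable archives
print(fsl.get_current_archive())        # ComicArchive | None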


@ -1,316 +0,0 @@
"""
A python class for internal metadata storage
The goal of this class is to handle ALL the data that might come from various
tagging schemes and databases, such as ComicVine or GCD. This makes conversion
possible, however lossy it might be
"""
"""
Copyright 2012 Anthony Beville
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import utils
# These page info classes are exactly the same as the CIX scheme, since it's unique
class PageType:
FrontCover = "FrontCover"
InnerCover = "InnerCover"
Roundup = "Roundup"
Story = "Story"
Advertisment = "Advertisment"
Editorial = "Editorial"
Letters = "Letters"
Preview = "Preview"
BackCover = "BackCover"
Other = "Other"
Deleted = "Deleted"
"""
class PageInfo:
Image = 0
Type = PageType.Story
DoublePage = False
ImageSize = 0
Key = ""
ImageWidth = 0
ImageHeight = 0
"""
class GenericMetadata:
def __init__(self):
self.isEmpty = True
self.tagOrigin = None
self.series = None
self.issue = None
self.title = None
self.publisher = None
self.month = None
self.year = None
self.day = None
self.issueCount = None
self.volume = None
self.genre = None
self.language = None # 2 letter iso code
self.comments = None # use same way as Summary in CIX
self.volumeCount = None
self.criticalRating = None
self.country = None
self.alternateSeries = None
self.alternateNumber = None
self.alternateCount = None
self.imprint = None
self.notes = None
self.webLink = None
self.format = None
self.manga = None
self.blackAndWhite = None
self.pageCount = None
self.maturityRating = None
self.storyArc = None
self.seriesGroup = None
self.scanInfo = None
self.characters = None
self.teams = None
self.locations = None
self.credits = list()
self.tags = list()
self.pages = list()
# Some CoMet-only items
self.price = None
self.isVersionOf = None
self.rights = None
self.identifier = None
self.lastMark = None
self.coverImage = None
def overlay( self, new_md ):
# Overlay a metadata object on this one
# that is, when the new object has non-None
# values, over-write them to this one
def assign( cur, new ):
if new is not None:
if type(new) == str and len(new) == 0:
setattr(self, cur, None)
else:
setattr(self, cur, new)
if not new_md.isEmpty:
self.isEmpty = False
assign( 'series', new_md.series )
assign( "issue", new_md.issue )
assign( "issueCount", new_md.issueCount )
assign( "title", new_md.title )
assign( "publisher", new_md.publisher )
assign( "day", new_md.day )
assign( "month", new_md.month )
assign( "year", new_md.year )
assign( "volume", new_md.volume )
assign( "volumeCount", new_md.volumeCount )
assign( "genre", new_md.genre )
assign( "language", new_md.language )
assign( "country", new_md.country )
assign( "criticalRating", new_md.criticalRating )
assign( "alternateSeries", new_md.alternateSeries )
assign( "alternateNumber", new_md.alternateNumber )
assign( "alternateCount", new_md.alternateCount )
assign( "imprint", new_md.imprint )
assign( "webLink", new_md.webLink )
assign( "format", new_md.format )
assign( "manga", new_md.manga )
assign( "blackAndWhite", new_md.blackAndWhite )
assign( "maturityRating", new_md.maturityRating )
assign( "storyArc", new_md.storyArc )
assign( "seriesGroup", new_md.seriesGroup )
assign( "scanInfo", new_md.scanInfo )
assign( "characters", new_md.characters )
assign( "teams", new_md.teams )
assign( "locations", new_md.locations )
assign( "comments", new_md.comments )
assign( "notes", new_md.notes )
assign( "price", new_md.price )
assign( "isVersionOf", new_md.isVersionOf )
assign( "rights", new_md.rights )
assign( "identifier", new_md.identifier )
assign( "lastMark", new_md.lastMark )
self.overlayCredits( new_md.credits )
# TODO
# not sure if the tags and pages should be broken down, or treated
# as whole lists....
# For now, go the easy route, where any overlay
# value wipes out the whole list
if len(new_md.tags) > 0:
assign( "tags", new_md.tags )
if len(new_md.pages) > 0:
assign( "pages", new_md.pages )
def overlayCredits( self, new_credits ):
for c in new_credits:
if c.has_key('primary') and c['primary']:
primary = True
else:
primary = False
# Remove credit role if person is blank
if c['person'] == "":
for r in reversed(self.credits):
if r['role'].lower() == c['role'].lower():
self.credits.remove(r)
# otherwise, add it!
else:
self.addCredit( c['person'], c['role'], primary )
def setDefaultPageList( self, count ):
# generate a default page list, with the first page marked as the cover
for i in range(count):
page_dict = dict()
page_dict['Image'] = str(i)
if i == 0:
page_dict['Type'] = PageType.FrontCover
self.pages.append( page_dict )
def getArchivePageIndex( self, pagenum ):
# convert the displayed page number to the page index of the file in the archive
if pagenum < len( self.pages ):
return int( self.pages[pagenum]['Image'] )
else:
return 0
def getCoverPageIndexList( self ):
# return a list of archive page indices of cover pages
coverlist = []
for p in self.pages:
if 'Type' in p and p['Type'] == PageType.FrontCover:
coverlist.append( int(p['Image']))
if len(coverlist) == 0:
coverlist.append( 0 )
return coverlist
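A quick illustration of the page-list helpers above (legacy API, illustrative only):
md = GenericMetadata()
md.setDefaultPageList(3)           # pages 0..2, with page 0 typed as FrontCover
print(md.getCoverPageIndexList())  # [0]
print(md.getArchivePageIndex(2))   # 2
print(md.getArchivePageIndex(5))   # out of range, falls back to 0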
def addCredit( self, person, role, primary = False ):
credit = dict()
credit['person'] = person
credit['role'] = role
if primary:
credit['primary'] = primary
# look to see if it's not already there...
found = False
for c in self.credits:
if ( c['person'].lower() == person.lower() and
c['role'].lower() == role.lower() ):
# no need to add it. just adjust the "primary" flag as needed
c['primary'] = primary
found = True
break
if not found:
self.credits.append(credit)
def __str__( self ):
vals = []
if self.isEmpty:
return "No metadata"
def add_string( tag, val ):
if val is not None and u"{0}".format(val) != "":
vals.append( (tag, val) )
def add_attr_string( tag ):
val = getattr(self,tag)
add_string( tag, getattr(self,tag) )
add_attr_string( "series" )
add_attr_string( "issue" )
add_attr_string( "issueCount" )
add_attr_string( "title" )
add_attr_string( "publisher" )
add_attr_string( "year" )
add_attr_string( "month" )
add_attr_string( "day" )
add_attr_string( "volume" )
add_attr_string( "volumeCount" )
add_attr_string( "genre" )
add_attr_string( "language" )
add_attr_string( "country" )
add_attr_string( "criticalRating" )
add_attr_string( "alternateSeries" )
add_attr_string( "alternateNumber" )
add_attr_string( "alternateCount" )
add_attr_string( "imprint" )
add_attr_string( "webLink" )
add_attr_string( "format" )
add_attr_string( "manga" )
add_attr_string( "price" )
add_attr_string( "isVersionOf" )
add_attr_string( "rights" )
add_attr_string( "identifier" )
add_attr_string( "lastMark" )
if self.blackAndWhite:
add_attr_string( "blackAndWhite" )
add_attr_string( "maturityRating" )
add_attr_string( "storyArc" )
add_attr_string( "seriesGroup" )
add_attr_string( "scanInfo" )
add_attr_string( "characters" )
add_attr_string( "teams" )
add_attr_string( "locations" )
add_attr_string( "comments" )
add_attr_string( "notes" )
add_string( "tags", utils.listToString( self.tags ) )
for c in self.credits:
primary = ""
if c.has_key('primary') and c['primary']:
primary = " [P]"
add_string( "credit", c['role']+": "+c['person'] + primary)
# find the longest field name
flen = 0
for i in vals:
flen = max( flen, len(i[0]) )
flen += 1
#format the data nicely
outstr = ""
fmt_str = u"{0: <" + str(flen) + "} {1}\n"
for i in vals:
outstr += fmt_str.format( i[0]+":", i[1] )
return outstr


@ -0,0 +1,5 @@
from __future__ import annotations
import importlib.resources
graphics_path = importlib.resources.files(__package__)
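The new graphics helper resolves images bundled inside the comictaggerlib.graphics package; the sketch below mirrors how it is used in the new fileselectionlist code ("nocover.png" is the bundled placeholder image referenced there):
from comictaggerlib.graphics import graphics_path
# Build a filesystem path to a packaged resource.
nocover = str(graphics_path / "nocover.png")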

Binary file not shown (before: 15 KiB, after: 13 KiB).


@ -0,0 +1,102 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Generator: Adobe Illustrator 19.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg
version="1.1"
id="Capa_1"
x="0px"
y="0px"
viewBox="0 0 469.333 469.333"
style="enable-background:new 0 0 469.333 469.333;"
xml:space="preserve"
sodipodi:docname="eye.svg"
inkscape:version="1.2.2 (b0a8486541, 2022-12-01)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><defs
id="defs45" /><sodipodi:namedview
id="namedview43"
pagecolor="#505050"
bordercolor="#eeeeee"
borderopacity="1"
inkscape:showpageshadow="0"
inkscape:pageopacity="0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#505050"
showgrid="false"
inkscape:zoom="2.1882117"
inkscape:cx="234.6665"
inkscape:cy="234.6665"
inkscape:window-width="2560"
inkscape:window-height="1361"
inkscape:window-x="0"
inkscape:window-y="42"
inkscape:window-maximized="1"
inkscape:current-layer="Capa_1" />
<g
id="g10"
style="fill:#333333">
<g
id="g8"
style="fill:#333333">
<g
id="g6"
style="fill:#333333">
<path
d="M234.667,170.667c-35.307,0-64,28.693-64,64s28.693,64,64,64s64-28.693,64-64S269.973,170.667,234.667,170.667z"
id="path2"
style="fill:#333333" />
<path
d="M234.667,74.667C128,74.667,36.907,141.013,0,234.667c36.907,93.653,128,160,234.667,160 c106.773,0,197.76-66.347,234.667-160C432.427,141.013,341.44,74.667,234.667,74.667z M234.667,341.333 c-58.88,0-106.667-47.787-106.667-106.667S175.787,128,234.667,128s106.667,47.787,106.667,106.667 S293.547,341.333,234.667,341.333z"
id="path4"
style="fill:#333333" />
</g>
</g>
</g>
<g
id="g12">
</g>
<g
id="g14">
</g>
<g
id="g16">
</g>
<g
id="g18">
</g>
<g
id="g20">
</g>
<g
id="g22">
</g>
<g
id="g24">
</g>
<g
id="g26">
</g>
<g
id="g28">
</g>
<g
id="g30">
</g>
<g
id="g32">
</g>
<g
id="g34">
</g>
<g
id="g36">
</g>
<g
id="g38">
</g>
<g
id="g40">
</g>
</svg>

(new image, 2.1 KiB)


@ -0,0 +1,106 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Generator: Adobe Illustrator 19.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg
version="1.1"
id="Capa_1"
x="0px"
y="0px"
viewBox="0 0 469.44 469.44"
style="enable-background:new 0 0 469.44 469.44;"
xml:space="preserve"
sodipodi:docname="hidden.svg"
inkscape:version="1.2.2 (b0a8486541, 2022-12-01)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><defs
id="defs47" /><sodipodi:namedview
id="namedview45"
pagecolor="#505050"
bordercolor="#eeeeee"
borderopacity="1"
inkscape:showpageshadow="0"
inkscape:pageopacity="0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#505050"
showgrid="false"
inkscape:zoom="2.187713"
inkscape:cx="234.72"
inkscape:cy="234.72"
inkscape:window-width="2560"
inkscape:window-height="1361"
inkscape:window-x="0"
inkscape:window-y="42"
inkscape:window-maximized="1"
inkscape:current-layer="Capa_1" />
<g
id="g12"
style="fill:#333333">
<g
id="g10"
style="fill:#333333">
<g
id="g8"
style="fill:#333333">
<path
d="M231.147,160.373l67.2,67.2l0.32-3.52c0-35.307-28.693-64-64-64L231.147,160.373z"
id="path2"
style="fill:#333333" />
<path
d="M234.667,117.387c58.88,0,106.667,47.787,106.667,106.667c0,13.76-2.773,26.88-7.573,38.933l62.4,62.4 c32.213-26.88,57.6-61.653,73.28-101.333c-37.013-93.653-128-160-234.773-160c-29.867,0-58.453,5.333-85.013,14.933l46.08,45.973 C207.787,120.267,220.907,117.387,234.667,117.387z"
id="path4"
style="fill:#333333" />
<path
d="M21.333,59.253l48.64,48.64l9.707,9.707C44.48,145.12,16.64,181.707,0,224.053c36.907,93.653,128,160,234.667,160 c33.067,0,64.64-6.4,93.547-18.027l9.067,9.067l62.187,62.293l27.2-27.093L48.533,32.053L21.333,59.253z M139.307,177.12 l32.96,32.96c-0.96,4.587-1.6,9.173-1.6,13.973c0,35.307,28.693,64,64,64c4.8,0,9.387-0.64,13.867-1.6l32.96,32.96 c-14.187,7.04-29.973,11.307-46.827,11.307C175.787,330.72,128,282.933,128,224.053C128,207.2,132.267,191.413,139.307,177.12z"
id="path6"
style="fill:#333333" />
</g>
</g>
</g>
<g
id="g14">
</g>
<g
id="g16">
</g>
<g
id="g18">
</g>
<g
id="g20">
</g>
<g
id="g22">
</g>
<g
id="g24">
</g>
<g
id="g26">
</g>
<g
id="g28">
</g>
<g
id="g30">
</g>
<g
id="g32">
</g>
<g
id="g34">
</g>
<g
id="g36">
</g>
<g
id="g38">
</g>
<g
id="g40">
</g>
<g
id="g42">
</g>
</svg>

(new image, 2.6 KiB)

Binary file not shown (before: 6.5 KiB, after: 4.2 KiB).

Some files were not shown because too many files have changed in this diff.