>>48083245
>Robust Archiver
>@NAME
While I can see this issue, I believe it to be rare and easily manageable on the user's end.
>Warnings suppressed
Legacy from prior versions; it might be fine to allow them again, but Byte/No Conn errors still pop up regardless.
>Member streams
That's actually the interesting part: this catches member-only streams better than /live or /members, since it can grab the ones that were started as member-only without needing to sift through text post entries. They actually populate in the /streams tab.
>Soundpost Recombiner
>Lazy version is sloppy
Sorta the purpose, it's a duct-tape job. The lazy version exists for repost-ability on forums that allow sound-containing webms, while the archive version is meant strictly for archival purposes.
>AVC/h264
I did not. Part of the reason I'm formally putting the code up there in tools is so I can workshop it with y'all in a way that's more easily legible, rather than throwing something together for a single person's request and then letting it rot with time.
>Benchmarks
Yeah, originally they were there for me to compare filesize and encoding length across methods for GIFs. I left them in since I think it's neat to know how much computational time each method spent converting a file.
>Soundpost
>Increase verbosity/stripping/error handling
I agree. I should also abuse the fact that FFprobe was already run in an earlier part of the code to give examples relative to the input file itself. I could also create some default inputs, like for CRF, plus actual error handling to stop malformed inputs from crashing the script and let the user re-attempt the problem section.
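The defaults-plus-retry idea above could look something like this minimal sketch (the helper name, prompt text, and CRF range are my assumptions, not the script's actual code):

```python
# Hypothetical helper: prompt with a default, validate, and let the user
# re-attempt instead of crashing on malformed input.
def ask_int(prompt, default, lo, hi):
    """Keep asking until the user gives a valid integer in [lo, hi],
    or presses Enter to accept the default."""
    while True:
        raw = input(f"{prompt} [{default}]: ").strip()
        if not raw:
            return default  # empty input -> take the default
        try:
            val = int(raw)
        except ValueError:
            print("Not a number, try again.")
            continue
        if lo <= val <= hi:
            return val
        print(f"Out of range ({lo}-{hi}), try again.")

# Example use: crf = ask_int("CRF", default=30, lo=0, hi=63)
```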
>Crop then scale
That was the intent, since I wanted it straightforward with pixels rather than in*scalar for positioning and size; someone advanced enough to reason those out would probably just run ffmpeg on their own. However, this ties in with the first point: increasing the verbosity can teach them more advanced methods while making it easier overall.
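For reference, a pixel-based crop-then-scale chain can be built like this (helper name and the example values are mine; the real script's internals may differ):

```python
# Hypothetical helper: build a crop-then-scale -vf chain from plain pixel
# values. w,h,x,y describe the crop rectangle; out_w/out_h the final size.
def crop_scale_filter(w, h, x, y, out_w, out_h):
    # Order matters: crop first so scale operates on the region of interest.
    return f"crop={w}:{h}:{x}:{y},scale={out_w}:{out_h}"

# Passed to ffmpeg as e.g.:
#   ffmpeg -i in.mp4 -vf "crop=1280:720:0:0,scale=640:360" out.webm
```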
>Padding
This is actually because of the way I scan the video in the first step to speed up the process. If you scan through the video while looking to encode the segment, it takes a lot more time to reach the segment accurately. So I have it grab the timestamped zone with padding first (to limit the effect of I/P/B-frame position at the intended clip), then encode only the segment of interest into the output.
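The two-step grab-then-encode described above can be sketched as a pair of command builders (the 5-second pad, filenames, and function names are assumptions for illustration):

```python
PAD = 5  # seconds of padding around the clip to dodge I/P/B-frame placement

def grab_cmd(src, start, end, tmp="padded.mkv"):
    # Step 1: fast, keyframe-imprecise stream copy of the padded window
    # (-ss before -i seeks by keyframe; -c copy avoids any re-encode).
    dur = (end - start) + 2 * PAD
    return ["ffmpeg", "-y", "-ss", str(max(0, start - PAD)), "-i", src,
            "-t", str(dur), "-c", "copy", tmp]

def encode_cmd(tmp, start, end, out="clip.webm"):
    # Step 2: accurate trim inside the small padded file; only this
    # segment of interest gets re-encoded into the output.
    return ["ffmpeg", "-y", "-ss", str(PAD), "-i", tmp,
            "-t", str(end - start), out]
```

Run with e.g. `subprocess.run(grab_cmd("vod.mp4", 10, 20), check=True)`.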
>Constrained/Constant Quality
This decision came after many test encodes comparing quality/size, where I found Constant Quality at a higher CRF would outperform, keeping visual quality at max and filesize at min. That's actually why I suggest 30: comparing text quality and the moments of large movement visually, Constant Quality at 30 was identical to Constrained at 20. Remember we're working with OBS streams where the quality is focused on pass-ability and low bitrate anyway; if we were talking about 4K real-life videography I would definitely be concerned about the visual distortion these settings introduce. Swapping to the HQ flags you suggest is something I've not tested, but am willing to implement.
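For anyone following along, the two rate-control modes being compared map to libvpx-vp9 flags roughly like this (assuming VP9 webm output; the bitrate cap is an example value, not what the script uses):

```python
# In libvpx-vp9, -b:v 0 selects pure constant-quality mode, while a
# nonzero -b:v turns the same CRF into constrained quality (bitrate-capped).
def constant_quality(crf=30):
    return ["-c:v", "libvpx-vp9", "-crf", str(crf), "-b:v", "0"]

def constrained_quality(crf=20, cap="1M"):
    return ["-c:v", "libvpx-vp9", "-crf", str(crf), "-b:v", cap]
```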
>Why MKV middle-man?
Remnant of testing. It could probably be shortened further by looking more into -ss/-to scanning and doing something like:
>ffmpeg -ss START-5 -i input.vod -ss 5 -t END [video flags] -an -pass 1 -f null NUL
>ffmpeg [above stuff] -pass 2 -y out.webm -c:a copy -y out.aac/opus
I'll take any specific code alterations anyone thinks would be of benefit; either catbox the .py or pastebin it, and I'll diff them, do personal testing, discuss the results, and revise. I'm not attached to my code; it's generally for the benefit of /who/, to keep quality higher and help people archive without reliance on others.
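A cleaned-up sketch of that two-pass idea (flag placement, padding, and names are my assumptions; the combined fast `-ss` before `-i` plus precise `-ss` after it is the trick being described, NUL is the Windows null sink, audio would be handled separately as in the greentext):

```python
import os

# Null sink differs per platform: NUL on Windows, /dev/null elsewhere.
NULL_SINK = "NUL" if os.name == "nt" else "/dev/null"

def two_pass(src, start, end, out="out.webm", pad=5):
    # Fast keyframe seek to (start - pad), then a precise output-side
    # seek of `pad` seconds to land exactly on the clip start.
    common = ["ffmpeg", "-y", "-ss", str(max(0, start - pad)), "-i", src,
              "-ss", str(pad), "-t", str(end - start), "-an"]
    pass1 = common + ["-pass", "1", "-f", "null", NULL_SINK]  # analysis only
    pass2 = common + ["-pass", "2", out]                      # real encode
    return pass1, pass2
```

This skips the MKV intermediate entirely: pass 1 writes nothing but the stats file, pass 2 produces the webm directly.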