>>3803985
I'm not that anon, but you're not entirely correct.
Pulling the L-channel from the demosaic'd image gives you luminance values computed from interpolated image pixels. Since the Bayer filter passes only one color band per sensor pixel, demosaicing has to estimate the two missing channels (G & B where the site recorded R, R & B where it recorded G, R & G where it recorded B) from surrounding sensor pixels. The L-channel therefore ends up built from one recorded measurement and two estimated ones per image pixel, and the quality of those estimates depends on the demosaicing algorithm used.
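To make the "one recorded, two guessed" point concrete, here's a rough Python sketch of plain bilinear demosaicing (assuming NumPy/SciPy, an RGGB layout, and Rec.709 luma as a stand-in for the L-channel; real demosaicers are far more sophisticated than this):

import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    # Demosaic a single-channel RGGB Bayer mosaic into RGB.
    # For every output pixel, exactly one channel is a real measurement;
    # the other two are interpolated from neighbouring photosites.
    h, w = mosaic.shape
    # Masks marking which photosites carry which color filter (RGGB).
    r_mask = np.zeros((h, w), bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    # Scatter the measured samples into sparse per-channel planes.
    r = np.where(r_mask, mosaic, 0.0)
    g = np.where(g_mask, mosaic, 0.0)
    b = np.where(b_mask, mosaic, 0.0)
    # Bilinear kernels: green lives on a quincunx grid, red/blue on a
    # rectangular grid at quarter density.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    return np.stack([convolve(r, k_rb, mode='mirror'),
                     convolve(g, k_g, mode='mirror'),
                     convolve(b, k_rb, mode='mirror')], axis=-1)

# Toy scene, mosaiced the way an RGGB sensor would sample it.
rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, (8, 8, 3))
bayer = np.zeros((8, 8))
bayer[0::2, 0::2] = scene[0::2, 0::2, 0]  # R photosites
bayer[0::2, 1::2] = scene[0::2, 1::2, 1]  # G photosites
bayer[1::2, 0::2] = scene[1::2, 0::2, 1]  # G photosites
bayer[1::2, 1::2] = scene[1::2, 1::2, 2]  # B photosites

rgb = bilinear_demosaic(bayer)
luma = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luma per pixel

At every pixel, two of the three values feeding that last luma line came out of convolve(), not off the sensor; that's exactly the one-recorded-two-guessed construction described above.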
It's also because of the Bayer filter's filtering that you cannot pull an L-channel before demosaicing: each raw pixel represents only the fraction of the light its color filter passed, not the total amount that would have reached the sensor had there been no Bayer filter. For a neutral grey patch, for example, a green-filtered photosite records only the green portion of the incoming light, so its raw value sits well below the true luminance at that spot.
Neither can you scale a nondemosaic'd image by half, binning each 2x2 quad into one RGB pixel, in an attempt to recover the other two channels' information: you would be combining light samples that were recorded at four different sensor locations, so the channels of each output pixel no longer describe the same point in the scene.
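Here's a sketch of that half-scale "superpixel" binning (same RGGB assumption as above; the helper name is just for illustration), showing how each output pixel fuses samples from four distinct photosites:

import numpy as np

def superpixel_bin(mosaic):
    # Collapse each 2x2 RGGB quad into one RGB pixel at half resolution.
    # No interpolation happens, but the R, G and B values of every output
    # pixel were recorded at four distinct photosites (up to one sensor
    # pixel apart), so they don't describe the same point in the scene.
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0  # average the two G sites
    b = mosaic[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

Binning sidesteps the guessing, but it trades away resolution and bakes in a spatial misalignment between the channels.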
Of course this is more scrutiny than most people's use-case calls for; demosaicing algorithms are sophisticated and the technology is capable. That's why the method in
>>3803571 would suffice.
However, keep in mind that camera sensor technology descends from the technology used to photograph stars and nebulae. Whether you're photographing your pet, celestial bodies, or anything in between, it's good to be aware of gotchas like these at whatever level of photography and color consistency you're working at.