>>20980350
>It knows some things based on rules but unlike a human it won't realize if it messes something up and the only way to correct that is a coder making more rules for the AI to follow
That was the old way of doing it.
With "Machine learning", the instructions are generally "how to learn the thing" and the machine learns and writes it's own rules
>The AI knows what a mirror is and knows that there should be a reflection but doesn't always understand the posing and that the woman in the mirror should be the same as the other one in the picture
Older image models couldn't do reflections on water; new ones can. I believe this is an artifact of the limit on how much information can be stored in only 12 billion numbers (the model's parameters).
>>20980535
>do you think there is a lot more development that can happen with them
Yes.
1) datasets
Large, well-curated datasets are very few, and the ones we do have tend to be narrow in the scope of information they contain. GPT-2 and other early models took Reddit, Discord, Wikipedia, 4chan, etc., threw it all together, and something not-unbearable came out. But "garbage in, garbage out": higher-quality input datasets will lead to better systems with technology and methods that already exist - no extra innovation needed.
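Curation at that scale mostly means cheap heuristic filters run over billions of documents. A hedged sketch (the specific heuristics and thresholds here are invented for illustration, not taken from any real pipeline):

```python
# Crude quality filter of the kind used when curating web text
# for training. All thresholds below are made-up illustrations.

def looks_clean(doc: str) -> bool:
    words = doc.split()
    if len(words) < 5:                          # too short to be useful
        return False
    if len(set(words)) / len(words) < 0.3:      # heavily repetitive (e.g. SEO spam)
        return False
    ascii_ratio = sum(c.isascii() for c in doc) / len(doc)
    if ascii_ratio < 0.8:                       # likely encoding junk
        return False
    return True

corpus = [
    "click here click here click here click here click here click",
    "The mirror reflects the scene because light bounces off its surface.",
    "a b",
]
curated = [d for d in corpus if looks_clean(d)]  # only the second doc survives
```

Filters like this are blunt, which is part of why curated datasets stay narrow: aggressive rules throw out good rare text along with the garbage.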
2) hardware
Current AI systems are being trained on general-purpose hardware at best (some are trained on hardware built specifically for video games, i.e. GPUs). Developments in purpose-built hardware will complement and accelerate the learning architectures that run on them. Such devices, however, would likely become obsolete whenever the next big breakthrough happens, and new hardware would be needed.
3) good research takes time
AI-related white papers are not slowing down; they are coming out at an accelerating rate. It seems inevitable that some of them contain the sparks of good ideas. If it doesn't happen immediately, I don't think that's too discouraging, as doing things carefully - "the right way" - is sometimes a slow process.