I just tried Google Gemini in Bard. The responses seem quite fast and snappy, and the output quality seems like a great upgrade over the older LaMDA; responses are also considerably faster than ChatGPT's. Until now ChatGPT was the better model, but now the two seem quite comparable. #google #openai
Ilya already said that if you train models on the same dataset for long enough, they will all converge to approximately the same point.
It's not as simple as "train a model". A lot of the problem-solving ability and performance comes from the systems outside the model: planning and execution of plans, external tools, and strategies for evaluating and considering multiple candidate responses.
Not to mention the oversimplification here. Do you really think the models are trained on the same datasets?
How do I access it in Bard?
It's on by default, but only rolled out to a subset of users
What did you ask to determine Gemini is better than ChatGPT?
Indeed. Tested on Bard and results were fairly good.
How do you tell if you're using Gemini? I just asked Bard, and it didn't know.
Lots of people are saying it sucks and gets basic questions wrong. Pretty poor state of affairs that it's arguably not even better than ChatGPT 3.5, which has been out for over a year.
Maybe true. I just use it for basic queries, but one thing I noticed: it's pretty good at solving simple mathematics compared to GPT-3.5.
Can confirm