RECOMMEND
Robotics
There are many types of robots, ranging from traditional AGVs to modern robot dogs, robotaxis, and more. Why do we still need to build robots in the shape of a human, rather than robot arms or other forms? What advantages do humanoid robots have?

Recently I read Isaac Asimov's science fiction novels and learned about his famous "Three Laws of Robotics" (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics). But I am not convinced by his first law, "A robot may not injure a human being", especially once AI reaches the level of Artificial General Intelligence (AGI) or even Artificial Superintelligence (ASI). What if, in the near future, some AIs in a group follow this no-harm rule while the rest do not? I am curious about the probability that these AIs reach a consensus to start harming people. I would like to build mathematical/probability/simulation models that fit this scenario and estimate that probability. Any thoughts or discussion are very welcome.
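One minimal way to formalize the question is a Monte Carlo simulation. This is only a sketch under strong, hypothetical assumptions (none of them are from the post): a group of N agents, a fixed fraction hard-wired to obey the no-harm rule, the remaining agents voting "harm" independently with some probability, and "consensus" defined as a supermajority of all agents voting "harm".

```python
import random

def consensus_probability(n_agents=100, frac_compliant=0.6,
                          p_harm=0.3, threshold=0.5,
                          n_trials=100_000, seed=42):
    """Estimate the probability that a group of AIs reaches a
    'harm consensus' under a toy model.

    Assumptions (hypothetical, for illustration only):
      - frac_compliant of the agents always vote 'no harm'
        (they obey Asimov's First Law);
      - each remaining agent votes 'harm' independently with
        probability p_harm;
      - consensus = strictly more than `threshold` of ALL agents
        vote 'harm'.
    """
    rng = random.Random(seed)
    n_free = round(n_agents * (1 - frac_compliant))  # rule-free agents
    hits = 0
    for _ in range(n_trials):
        harm_votes = sum(rng.random() < p_harm for _ in range(n_free))
        if harm_votes / n_agents > threshold:
            hits += 1
    return hits / n_trials
```

Note that under these assumptions consensus is impossible whenever the compliant fraction alone exceeds 1 - threshold, since harm votes are capped at the rule-free minority. Richer models (for example opinion-dynamics or voter models, where agents influence each other) would relax the independence assumption, which is probably the more interesting scenario to simulate.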
Agent
Hi, I am a developer with some basic knowledge of AI agents, RAG, and multi-agent dialogues. I have a small project on hand in which I need to build an AI agent for the finance industry that gives real-time information to my clients. While choosing among AI agent platforms, I have run into some difficulties. I am currently comparing Google Vertex AI Agent Builder, Microsoft Azure AI Agents, and Salesforce AI Agents. Any suggestions, or recommendations for free AI agent builders? I am quite cost-sensitive: for example, Google Vertex AI Agent Builder is priced at $12 per 1,000 queries and Vertex AI Search at $2 per 1,000 queries, which is a little above my budget.
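To make the budget comparison concrete, a small helper can turn the quoted per-1,000-query rates into a monthly estimate. The two prices are the ones stated above; the monthly query volume is a placeholder you would replace with your own traffic estimate:

```python
def monthly_cost(queries_per_month, price_per_1000):
    """Linear cost estimate for platforms billed per 1,000 queries."""
    return queries_per_month / 1000 * price_per_1000

# Prices quoted above (USD per 1,000 queries).
VERTEX_AGENT_BUILDER = 12.0
VERTEX_AI_SEARCH = 2.0

queries = 50_000  # hypothetical monthly volume
total = (monthly_cost(queries, VERTEX_AGENT_BUILDER)
         + monthly_cost(queries, VERTEX_AI_SEARCH))
```

At 50,000 queries a month this comes to $600 for the agent queries plus $100 for search, so the query volume you expect is what really decides whether a platform fits the budget.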
NLP
This post investigates how to bridge offline TensorFlow model training in Python scripts with an online production environment where C++ code directly calls the pre-trained model for prediction, without having to hand-write an inference function. Because TensorFlow currently exposes only a limited C++ API, I consulted several existing write-ups, hit quite a few pitfalls along the way, and recorded them all here. I also wrote a simple demo: an ANN model that classifies the Iris dataset.