In [ ]:
%%bash
pip install faiss-cpu
Collecting faiss-cpu
  Downloading faiss_cpu-1.7.3-cp310-cp310-macosx_10_9_x86_64.whl (5.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.6/5.6 MB 6.7 MB/s eta 0:00:00
Installing collected packages: faiss-cpu
Successfully installed faiss-cpu-1.7.3

Faiss is a library for similarity search and clustering of high-dimensional vectors, and it supports GPU acceleration. To make Faiss use a GPU, take the following steps:

  1. Make sure CUDA (and, optionally, cuDNN) is installed on your machine. The CUDA versions supported by Faiss are listed in the Faiss documentation.

  2. Install the GPU build of Faiss. You can install it with pip by running:

    pip install faiss-gpu
    
  3. Tell Faiss to use the GPU in your code by choosing the device when you create the index. For example, to build a flat L2 index directly on the default GPU (device 0):

    import faiss
    
    # dimensionality of the vector space
    d = 128
    
    # allocate GPU resources and build a flat L2 index on GPU 0
    res = faiss.StandardGpuResources()
    index = faiss.GpuIndexFlatL2(res, d)
    

    If you have multiple GPUs, you can also build the index on the CPU and copy it to a specific device. For example, to use the second GPU (device 1):

    index = faiss.IndexFlatL2(d)
    res = faiss.StandardGpuResources()
    index = faiss.index_cpu_to_gpu(res, 1, index)
    

    Here faiss.index_cpu_to_gpu() clones the CPU index onto GPU device 1 (GPU devices are numbered from 0).

Note that the GPU build of Faiss requires all input vectors to be float32; if your vectors have another dtype, convert them first (e.g. with astype('float32')). The vectors you add are held in GPU memory so that Faiss can run the search on the device; as usual, add them with index.add().
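For reference, the search that IndexFlatL2 performs is an exact brute-force L2 scan. The NumPy sketch below reproduces that computation (an illustration of the algorithm only — it is not the Faiss API; real code would call index.search()):

```python
import numpy as np

def flat_l2_search(xb, xq, k):
    """Brute-force L2 nearest-neighbor search, as IndexFlatL2 computes it.

    xb: database vectors (n, d); xq: query vectors (m, d).
    Returns (distances, indices) of the k nearest database vectors per query.
    """
    xb = xb.astype('float32')  # Faiss likewise requires float32 inputs
    xq = xq.astype('float32')
    # Squared L2 distance between every query and every database vector.
    d2 = ((xq[:, None, :] - xb[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d2, axis=1)[:, :k]
    return np.take_along_axis(d2, idx, axis=1), idx

rng = np.random.default_rng(0)
xb = rng.random((1000, 16))
xq = xb[:2] + 0.001  # queries sitting right next to database rows 0 and 1
D, I = flat_l2_search(xb, xq, k=3)
print(I[:, 0])  # the nearest neighbors should be rows 0 and 1
```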

In [ ]:
%%bash
pip install --upgrade faiss-gpu

Installing the CUDA driver on a Mac involves the following steps (note that NVIDIA dropped macOS support after CUDA 10.2, so this only applies to older Intel Macs with NVIDIA GPUs):

  1. First, install Xcode and the CUDA Toolkit. If Xcode is already installed, you can install the CUDA Toolkit from the terminal with:

    brew install --cask cuda
    
  2. After the installation finishes, add the CUDA environment variables to your ~/.bash_profile. You can open the file with:

    open ~/.bash_profile
    

    Then append the following lines at the end of the file:

    export PATH="/usr/local/cuda/bin:$PATH"
    export DYLD_LIBRARY_PATH="/usr/local/cuda/lib:$DYLD_LIBRARY_PATH"
    
  3. Next, install cuDNN (the CUDA Deep Neural Network library). Download the cuDNN archive from NVIDIA's website, then extract the library files into /usr/local/cuda:

    tar -xvf cudnn-11.4-macos-arm64-v8.2.4.15.tgz
    sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
    sudo cp cuda/lib/libcudnn* /usr/local/cuda/lib
    sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib/libcudnn*
    
  4. Finally, restart your terminal so the environment variables take effect. You can verify the installation with:

    nvcc -V
    

    If CUDA is installed correctly, this prints the CUDA version and related information.

Note that the AMD Radeon Pro 5500M is an AMD GPU, while CUDA is NVIDIA technology, so to use CUDA you would need a Mac with an NVIDIA GPU. For compute-intensive work such as deep learning on an AMD GPU, consider an alternative stack such as ROCm.
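The `nvcc -V` check above can also be scripted from Python by probing the PATH (a small sketch; it only tells you whether the CUDA compiler is visible, not whether a usable GPU is present):

```python
import shutil
import subprocess

# Locate the CUDA compiler on PATH, mirroring the `nvcc -V` check above.
nvcc_path = shutil.which('nvcc')
if nvcc_path is None:
    print('nvcc not found: the CUDA toolkit is not on PATH')
else:
    # Print the last line of the version banner, which names the CUDA release.
    banner = subprocess.run([nvcc_path, '-V'], capture_output=True, text=True)
    print(banner.stdout.strip().splitlines()[-1])
```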

In [ ]:
%%bash
HOMEBREW_NO_AUTO_UPDATE=1 brew install --cask cuda

ROCm (Radeon Open Compute) is an open-source software platform developed by AMD that brings high-performance computing and deep-learning capabilities to AMD GPUs. ROCm supports several programming languages and models, including OpenMP, OpenCL, and HIP (Heterogeneous-compute Interface for Portability), as well as frameworks such as TensorFlow and PyTorch, covering a wide range of compute-intensive applications.

ROCm runs on Linux (with partial Windows support) and targets AMD's Radeon GPUs and server-class accelerators such as Radeon Instinct. It also supports some third-party hardware, such as IBM's PowerPC platform and Cavium's ThunderX2.

Beyond GPU-accelerated computation, ROCm supports cooperative CPU/GPU computing, which can raise overall system throughput. It also provides deep-learning-specific tools and libraries such as MIOpen and RCCL.

In short, ROCm is a capable open-source platform that gives AMD GPU users a rich set of high-performance computing and deep-learning tools.

Faiss is a high-performance similarity-search library with optional GPU acceleration, used in many machine-learning and deep-learning tasks. In principle Faiss can be accelerated through ROCm to improve search throughput. (Note, however, that the prebuilt faiss-gpu packages on PyPI target CUDA, and ROCm itself does not support macOS, so the steps below are really a Linux recipe.)

The basic steps for using ROCm with Faiss are:

  1. First, install ROCm. You can download and install it from the official website.

  2. After installation, set a few environment variables by adding the following lines to your ~/.bash_profile:

    export PATH=$PATH:/opt/rocm/bin
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/lib:/opt/rocm/hip/lib
    export HIP_PLATFORM=hcc
    export HCC_HOME=/opt/rocm/hcc
    
  3. Then install Faiss. You can install it from the terminal with:

    pip install faiss-gpu
    
  4. Once it is installed, you can check that Faiss works with the following example code:

import numpy as np
import faiss

# Generate some random data
d = 64       # vector dimensionality
nb = 100000  # number of database vectors
xb = np.random.random((nb, d)).astype('float32')

# Build the index
index = faiss.IndexFlatL2(d)
print(index.is_trained)
index.add(xb)

# Search
k = 4
xq = np.random.random((1, d)).astype('float32')
D, I = index.search(xq, k)
print(I)
print(D)

If Faiss is working correctly, it will print the search results above.

Note that accelerating Faiss with ROCm has hardware requirements, such as an AMD GPU and a CPU supported by ROCm. If your hardware does not meet them, you may be unable to use ROCm with Faiss, or search performance may suffer.


In [ ]:
%%bash
pip install langchain
In [ ]:
%%bash
wget -r -A .html -P rtdocs https://langchain.readthedocs.io/en/latest/
In [ ]:
from langchain.document_loaders import ReadTheDocsLoader
ReadTheDocsLoader('rtdocs', features='lxml').load()
Out[ ]:
[Document(page_content='.rst\n.pdf\nWelcome to LangChain\n Contents \nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\nWelcome to LangChain#\nLangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\nGetting Started#\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\nGetting Started Documentation\nModules#\nThere are several main modules that LangChain provides support for.\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\nThese modules are, in increasing order of complexity:\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). 
LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nUse Cases#\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nCode Understanding: If you want to understand how to use LLMs to query source code from github, you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nReference Docs#\nAll of LangChain’s reference documentation, in one place. 
Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\nReference Documentation\nLangChain Ecosystem#\nGuides for how other companies/products can be used with LangChain\nLangChain Ecosystem\nAdditional Resources#\nAdditional collection of resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\nnext\nQuickstart Guide\n Contents\n  \nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\nBy Harrison Chase\n    \n      © Copyright 2023, Harrison Chase.\n      \n  Last updated on Apr 14, 2023.\n  ', metadata={'source': 'rtdocs/langchain.readthedocs.io/en/latest/index.html'})]
In [ ]:
%%bash
rm -rf rtdocs

TODO:

  • Localize the prompt templates into Chinese
  • Model revelation, self-prompting, and self-cognition?

Introduction

Have you ever encountered a problem when using ChatGPT to search for the latest information? The current language model of ChatGPT (gpt-3.5-turbo-0301) was trained on data up until September 2021, so it may not be able to answer questions about the latest information accurately.

In this article, we will explain how to create a chatbot that can use chain of thought to respond, by teaching ChatGPT new knowledge.

Preparing and importing training data

First, clone a repository as training data.

Next, import the repository files as text documents with the following code (the OpenAI API key is read from the OPENAI_API_KEY environment variable).

In [ ]:
import os
import pickle
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores.faiss import FAISS

def get_docs(dir_name):
    # (1) Import a series of documents.
    loader = DirectoryLoader(dir_name, loader_cls=TextLoader, silent_errors=True)
    raw_documents = loader.load()
    # (2) Split them into small chunks.
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=800,
        chunk_overlap=200,
    )
    return text_splitter.split_documents(raw_documents)

def ingest_docs(dir_name):
    documents = get_docs(dir_name)
    # (3) Create embeddings for each document (using text-embedding-ada-002).
    embeddings = OpenAIEmbeddings()
    return FAISS.from_documents(documents, embeddings)

vectorstore = ingest_docs('_posts/ultimate-facts')
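The splitter above cuts every document into chunks of roughly chunk_size=800 characters, with chunk_overlap=200 characters shared between consecutive chunks so that text straddling a boundary still appears intact in at least one chunk. A minimal sliding-window sketch of the idea (a simplification — RecursiveCharacterTextSplitter additionally prefers to break at paragraph and sentence boundaries):

```python
def sliding_chunks(text: str, chunk_size: int = 800, chunk_overlap: int = 200):
    """Split text into fixed-size chunks where consecutive chunks overlap."""
    step = chunk_size - chunk_overlap
    # Stop once the remaining tail is fully covered by the previous chunk.
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]

chunks = sliding_chunks('x' * 2000)
print([len(c) for c in chunks])  # → [800, 800, 800]
```

Each chunk is later embedded on its own, so the overlap is what keeps boundary-spanning context retrievable.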

In [ ]:
import os
from EdgeGPT import Chatbot as Bing, ConversationStyle

bing = Bing(cookiePath = os.path.expanduser('~/.config/EdgeGPT/cookies.json'))

async def ask(prompt):
    res = (await bing.ask(
        prompt = prompt,
        conversation_style = ConversationStyle.balanced,
    ))['item']['messages'][1]

    print(res['text'])
    print('\n---\n')
    print(res['adaptiveCards'][0]['body'][0]['text'])
In [ ]:
await ask('''
text-embedding-ada-002 是什么?
''')
text-embedding-ada-002 is a new embedding model from OpenAI. It replaces five separate models for text search, text similarity, and code search, outperforms our previously most capable model, Davinci, on most tasks, and costs 99.8% less[^1^]. You can obtain an embedding by sending a text string to the embeddings API endpoint together with the ID of an embedding model (e.g. text-embedding-ada-002)[^2^].

---

[1]: https://openai.com/blog/new-and-improved-embedding-model/ "New and improved embedding model - openai.com"
[2]: https://platform.openai.com/docs/guides/embeddings "Embeddings - OpenAI API"

text-embedding-ada-002 is a new embedding model from OpenAI. It replaces five separate models for text search, text similarity, and code search, outperforms our previously most capable model, Davinci, on most tasks, and costs 99.8% less[^1^][1]. You can obtain an embedding by sending a text string to the embeddings API endpoint together with the ID of an embedding model (e.g. text-embedding-ada-002)[^2^][2].


Creating a chatbot

Now, we will create a simple chatbot using the LLM chain.

In [ ]:
from langchain.chains.llm import LLMChain
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain
from langchain.vectorstores.base import VectorStore
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

# Callback function to stream answers to stdout.
manager = CallbackManager([StreamingStdOutCallbackHandler()])

streaming_llm = ChatOpenAI(streaming=True, callback_manager=manager, verbose=True, temperature=0)
question_gen_llm = ChatOpenAI(temperature=0, verbose=True, callback_manager=manager)
# Prompt to generate independent questions by incorporating chat history and a new question.
question_generator = LLMChain(llm=question_gen_llm, prompt=CONDENSE_QUESTION_PROMPT)
# Pass in documents and a standalone prompt to answer questions.
doc_chain = load_qa_chain(streaming_llm, chain_type='stuff', prompt=QA_PROMPT)
# Generate prompts from embedding model.
qa = ConversationalRetrievalChain(retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)
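Under the hood, chain_type='stuff' simply concatenates ("stuffs") the retrieved documents and the standalone question into a single prompt. A rough sketch of that assembly (the template text here is illustrative, not the exact wording of LangChain's QA_PROMPT):

```python
def stuff_prompt(docs: list[str], question: str) -> str:
    """Assemble one QA prompt from retrieved context documents."""
    context = '\n\n'.join(docs)
    return (
        'Use the following pieces of context to answer the question at the end.\n'
        "If you don't know the answer, just say that you don't know.\n\n"
        f'{context}\n\n'
        f'Question: {question}\n'
        'Helpful Answer:'
    )

prompt = stuff_prompt(['Remix is a full stack web framework.'], 'What is Remix?')
print(prompt)
```

Because every retrieved chunk is pasted in verbatim, the "stuff" strategy only works while the chunks fit in the model's context window; for larger contexts LangChain also offers the map_reduce and refine chain types.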

The prompt handed to the ChatGPT API is assembled in the following steps.

In [ ]:
question = 'What makes Remix different from existing frameworks? Please list in bullet points in English.'
qa({'question': question, 'chat_history': []})
The given context does not provide information about Remix or any existing frameworks, so it is not possible to answer this question.
Out[ ]:
{'question': 'What makes Remix different from existing frameworks? Please list in bullet points in English.',
 'chat_history': [],
 'answer': 'The given context does not provide information about Remix or any existing frameworks, so it is not possible to answer this question.'}
In [ ]:
question = '你知道什么?'
qa({'question': question, 'chat_history': []})
I don't know. The context provided is a collection of excerpts from different sources and it is not clear what specific information is being referred to.
Out[ ]:
{'question': '你知道什么?',
 'chat_history': [],
 'answer': "I don't know. The context provided is a collection of excerpts from different sources and it is not clear what specific information is being referred to."}
In [ ]:
question = '你知道什么?'
qa({'question': question, 'chat_history': []})
I am an AI language model and I only have access to the information provided in the context above. From the context, it discusses topics such as the nature of science, psychology, philosophy, and religion. It also touches on concepts such as truth, reality, consciousness, and perception. However, it is difficult to provide a specific answer to your question because it is very broad. Please provide more context or a specific question for me to answer.
Out[ ]:
{'question': '你知道什么?',
 'chat_history': [],
 'answer': 'I am an AI language model and I only have access to the information provided in the context above. From the context, it discusses topics such as the nature of science, psychology, philosophy, and religion. It also touches on concepts such as truth, reality, consciousness, and perception. However, it is difficult to provide a specific answer to your question because it is very broad. Please provide more context or a specific question for me to answer.'}

Ask a question

How is Remix different from existing frameworks?
Please list in bullet points in English.

The following bullet points will be output as an answer summarizing the context.

In [ ]:
# Get context related to the question from the embedding model
for context in vectorstore.similarity_search(question):
    print(f'{context}\n')
page_content='> 我们刚刚知道自然科学借以掌握质的方法––形成量的概念的方法。我们必须提出的问题是,这种方法是不是也能够适用于主观的意识的质。按照我们前面所说,为了使这种方法能够加以运用,必须有与这些质充分确定地、唯一地联系着的空间变化。如果情况真的如此,那么这个问题就可以通过空间–时间的重合方法来解决,因而**测量**便是可能的。但是,这种重合的方法本质上就是进行物理的观察,而就内省法来说,却不存在物理的观察这种事情。由此立刻就可以得出结论:心理学沿着内省的途径决不可能达到知识的理想。因此,它必须尽量使用物理的观察方法来达到它的目的。但这是不是可能的呢?是不是有依存于意识的质的空间变化,就像例如在光学中干涉带的宽度依存于颜色,在电学中磁铁的偏转度依存于磁场的强度那样呢?\n> 现在我们知道,事实上应当承认在主观的质和推断出来的客观世界之间有一种确切规定的、一义的配列关系。大量的经验材料告诉我们,我们可以发现,至少必须假设与所有经验唯一地联系着的“物理的”过程的存在。没有什么意识的质不可能受到作用于身体的力的影响。的确,我们甚至能够用一种简单的物理方法,例如吸进一种气体,就把意识全部消除掉。我们的行动与我们的意志经验相联系,幻觉与身体的疲惫相联系,抑郁症的发作与消化的紊乱相联系。为了研究这类相互联系,心的理论必须抛弃纯粹内省的方法而成为**生理的**心理学。只有这个学科才能在理论上达到对心理的东西的完全的知识。借助于这样一种心理学,我们就可以用概念和所与的主观的质相配列,正如我们能够用概念与推论出来的客观的质相配列一样。这样,主观的质就像客观的质一样成为可知的了。' metadata={'source': '_posts/ultimate-facts/Neuroscience.md'}

page_content='真理、真实;神造真实、人造真实;真实,想象;记忆,拟构。\n如果哲学更像「真理」,那么各类科学就更像「真实」。如果物理学更像「真理」,那么化学就更像「真实」。如果化学更像「真理」,那么生物学、生理学就更像「真实」。如果生理学更像「真理」,那么脑科学、神经科学就更像「真实」。\n如果理科更像「神造真实」,那么工科就更像「人造真实」。如果生理学更像「神造真实」,那么医学、药学就更像「人造真实」。\n\n---\n\n> 我只是一个碳族生物;一个土生土长的地球人¹。贵主耶稣是灵族人吧。而木星上的风暴²可以拥有怎样的生命和意识呢?贵主耶稣在木星上能否与木星人一起生活和交往呢?\n\n> ¹ 地球人 => 《费曼讲座:宇称不守恒定律和如何与外星人交流》\n\n> ² 风暴 =>\n> 当那一天,到了傍晚,耶稣对他们说:『我们渡到那边去吧。』他们就离开群众,照他在船上的情况把他带走;还有别的船也跟他在一起。当下起了**大暴风**,波浪泼进船内,甚至船简直满了!耶稣竟在船尾上靠着枕头睡觉呢;门徒就叫醒了他,对他说:『老师,我们丧命,你不在意么?』耶稣醒起来,斥责³那风,向海说:『不要作声!噤默罢!』那风不狂吹,便大大平静了。耶稣对他们说:『为什么这么胆怯呢?怎么没有信心呢?』他们就大起了敬畏的心,直彼此说:『这个人到底是谁?连风和海也听从他!』\n (马可福音 4:35-41 吕振中)\n\n> ³ 斥责 => 『ワンパンマン』:サイタマ?キング?\n\n↓↓--------------------继续修订--------------------↓↓\n\n圣经信仰之神经心理学实证纲领\n父的自我信息,是指,对于圣灵的表征。\n纯粹的圣经,含于,父的自我信息。\n从「纯粹的基督徒」到「超基督徒」「超级赌徒」\n\n吾否认圣经中上帝的名,因为那是人们创造的。' metadata={'source': '_posts/ultimate-facts/终极真实.md'}

page_content='> ³ 斥责 => 『ワンパンマン』:サイタマ?キング?\n\n↓↓--------------------继续修订--------------------↓↓\n\n圣经信仰之神经心理学实证纲领\n父的自我信息,是指,对于圣灵的表征。\n纯粹的圣经,含于,父的自我信息。\n从「纯粹的基督徒」到「超基督徒」「超级赌徒」\n\n吾否认圣经中上帝的名,因为那是人们创造的。\n\n超越神论,不可知论;信仰;宁死不屈,抗争到底\n对于神圣生命的信心?或,亲密关系?\n坚贞,「甘愿承担自己的罪罚」是《古兰经》的价值所在。\n真诚、勇敢、坚贞,耶稣的「甘愿承担」是《圣经》的价值所在。\n\n吾,若不是因为怯懦,又怎么会祷告呢?\n所以,吾,应该要,放弃,那种、对于其他的心灵的畏惧、所联结着的祷告。以耶稣为榜样。\n人子要经受三日地狱之火的洗。罪全部被烧尽了后,第三日复活。\n我所爱慕的必定是父所喜爱的,因为父从始至终都在吸引我、塑造我的爱慕。\n我所爱慕的若是父所不喜爱的,父必定会改变我。所以,我总是晓得父的喜爱。\n人子,与父和好,与父为友,爱父并顺从祂。与父同在,就有勇气。与父同行,就有希望。\n子永远与父同在,从未分离。\n「吾要成为超人。」\n「在吾的生活中显明父的荣耀。」\n祷告,是,对于子灵的表征。\n\n感,分为,虚感、实感。\n虚感,分为,信(?)、思感、愉快感、位置感。\n实感,分为,色感、声感、香感、味感、触感、缩紧感、疼痛感、瘙痒感、冷热感。\n\n体,是指,广延。\n\n感、体,平行地,变化。\n感、体,分割的,平行性原理\n感的统合,预示着,体的核心。\n体的核心:在感的集合中占比最大的体。\n\n信,是一种,感。(联结?极深的记忆?)\n灵,是指,具有自我独特视角的、体。 => “我是谁?”\n《圣经》说:信、灵,平行地,变化。\n在苦难中持守坚忍为何能增加信心呢?' metadata={'source': '_posts/ultimate-facts/终极真实.md'}

page_content='问题,是指,对于某次偶然变化的疑问。\n解答,是指,对于某种必然变化的概括、对于某种偶然变化的适应措施。\n\n技术,是指,问题概括及其解法。\n程序,是指,数据结构及其算法。\n模型,是指,对于拟实的技术。\n建模,是指,对于拟实的计划。\n解模,是指,对于拟实的实施。\n软件模型,是指,对于拟实的程序。\n软件建模,是指,对于拟实的编程。\n软件解模,是指,对于拟实的进程。\n\n模拟,分为,拟实、拟虚。\n来原,是指,与模型对应的事实。\n\n当即行动,增强,对于某种偶然变化的适应力。\n但,人会拖延、不愿儆醒\n\n独立、与、交通,对于联结的交通,更不朽的存在形式\n更不朽的心思、身体,永远不朽的平安\n\n兴趣、快乐、生活情趣;佛学、哲学,作为,一种思维训练\n\n弊害、痛苦,错误、误信,有限的价值、终会朽坏;佛学、消极的哲学,作为,一种信仰;\n忽视、漠视,无私、无我、虚空、无恥、不惭\n去分别,就是,注视;不去分别,就是,漠视;\n漠视伤害,导致着,忘记伤害\n走向虚空,就是,放弃羞耻、光荣、尊贵、荣耀\n佛学的惊奇性质的信心,导致着,漠视。\n\n---\n\n> 因为依顺着上帝而有的忧愁能生出不后悔的忏悔来、以至于得救;而世俗的忧愁却能生出死亡。\n(哥林多后书 7:10 吕振中)\n\n「金刚经」的邪灵,完全地,杀死了,吾的心灵。\n真常唯心系,曾经,在吾的心灵中,孕育,却流产了。\n\n忘罪,忘无明。\n\n积极的态度;佛教(真常唯心系)唯一的「用处」就是:让人不再惧怕死亡、平安地享受死亡\n基督教,比,真常唯心系,更加清晰。\n已成、与、未成;易信、与、难信;注意频次,信心,快乐、爱,恐惧、严肃,惊奇、敬畏;对于实感的表征之信,分别由惊奇(客观)、敬畏(主观)而来的信心\n某些次表征的联结;「信」,意味着、某种与真实的关系,是一种、「成」' metadata={'source': '_posts/ultimate-facts/终极真实.md'}

Remix's job is to cross the center of the stack and then get out of your way. We avoid as many "Remixisms" as possible and instead make it easier to use the standard APIs the web already has.

This one is more for us. We've been educators for the 5 years before Remix. Our tagline is Build Better Websites. We also think of it with a little extra on the end: Build Better Websites, Sometimes with Remix. If you get good at Remix, you will accidentally get good at web development in general.

Remix's APIs make it convenient to use the fundamental Browser/HTTP/JavaScript, but those technologies are not hidden from you.

Additionally, if Remix doesn't have an adapter for your server already, you can look at the source of one of the adapters and build your own.

## Server Framework
If you're familiar with server-side MVC web frameworks like Rails and Laravel, Remix is the View and Controller, but it leaves the Model up to you. There are a lot of great databases, ORMs, mailers, etc. in the JavaScript ecosystem to fill that space. Remix also has helpers around the Fetch API for cookie and session management.

Instead of having a split between View and Controller, Remix Route modules take on both responsibilities.

Most server-side frameworks are "model focused". A controller manages multiple URLs for a single model.

## Welcome to Remix!
We are happy you're here!

Remix is a full stack web framework that lets you focus on the user interface and work back through web fundamentals to deliver a fast, slick, and resilient user experience that deploys to any Node.js server and even non-Node.js environments at the edge like Cloudflare Workers.

Want to know more? Read the Technical Explanation of Remix

This repository contains the Remix source code. This repo is a work in progress, so we appreciate your patience as we figure things out.

## Documentation
For documentation about Remix, please

Final Answers:

  • Remix aims to make it easy to use standard APIs.
  • You can learn about web development in general with Remix.
  • Remix is a framework that plays the role of both the View and Controller.
  • Remix leaves the model to the user.
  • Remix provides helpers for the Fetch API.
  • Remix deploys to any Node.js server and even to non-Node.js edge environments such as Cloudflare Workers.

Notes on API usage:

As of March 10, 2023, the web version of ChatGPT is still opt-out (conversations are used for training unless you apply to opt out), whereas the ChatGPT API is opt-in (data is not used for training unless you explicitly opt in), and OpenAI has stated that API data will not be used to improve the models (it is stored for 30 days for abuse monitoring). Because calling the API reduces the risk of leakage to third parties other than OpenAI, and the data is not used for retraining, the bar for using confidential information with ChatGPT is effectively lower via the API. The API is also relatively inexpensive: gpt-3.5-turbo costs $0.002 per 1,000 tokens, and the embedding model (text-embedding-ada-002) costs $0.0004 per 1,000 tokens. Creating embeddings for a large number of files can still cost more than expected, however, so if the token count cannot be predicted in advance, it is a good idea to estimate the price first and decide whether to run the job, as follows:

In [ ]:
import tiktoken

encoding = tiktoken.encoding_for_model('text-embedding-ada-002')
text = ''
for doc in get_docs('_posts'):
    text += doc.page_content.replace(' ', ' ')
token_count = len(encoding.encode(text, allowed_special='all'))
# text-embedding-ada-002 costs $0.0004 per 1,000 tokens = $0.0000004 per token
print(f'Estimated price: {token_count*0.0000004} USD')
Estimated price: 0.3964988 USD
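The same per-token arithmetic generalizes to a small helper. The rates below are the ones quoted above (USD per 1,000 tokens, as of this writing) and should be re-checked against current pricing; the token count is purely illustrative:

```python
# Prices in USD per 1,000 tokens, as quoted in this article; check
# OpenAI's current pricing page before relying on them.
PRICE_PER_1K_TOKENS = {
    'gpt-3.5-turbo': 0.002,
    'text-embedding-ada-002': 0.0004,
}

def estimate_cost(token_count: int, model: str) -> float:
    """Estimated USD cost of processing token_count tokens with model."""
    return token_count * PRICE_PER_1K_TOKENS[model] / 1000

print(estimate_cost(991_247, 'text-embedding-ada-002'))  # ≈ 0.40 USD
```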

Summary:

Because ChatGPT is trained on past data, it cannot answer questions about the latest information or about information that is not publicly available on the internet. Here, by mixing context related to the question into the prompt, we were able to answer questions about recent data and about files saved locally.

If this article has been even a little helpful to you, I would be delighted. If you have any questions or comments, please feel free to contact me.

Comments

2023-04-04