Codex Takes Off: Claude Code Projects Multiply by 10
Contents
TL;DR

- What did OpenAI release for Claude Code?
  OpenAI released an official Codex plugin for Claude Code, making it easy to bring Codex into the same workflow for code reviews and other tasks.
- Why use Codex alongside Claude Code?
  Because the content shows that Claude Code tends to excel at planning and creating, while Codex is better at reviewing, executing, and catching problems that Claude Code can let slip through.
- Is the plugin worth using even though it is something extra?
  Yes: Codex can be used at no cost with a free ChatGPT subscription, and it can complement Claude Code on projects by reviewing code and improving the final quality.

Transcript (excerpt)

[...] plugins, you would basically
have to, you know, try to install that marketplace. You can see that I have the OpenAI Codex one right here. And then you can see right here I've got the Codex plugin installed and enabled. So now if I went ahead to do a /codex, you could see all of these different things that I could actually call on, and all of these would be using GPT-5.4 instead of Opus.

So real quick example of what that may look like. Here's a project where I'm setting up just some sort of dashboard for an internal system, and keep in mind, a lot of this is mock data. This is something that I just recently spun up, and right now I'm just working on sort of the flow and the feel, rather than having the data synced in. But anyways, I built this obviously using Opus. So now in this project, if I do /codex, I can see all these different things to run, and right now I want to decide between a review or an adversarial review.

So if I go back over to the GitHub, we can read the difference between the two: a review runs a normal Codex review on your current work, which is the same quality of code review as running a /review inside of Codex directly. So you can use this for reviewing uncommitted changes or comparing branches, and this is a read-only type of skill. Now, the adversarial review is kind of just like a review on steroids. It's steerable, it questions the chosen implementation and design, and it can be used to pressure-test things, look at tradeoffs and failure modes, and ask whether different approaches would be safer or simpler. This is also a read-only command that does not change code. Essentially, these are both just kind of giving you a nice audit.

So I'm going to go ahead and try the adversarial review right here. What you'll notice is that right away it has to get familiarized and acclimated with the environment: it's going to look at the working tree size, and it's going to check the differences between what's staged and what's unstaged. After that, it should come back and ask us how we want to run this review. So it's asking me how we want to run it; I'm just going to go ahead and shoot that off. You can see that it also said that this is a pretty large review, so we'll see how long this takes. So by the way, I'm on Win
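The environment checks described above (working tree size, plus the differences between what's staged and what's unstaged) correspond to ordinary read-only git inspection. Below is a minimal sketch of those same checks, not the plugin's actual implementation; the throwaway repo and file names are only there so the commands have something to inspect:

```shell
#!/bin/sh
# Sketch of the read-only checks a review pass runs before scoping itself.
# Sets up a throwaway repo (illustrative only) for the commands to inspect.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"

printf 'hello\n' > app.txt
git add app.txt
git commit -qm "initial commit"

printf 'staged change\n' >> app.txt
git add app.txt                        # one staged edit
printf 'unstaged change\n' >> app.txt  # plus one unstaged edit on top

# Rough working-tree "size": how many entries have pending changes.
git status --porcelain | wc -l

# What is staged (would go into the next commit) vs. what is not.
git diff --staged --stat
git diff --stat
```

None of these commands modify the repository, which matches the "read-only" framing of both review skills: the audit looks at pending work without touching it.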
Summary
OpenAI has officially released a Codex plugin for Claude Code, making it easier to integrate the two into the same workflow, especially for code review and support on more complex tasks. Although some developers already used the combination, the plugin makes the process simpler and more accessible, not least because Codex can be used at no cost with a free ChatGPT subscription. The author compares coding benchmarks between Opus 4.6 and GPT-5.4 and notes that, although Opus leads slightly in one specific test, GPT-5.4 beats its competitor in most of the others, besides being cheaper. The analysis also gathers user opinions from X and Reddit: Claude Code tends to be stronger at planning and creativity, but can over-engineer solutions, burn through many tokens, and get lost in long runs; Codex, in turn, is seen as better at review, execution, and spotting flaws, although it is more rigid and weaker at planning and creative generation. The conclusion is that the tools complement each other well, and the ideal is to pick each one according to the stage of the project. The text also highlights that installation is simple and that the official documentation includes commands, functions, and extra resources to extend its use.