Investigate use of context + observedBits for performance optimization #1018
Comments
32 bits are not enough - I am not sure we could create a really good hash function to serve as a bloom filter. I've actually got another idea for optimization, which works quite well in my case - react-redux-unbranch. The idea is simple - every...
If we're only concerned with top-level state slices (at least as an initial approach), then 31 bits is way more than enough. I'd say the majority of Redux stores I've seen have <15 top-level state slices, and even if there's more than 31, then having more than one key map to the same bit index certainly isn't a problem.
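To spell out why collisions are harmless (my illustration, not anything from the thread): a shared bit can only produce a false positive, i.e. an unnecessary `mapState` run, never a missed update.

```js
// Hypothetical: suppose "users" and "widgets" both hash to bit 5.
const USERS_BIT = 1 << 5
const WIDGETS_BIT = 1 << 5 // collision

// When "users" changes, the provider reports bit 5, so a consumer observing
// WIDGETS_BIT wakes up too. It re-runs its mapState, sees nothing it cares
// about changed, and bails out - wasted work, but never a missed update.
console.log((USERS_BIT & WIDGETS_BIT) !== 0) // true - they share a bit
```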
Just tracking top-level slices is a much easier task, especially in terms of the amount of information one has to store (just the first part of a key is used), but it would help a lot in 99% of cases. Are you considering having two variants?
No, I wouldn't want to try to maintain multiple variants. As you've said, it's theoretically possible to get really fancy and track deeply nested state accesses, but limiting ourselves to top-level state slices would be a reasonable first step. It would probably be opt-in at first, and if it works well enough, we could consider making it the default behavior later. FWIW, I recently ran a Twitter poll asking whether people use plain JS objects, Immutable.js, or something else as their Redux state tree: https://twitter.com/acemarke/status/1036729164538281984. Discounting the 16% who answered "See Answers", 82% of respondents are using POJOs, and I'd bet that most of the people who are using Immutable.js still have a POJO as the root of their state tree. So, even if our solution only worked for a top-level state tree POJO, the vast majority of users could benefit from it.
Sounds good to me. Looks like there are 3 tasks as a result:
Sounds reasonable. I think our two v6 branches (#995 and #1000) are sufficiently far along that we could try adding this functionality to them to see how well it works. Perhaps there's some way we could set it up in "dummy/no-op mode" to begin with - calculate the changes, and somehow log/collect results on whether they would have been "correct" or not.
Which one could I pick for a spike? (they both use Context, and everything else actually does not matter for this case)
Before changedBits, I was getting 14 fps while v5 was at 16. After, I got 24 fps, with v5 still at 16. It's a huge difference. P.S. I don't think we need to change the context API to accept a prop. We can just accept a context as a prop on our Provider, and as a connect option in connect. This optimization is going to be a rare use case - most users won't need it.
But you can see the difference in the react-redux-benchmarks repo for yourself, don't take my word for it |
@theKashey : go with #1000 for now, since Greg's already done some fiddling with observedBits there.
FYI https://github.com/cellog/react-redux/blob/observedBits/src/utils/observedBits.js
This version is not unit tested, and has some known bugs. Basically, it allows you to pass in a state shape, and it spits out an object that can return a context to use, as well as utilities to pass in a key to the state map and get a bitmap back.
P.S. I just learned that in a component without sCU, observedBits is pretty useless. If sCU simply returns false, however, it limits re-renders as expected. So we will want to use PureComponent, or something similar, to limit unnecessary re-renders when props don't change.
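To give a sense of the shape of the utility linked above, here's a heavily simplified sketch of the same idea (the real file differs and also handles array shapes; `makeStateMapper` and its internals are names I made up):

```js
import React from "react"

// Simplified sketch: given a state shape, build a context whose
// calculateChangedBits compares top-level keys, plus a helper that turns a
// key into the bit a consumer should observe. Not the real API.
function makeStateMapper(stateShape) {
  const keys = Object.keys(stateShape)
  const bitForKey = key => 1 << keys.indexOf(key) % 31

  const calculateChangedBits = (prev, next) => {
    let changed = 0
    for (const key of keys) {
      if (prev[key] !== next[key]) changed |= bitForKey(key)
    }
    return changed
  }

  return {
    context: () => React.createContext(null, calculateChangedBits),
    getBits: key => bitForKey(key)
  }
}
```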
@cellog : |
I think I was unclear. I constructed this test:

```jsx
import React from "react"
import * as rtl from "react-testing-library"
// `bits` refers to the observedBits utility linked above
// (path is approximate - cellog/react-redux, src/utils/observedBits.js).
import * as bits from "../src/utils/observedBits"

test("context", () => {
  // Map 40 array indices onto the 31 available observedBits slots.
  const mapper = bits.makeArrayMapper(40)
  const FancyContext = mapper.context()
  const updates = []

  class Li extends React.Component {
    // Without this sCU returning false, every <Li> re-renders on each
    // setState, and the observedBits filtering never gets a chance to help.
    shouldComponentUpdate() {
      return false
    }
    render() {
      const { i } = this.props
      return (
        <FancyContext.Consumer unstable_observedBits={mapper.getBits(i)}>
          {state => {
            updates.push(i)
            return <li>{state[i]}</li>
          }}
        </FancyContext.Consumer>
      )
    }
  }

  class Updates extends React.Component {
    constructor(props) {
      super(props)
      this.state = {
        info: Array(40)
          .fill(1)
          .map((_, i) => i + 1)
      }
      this.mapper = (thing, i) => <Li i={i} key={i} />
    }
    render() {
      return (
        <FancyContext.Provider value={this.state.info}>
          <ul>{this.state.info.map(this.mapper)}</ul>
          <button
            onClick={() =>
              this.setState(state => {
                const info = [...state.info]
                info[3] = "hji"
                info[27] = "wow"
                return { info }
              })
            }
          >
            set
          </button>
        </FancyContext.Provider>
      )
    }
  }

  const tester = rtl.render(<Updates />)
  // All 40 consumers render once on mount.
  expect(updates.length).toBe(40)
  rtl.fireEvent.click(tester.getByText("set"))
  // 43 = 40 mount renders + changed items 3 and 27 + item 34, which
  // (assuming an index % 31 mapping) shares bit 3 with item 3.
  expect(updates.length).toBe(43)
})
```

But until I added that sCU that returns false, I was getting 80 updates. So it's not enough to use observedBits - we also have to prevent re-renders the normal way.
Yeah, I think that's still basically what I was saying.
@markerikson - PR drafted.
Looked into this in #1021. It didn't seem to make a huge difference for now.
I'd like to look at this further after 6.0 goes final, so I'll reopen it as a marker.
A bit of further discussion: https://spectrum.chat/thread/3d6d5795-227a-49d4-b275-316db611a002.
@markerikson Looks like the RFC isn't going anywhere soon :/ Could we try an implementation of the "double context" approach? And who knows, if it works well, maybe use it in production 😃
@richie-south : I'm juggling several other priorities atm and don't have time to try that out right now, but please feel free to do so yourself and report back results! Note that it's not the "double-context" usage itself that would provide any real benefit - it's the potential use of `calculateChangedBits` and `observedBits` that would. Per earlier discussion, the immediate issues are:
As much as I'd like to experiment with this someday, I think this is going to be obsolete for the reasons described in #1177 . |
I've written about this in several places recently, so I might as well consolidate the information as an issue. I'll copy and paste from the other stuff I've written as appropriate.
Prior discussions and materials for reference:

- For an explanation of how `calculateChangedBits` and `unstable_observedBits` work, see the post Bitmasks and the new React Context API
- React RFC #60: have `Context.Provider` accept a `calculateChangedBits` prop

Summary and Use Case
In React-Redux v5 (the current version), every instance of a connected component is a separate subscriber to the store. If you have a connected list with 10K connected list items, that's 10,001 separate subscriber callbacks.
Every time an action is dispatched, every subscriber callback is run. Each connected component checks to see if the root state has changed. If it has, that component re-runs the `mapState` function it was given, and checks to see if the values it returned have changed. If any of them have, then it re-renders your real component.

In v6, we're changing it so that there's only one subscriber to the Redux store: the `<Provider>`. When an action is dispatched, it runs `this.setState({ storeState: store.getState() })`, and then the store state is passed down to connected components using the new `React.createContext` API. When React sees the value given to the `Context.Provider` has changed, it will force all of the associated `Context.Consumer` instances to update, and that's when the connected components will now re-run their `mapState` functions.

So, in v5, there's 10,001 subscriber callbacks and 10,001 `mapState` functions executed. In v6, there's only 1 subscriber callback executed, but still 10,001 `mapState` functions, plus the overhead of React re-rendering the component tree to update the context consumers. Based on what we've seen so far, the overhead of React updating is a bit more expensive than it was to run all those subscriber callbacks, but it's close. Also, there's other benefits to letting React handle this part of the work (including hopefully better compatibility with the upcoming async timeslicing stuff).

However... as many people have pointed out, in most apps, for any given action and state update, most of the components don't actually care about the changes. Let me give a different example. Let's say that our root state looks like `{a, b, c, d}`. Doesn't matter what's inside those, but for sake of the argument let's say that each top-level slice holds the data for 2500 items, and a separate connected component for each item.

Now, imagine we dispatch an action that updates, say, `state.a[1234].value`. `state`, `state.a`, and `state.a[1234]` will be updated to new references by our reducers, but `state.b`, `state.c`, and `state.d` are all the same.

That means that only 2500 out of 10K components would have any interest in the changes at all - the ones whose data is in `state.a`. The other 7500 could really just skip re-running their `mapState` functions completely, because the top-level slices their data is in haven't changed.

So, what I imagine is a way for a connected component to say "hey, I only want to re-run my `mapState` function if `state.b` or `state.c` have changed". (Technically, you could do something sorta like this now with some of the lesser-known options to `connect`, I think, but lemme keep explaining.) If we did some magic to turn those state slice names into a bitmask pattern, and `<Provider>` ran a check to calculate a bitmask pattern based on which state slices did change, then we could potentially use that to skip the update process entirely for any components that didn't care about these changes, and things would be a lot faster as a result.

Where the proxy stuff comes in would be some real "magic", where we could maybe "automagically" see which state fields your `mapState` function tries to read. It's theoretically possible, but very very complex with lots of edge cases. If that doesn't work, we'd maybe let you pass an option to `connect` that says `subscribeToStateSlices: ["a", "c"]` or something like that.
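Pulling the summary together, here's a stripped-down sketch of the v6-style data flow with the bitmask idea bolted on. This is illustrative only, not the actual React-Redux source; it hardcodes the `{a, b, c, d}` example state, and the second argument to `React.createContext` plus `unstable_observedBits` are the experimental React APIs discussed here.

```jsx
import React from "react"

// Illustrative sketch only - not the real React-Redux v6 implementation.
// Hardcoded for the {a, b, c, d} example state above.
const keys = ["a", "b", "c", "d"]
const bitForKey = key => 1 << keys.indexOf(key) % 31

// Second argument is the experimental calculateChangedBits API.
const ReactReduxContext = React.createContext(null, (prev, next) => {
  let changed = 0
  for (const key of keys) {
    if (prev.storeState[key] !== next.storeState[key]) changed |= bitForKey(key)
  }
  return changed
})

// The single store subscriber: every dispatch becomes one setState, and the
// changed-bits calculation decides which consumers actually wake up.
class Provider extends React.Component {
  constructor(props) {
    super(props)
    this.state = { storeState: props.store.getState() }
  }
  componentDidMount() {
    this.unsubscribe = this.props.store.subscribe(() =>
      this.setState({ storeState: this.props.store.getState() })
    )
  }
  componentWillUnmount() {
    this.unsubscribe()
  }
  render() {
    return (
      <ReactReduxContext.Provider value={this.state}>
        {this.props.children}
      </ReactReduxContext.Provider>
    )
  }
}

// A component that only cares about slice "a" skips updates to b, c, and d.
const SliceA = () => (
  <ReactReduxContext.Consumer unstable_observedBits={bitForKey("a")}>
    {({ storeState }) => <div>{JSON.stringify(storeState.a)}</div>}
  </ReactReduxContext.Consumer>
)
```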
Potential Implementation Details

Calculating Changed Bits from State
In the RFC, I wrote this hand-waved function:
Here's how I imagine an un-hand-waved implementation would work.
Let's say we've got a Redux store whose top-level state looks like `{a, b, c, d}`. The contents of those state slices are irrelevant - we can assume that the values are being updated immutably, as you're supposed to do with Redux.

A standard shallow equality check loops over the keys of two objects, and for each key, checks to see if the corresponding values in the two objects have changed. If any have changed, it returns false ("these objects are not equal"). I would want to implement a similar function that returns an array of all keys whose values have changed in the newer object (such as `["a", "c"]`).

From there, I would run each key name through a hash function of some sort, which hashes the key string into a single bit. I briefly experimented with using a "minimal perfect hashing" library a while back as an early proof of concept.

So, every time the Redux store updated, the React-Redux `<Provider>` component, as the sole subscriber to the store, would determine which keys had their values changed and generate the `changedBits` bitmask accordingly. It wouldn't be a "deep" comparison of the state trees, but rather a quick and simple "top level" list of changes, at least as a default implementation.

Connected components would by default still subscribe to all updates, but we would provide a way for the end user to indicate that a connected component only cares about updates to certain state slices (hypothetical example: `connect(mapState, null, null, {subscribeToStateSlices: ["a", "d"]})(MyComponent)`).
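Here's a rough sketch of those pieces together. This is my illustration of the description above; the helper names are made up, and the hash is deliberately naive rather than a minimal perfect hash.

```js
// Helper names are hypothetical - an illustration, not a real implementation.

// Shallow-diff two state objects, returning the keys whose values changed.
function getChangedKeys(prevState, nextState) {
  return Object.keys(nextState).filter(key => prevState[key] !== nextState[key])
}

// Hash a key string into one of the 31 usable bit positions. Collisions just
// mean occasional extra mapState runs, never missed updates.
function keyToBitIndex(key) {
  let hash = 0
  for (let i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) | 0
  }
  return Math.abs(hash) % 31
}

// What <Provider> would compute on every store update...
function calculateChangedBits(prevState, nextState) {
  return getChangedKeys(prevState, nextState).reduce(
    (bits, key) => bits | (1 << keyToBitIndex(key)),
    0
  )
}

// ...and what connect() would derive from {subscribeToStateSlices: ["a", "d"]}.
function slicesToObservedBits(slices) {
  return slices.reduce((bits, key) => bits | (1 << keyToBitIndex(key)), 0)
}
```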
.Calculating Component Usage of State Slices
This is the tricky part. In an ideal world, a connected component could wrap up the store state in an ES6 Proxy, call its `mapState` function with that wrapped state, and track which fields had been accessed.
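As a bare-bones illustration of that idea (my sketch, deliberately ignoring the edge cases discussed next):

```js
// Minimal access-tracking sketch (illustration only): record which top-level
// state fields a mapState function reads.
function trackStateAccess(state, mapState, ownProps) {
  const accessed = new Set()
  const tracked = new Proxy(state, {
    get(target, key) {
      accessed.add(key)
      return target[key]
    }
  })
  const props = mapState(tracked, ownProps)
  return { props, accessedKeys: [...accessed] }
}

// Example: this mapState only reads state.a, so accessedKeys is ["a"].
const { accessedKeys } = trackStateAccess(
  { a: 1, b: 2 },
  state => ({ value: state.a })
)
```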
Unfortunately, there's tons of edge cases involved here. @theKashey has been doing a lot of research on this as part of the Remixx project he's working on. Quoting his comment:

Use of calculateChangedBits
I had originally assumed that each `Context.Provider` could accept a `calculateChangedBits` function as a prop. However, I was wrong - you actually pass one function as an argument to `createContext`, like `React.createContext(defaultValue, calculateChangedBits)`. This is a problem, because we're likely to use a singleton-style `ReactReduxContext.Provider/Consumer` pair for all of React-Redux, and that would be created when the module is imported. It would be really helpful if we had the ability to pass it as a prop to each `Context.Provider` instead. That would let us be more dynamic with our change calculations, such as handling an array or an Immutable.js Map as the root state instead of a plain object.

At Dan Abramov's suggestion, I filed React RFC #60: Accept calculateChangedBits as a prop on Context.Providers to discuss making that change. Sebastian Markbage liked the writeup itself, but was hesitant about the actual usage of the bitmask functionality, on the grounds that it might change in the future, and questioned whether it was appropriate for our use case. The discussion hasn't advanced further since then.
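In code, the constraint looks like this. The two-argument `createContext` is the real experimental API; the per-instance prop in the comment is the hypothetical thing RFC #60 asked for.

```js
import React from "react"

// A placeholder top-level diff. The problem: for a singleton context this
// function is fixed at module-import time, before we know the app's state shape.
const diffTopLevelKeys = (prev, next) =>
  Object.keys(next).reduce(
    (bits, key, i) => (prev[key] !== next[key] ? bits | (1 << i % 31) : bits),
    0
  )

// The real (experimental) API: calculateChangedBits is baked in at creation.
const ReactReduxContext = React.createContext(null, diffTopLevelKeys)

// What RFC #60 proposed instead (NOT a real React API today):
//   <ReactReduxContext.Provider value={state} calculateChangedBits={appSpecificFn}>
```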
Personally, I think the bitmask API is exactly what React-Redux needs, and that we're the kind of use case the API was created for.
One potential alternative might be a "double context" approach. In this approach, we might create a unique context provider/consumer pair whenever a React-Redux `<Provider>` is instantiated, use the singleton `ReactReduxContext` instance to pass that unique pair downwards to the connected components, and put the data into that unique context instance. That way, we could pass in a `calculateChangedBits` function to the `createContext` call. It seems hacky, but that was actually something Sebastian suggested as well.
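A sketch of that double-context shape (illustrative only; the actual v6 code doesn't do this, and `calculateChangedBits` as a prop here is part of the hypothetical):

```jsx
import React from "react"

// The singleton context only carries a per-Provider data context downward.
const ReactReduxContext = React.createContext(null)

class Provider extends React.Component {
  constructor(props) {
    super(props)
    // The data context is created at Provider construction time, so it can
    // receive a runtime calculateChangedBits function.
    this.DataContext = React.createContext(null, props.calculateChangedBits)
  }
  render() {
    const { DataContext } = this
    return (
      <ReactReduxContext.Provider value={DataContext}>
        <DataContext.Provider value={this.props.storeState}>
          {this.props.children}
        </DataContext.Provider>
      </ReactReduxContext.Provider>
    )
  }
}

// A connected component reads the unique pair from the singleton context,
// then subscribes to the data context with its observedBits.
const Connected = ({ observedBits, children }) => (
  <ReactReduxContext.Consumer>
    {DataContext => (
      <DataContext.Consumer unstable_observedBits={observedBits}>
        {storeState => children(storeState)}
      </DataContext.Consumer>
    )}
  </ReactReduxContext.Consumer>
)
```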
Potential Improvements

@cellog did some quick changes to our existing stockticker benchmark, which uses an array of entries. I think it was basically `bitIndex = arrayIndex % 31`, or something close to that - something simple and hardcoded as a proof of concept. He then tweaked his v6 branch (#1000) to calculate `observedBits` the same way, and compared it to both v5 and the prior results for that branch. I don't have the specific numbers on hand, but I remember he reported it was noticeably faster than the v5 implementation, whereas the prior results were a bit slower than v5. (Greg, feel free to comment with the actual numbers.)

So, conceptually it makes sense this could be an improvement, and our first test indicates it can indeed be an improvement.
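For reference, the hardcoded proof-of-concept mapping would look something like this (reconstructed from the description above, not Greg's actual code):

```js
// Map each stock entry's array index onto one of the 31 usable bits.
const bitIndexForEntry = arrayIndex => arrayIndex % 31
const observedBitsForEntry = arrayIndex => 1 << bitIndexForEntry(arrayIndex)
```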
Timeline
At this point, I don't think this would be part of a 6.0 release. Instead, we'd probably make sure that 6.0 works correctly and is hopefully equivalent in performance to v5, and release it. Then we could research the bitmasking stuff and put it out in a 6.x release, similar to how React laid the foundation with a rewrite in 16.0 and then actually released `createContext` in 16.3.